Using the big-key problems encountered in a real project as the thread, this article walks through the corresponding solutions scenario by scenario.
Through it you can learn the basic concept of big keys, their impact, and concrete ways to deal with them when you meet one, which should help you better control where and how caches are used and thereby improve the stability of your software systems.
The table of contents of this article is as follows:
(Figure 1)
Note: be sure to refresh the historical data first, and only then release the business-logic change, to avoid triggering a cache avalanche. The historical data can be refreshed with a one-off job that copies each field of the old hash key into its own string key:

public String refreshHistoryData(){
try {
String key = "historyKey";
Map<String, String> redisInfoMap= redisUtils.hGetAll(key);
if (redisInfoMap.isEmpty()){
return "查询缓存无数据";
}
for (Map.Entry<String, String> entry : redisInfoMap.entrySet()) {
String redisVal = entry.getValue();
String filedKey = entry.getKey();
String newDataRedisKey = "newDataKey"+filedKey;
redisUtils.set(newDataRedisKey,redisVal);
}
return "success";
}catch (Exception e){
LOG.error("refreshHistoryData 异常:",e);
}
return "failed";
}
【3.3 Converting the storage form of large objects】

Scenario: a complex large object can be split into several key-value pairs. Operate on the pieces with mGet and mSet (or with a pipeline) and assemble the large object only when it has to be returned. The point is to spread the pressure of a single operation across multiple Redis instances, lowering the IO impact on any single Redis node.

Practical experience: take the Order object in our system as an example. The Order has dozens of basic attributes such as order number, amount, time and type, and it also carries the order items (OrderSub), the pre-sale information (PresaleOrder), the invoice information (OrderInvoice), the fulfilment information (OrderPremiseInfo), the order track information (OrderTrackInfo), the detailed fee information (OrderFee), and so on. Each of these order-related pieces can be stored under its own key; after fetching the pieces with mGet, they are assembled into the complete Order object. Only the key pseudocode, implemented with mSet and mGet, is shown below.

Cache key definition:

public enum CacheKeyConstant {
/**
* order base info cache key
*/
ORDER_BASE_INFO("ORDER_BASE_INFO"),
/**
* order items cache key
*/
ORDER_SUB_INFO("ORDER_SUB_INFO"),
/**
* order pre-sale info cache key
*/
ORDER_PRESALE_INFO("ORDER_PRESALE_INFO"),
/**
* order fulfilment info cache key
*/
ORDER_PREMISE_INFO("ORDER_PREMISE_INFO"),
/**
* order invoice info cache key
*/
ORDER_INVOICE_INFO("ORDER_INVOICE_INFO"),
/**
* order track info cache key
*/
ORDER_TRACK_INFO("ORDER_TRACK_INFO"),
/**
* order fee detail info cache key
*/
ORDER_FEE_INFO("ORDER_FEE_INFO"),
;
/**
* key prefix
*/
private String prefix;
/**
* common project prefix
*/
public static final String COMMON_PREFIX = "XXX";
CacheKeyConstant(String prefix){
this.prefix = prefix;
}
public String getPrefix(String subKey) {
if(StringUtil.isNotEmpty(subKey)){
return COMMON_PREFIX + prefix + "_" + subKey;
}
return COMMON_PREFIX + prefix;
}
public String getPrefix() {
return COMMON_PREFIX + prefix;
}
}
Cache storage:

/**
* @description refresh an order into the cache
* @param order the order information
*/
public void refreshOrderToCache(Order order){
if(order == null || order.getOrderId() == null){
return;
}
String orderId = order.getOrderId().toString();
//build the cache entries to store
Map<String,String> cacheOrderMap = new HashMap<>(16);
cacheOrderMap.put(CacheKeyConstant.ORDER_BASE_INFO.getPrefix(orderId), JSON.toJSONString(buildBaseOrderVo(order)));
cacheOrderMap.put(CacheKeyConstant.ORDER_SUB_INFO.getPrefix(orderId), JSON.toJSONString(order.getCustomerOrderSubs()));
cacheOrderMap.put(CacheKeyConstant.ORDER_PRESALE_INFO.getPrefix(orderId), JSON.toJSONString(order.getPresaleOrderData()));
cacheOrderMap.put(CacheKeyConstant.ORDER_INVOICE_INFO.getPrefix(orderId), JSON.toJSONString(order.getOrderInvoice()));
cacheOrderMap.put(CacheKeyConstant.ORDER_TRACK_INFO.getPrefix(orderId), JSON.toJSONString(order.getOrderTrackInfo()));
//fulfilment info (the getter is assumed to mirror the setPremiseInfos used below)
cacheOrderMap.put(CacheKeyConstant.ORDER_PREMISE_INFO.getPrefix(orderId), JSON.toJSONString(order.getPremiseInfos()));
cacheOrderMap.put(CacheKeyConstant.ORDER_FEE_INFO.getPrefix(orderId), JSON.toJSONString(order.getOrderFeeVo()));
superRedisUtils.mSetString(cacheOrderMap);
}
Note: the values returned by the cache read correspond, in order, to the keys that were passed in.

Cache retrieval:

/**
* @description get the cached order data by order id
* @param orderId the order id
* @return Order the assembled order entity
*/
public Order getOrderFromCache(String orderId){
if(StringUtils.isBlank(orderId)){
return null;
}
//build the list of cache keys to query
List<String> queryOrderKey = Arrays.asList(CacheKeyConstant.ORDER_BASE_INFO.getPrefix(orderId),CacheKeyConstant.ORDER_SUB_INFO.getPrefix(orderId),
CacheKeyConstant.ORDER_PRESALE_INFO.getPrefix(orderId),CacheKeyConstant.ORDER_INVOICE_INFO.getPrefix(orderId),CacheKeyConstant.ORDER_TRACK_INFO.getPrefix(orderId),
CacheKeyConstant.ORDER_PREMISE_INFO.getPrefix(orderId),CacheKeyConstant.ORDER_FEE_INFO.getPrefix(orderId));
//query results
List<String> result = redisUtils.mGet(queryOrderKey);
//no results returned
if(CollectionUtils.isEmpty(result)){
return null;
}
String[] resultInfo = result.toArray(new String[0]);
//base info
if(StringUtils.isBlank(resultInfo[0])){
return null;
}
BaseOrderVo baseOrderVo = JSON.parseObject(resultInfo[0],BaseOrderVo.class);
Order order = coverBaseOrderVoToOrder(baseOrderVo);
//order items
if(StringUtils.isNotBlank(resultInfo[1])){
List<OrderSub> orderSubs =JSON.parseObject(result.get(1), new TypeReference<List<OrderSub>>(){});
order.setCustomerOrderSubs(orderSubs);
}
//order pre-sale info
if(StringUtils.isNotBlank(resultInfo[2])){
PresaleOrderData presaleOrderData = JSON.parseObject(resultInfo[2],PresaleOrderData.class);
order.setPresaleOrderData(presaleOrderData);
}
//order invoice
if(StringUtils.isNotBlank(resultInfo[3])){
OrderInvoice orderInvoice = JSON.parseObject(resultInfo[3],OrderInvoice.class);
order.setOrderInvoice(orderInvoice);
}
//order track info
if(StringUtils.isNotBlank(resultInfo[4])){
OrderTrackInfo orderTrackInfo = JSON.parseObject(resultInfo[4],OrderTrackInfo.class);
order.setOrderTrackInfo(orderTrackInfo);
}
//order fulfilment info
if(StringUtils.isNotBlank(resultInfo[5])){
List<OrderPremiseInfo> orderPremiseInfos = JSON.parseObject(resultInfo[5], new TypeReference<List<OrderPremiseInfo>>(){});
order.setPremiseInfos(orderPremiseInfos);
}
//order fee detail info
if(StringUtils.isNotBlank(resultInfo[6])){
OrderFeeVo orderFeeVo = JSON.parseObject(resultInfo[6],OrderFeeVo.class);
order.setOrderFeeVo(orderFeeVo);
}
return order;
}
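As a usage sketch (orderRepository.loadOrder is a hypothetical database fallback, not part of the code above), a typical read path tries the cache first and rebuilds the split keys on a miss:

Order order = getOrderFromCache(orderId);
if (order == null) {
//cache miss: load from the database (hypothetical repository) and refill the split keys
order = orderRepository.loadOrder(orderId);
if (order != null) {
refreshOrderToCache(order);
}
}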
Cache util method wrappers:

/**
*
* @description set multiple key-value (field-value) pairs into the cache at once.
* @param mappings the data to insert
*/
public void mSetString(Map<String, String> mappings) {
CallerInfo callerInfo = Ump.methodReg(UmpKeyConstants.REDIS.REDIS_STATUS_READ_MSET);
try {
redisClient.getClientInstance().mSetString(mappings);
} catch (Exception e) {
Ump.funcError(callerInfo);
}finally {
Ump.methodRegEnd(callerInfo);
}
}
/**
*
* @description return the results of multiple keys at once.
* @param queryKeys the list of cache keys to query
*/
public List<String> mGet(List<String> queryKeys) {
CallerInfo callerInfo = Ump.methodReg(UmpKeyConstants.REDIS.REDIS_STATUS_READ_MGET);
try {
return redisClient.getClientInstance().mGet(queryKeys.toArray(new String[0]));
} catch (Exception e) {
Ump.funcError(callerInfo);
}finally {
Ump.methodRegEnd(callerInfo);
}
return new ArrayList<String>(queryKeys.size());
}
For reference, here is a util wrapper based on pipeline:

/**
* @description query data in batches via pipeline
* @param redisKeyList the list of cache keys to query
* @return java.util.List<java.lang.String>
*/
public List<String> getValueByPipeline(List<String> redisKeyList) {
if(CollectionUtils.isEmpty(redisKeyList)){
return null;
}
List<String> resultInfo = new ArrayList<>(redisKeyList);
CallerInfo callerInfo = Ump.methodReg(UmpKeyConstants.REDIS.REDIS_STATUS_READ_GET);
try {
PipelineClient pipelineClient = redisClient.getClientInstance().pipelineClient();
//add the batched get commands to the pipeline
List<JimFuture> futures = new ArrayList<>();
redisKeyList.forEach(redisKey -> {
futures.add(pipelineClient.get(redisKey.getBytes()));
});
//flush the pipeline and collect the results
pipelineClient.flush();
//waiting on each future also tells you whether the command succeeded.
for (JimFuture future : futures) {
byte[] value = (byte[]) future.get();
//guard against a null value (missing key) so the results stay aligned with the keys
resultInfo.add(value == null ? null : new String(value));
}
} catch (Exception e) {
log.error("getValueByPipeline error:",e);
Ump.funcError(callerInfo);
return new ArrayList<>(redisKeyList.size());
}finally {
Ump.methodRegEnd(callerInfo);
}
return resultInfo;
}
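As a usage sketch, the split order keys from section 3.3 can be fetched through this pipeline wrapper instead of mGet; the values come back in the same order the keys were submitted (orderId is assumed to be available as in the earlier examples):

List<String> orderKeys = Arrays.asList(
CacheKeyConstant.ORDER_BASE_INFO.getPrefix(orderId),
CacheKeyConstant.ORDER_SUB_INFO.getPrefix(orderId),
CacheKeyConstant.ORDER_FEE_INFO.getPrefix(orderId));
List<String> orderValues = getValueByPipeline(orderKeys);
//orderValues.get(0) holds the base info JSON, get(1) the items, get(2) the fee detail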
Note: pipeline is not recommended for setting cache values, because it is not an atomic operation.

【3.4 Compressing the stored data】

(Figures: compression results for a single element, a 400-element collection, and a 40,000-element collection.)

Compression code samples. DeflaterOutputStream (zlib):
public static byte[] compressToByteArray(String text) throws IOException {
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
Deflater deflater = new Deflater();
DeflaterOutputStream deflaterOutputStream = new DeflaterOutputStream(outputStream, deflater);
deflaterOutputStream.write(text.getBytes());
deflaterOutputStream.close();
return outputStream.toByteArray();
}
public static String decompressFromByteArray(byte[] bytes) throws IOException {
ByteArrayInputStream inputStream = new ByteArrayInputStream(bytes);
Inflater inflater = new Inflater();
InflaterInputStream inflaterInputStream = new InflaterInputStream(inputStream, inflater);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
int length;
while ((length = inflaterInputStream.read(buffer)) != -1) {
outputStream.write(buffer, 0, length);
}
inflaterInputStream.close();
outputStream.close();
byte[] decompressedData = outputStream.toByteArray();
return new String(decompressedData);
}

GZIPOutputStream:
public static byte[] compressGzip(String str) {
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
//try-with-resources closes (and thereby finishes) the gzip stream before the bytes are read
try (GZIPOutputStream gzipOutputStream = new GZIPOutputStream(outputStream)) {
gzipOutputStream.write(str.getBytes("UTF-8"));
} catch (IOException e) {
throw new RuntimeException(e);
}
return outputStream.toByteArray();
}
public static String decompressGzip(byte[] compressed) throws IOException {
ByteArrayInputStream inputStream = new ByteArrayInputStream(compressed);
GZIPInputStream gzipInputStream = new GZIPInputStream(inputStream);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
int length;
while ((length = gzipInputStream.read(buffer)) > 0) {
outputStream.write(buffer, 0, length);
}
gzipInputStream.close();
outputStream.close();
return outputStream.toString("UTF-8");
}
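To tie compression into the cache itself, here is a minimal sketch. It Base64-encodes the compressed bytes so they can be stored as an ordinary string value; redisUtils.get is assumed to exist as the counterpart of the redisUtils.set used earlier, and cacheKey stands for whatever key you choose.

//write path: serialize, compress, Base64-encode, then store as a plain string value
public void setCompressedOrder(String cacheKey, Order order) {
String json = JSON.toJSONString(order);
byte[] compressed = compressGzip(json);
redisUtils.set(cacheKey, Base64.getEncoder().encodeToString(compressed));
}
//read path: decode, decompress, then deserialize back into the object
public Order getCompressedOrder(String cacheKey) throws IOException {
//redisUtils.get is an assumed string getter on the same util class
String stored = redisUtils.get(cacheKey);
if (StringUtils.isBlank(stored)) {
return null;
}
String json = decompressGzip(Base64.getDecoder().decode(stored));
return JSON.parseObject(json, Order.class);
}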
ZlibCompress:

public byte[] zlibCompress(String message) throws Exception {
String charset = "UTF-8";
byte[] input = message.getBytes(charset);
//reserve input length + 25% + 10 bytes for the deflated output buffer
byte[] output = new byte[input.length + 10 + (int) Math.ceil(input.length * 0.25)];
Deflater compresser = new Deflater();
compresser.setInput(input);
compresser.finish();
int compressedDataLength = compresser.deflate(output);
compresser.end();
return Arrays.copyOf(output, compressedDataLength);
}
public static String zlibInfCompress(byte[] data) {
String s = null;
Inflater decompresser = new Inflater();
decompresser.reset();
decompresser.setInput(data);
ByteArrayOutputStream o = new ByteArrayOutputStream(data.length);
try {
byte[] buf = new byte[1024];
while (!decompresser.finished()) {
int i = decompresser.inflate(buf);
o.write(buf, 0, i);
}
s = o.toString("UTF-8");
} catch (Exception e) {
e.printStackTrace();
} finally {
try {
o.close();
} catch (IOException e) {
e.printStackTrace();
}
}
decompresser.end();
return s;
}

As the figures show, compression works quite well here: several hundred KB can be brought down to a few KB, although it of course depends on the concrete scenario. It is best to avoid this approach on heavily called paths, because compressing and decompressing large payloads is CPU-intensive. If it is used on a golden (critical) path, run load tests and compare the interface's performance before and after.

【3.5 Replacing the storage solution】

If the data volume is truly huge, it may simply not be a good fit for a cache store like Redis at all. Consider a document-oriented store such as Elasticsearch or MongoDB for large data structures.

IV. Summary