【Question Title】: Redisson client exception in Netty threads
【Posted】: 2020-04-23 10:23:26
【Question】:

My application is deployed in OpenShift and also uses Redis. While it works most of the time, I still face an intermittent issue related to Redisson. The error trace when the application URL is hit is as follows:

org.redisson.client.WriteRedisConnectionException: Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=null, redirect=null, entry=MasterSlaveEntry [masterEntry=[freeSubscribeConnectionsAmount=0, freeSubscribeConnectionsCounter=value:49:queue:0, freeConnectionsAmount=31, freeConnectionsCounter=value:63:queue:0, freezed=false, freezeReason=null, client=[addr=redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379], nodeType=MASTER, firstFail=0]]], connection: RedisConnection@1568202974 [redisClient=[addr=redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379], channel=[id: 0xceaf7022, L:/10.103.34.74:32826 ! R:webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com/10.112.17.104:6379], currentCommand=CommandData [promise=RedissonPromise [promise=ImmediateEventExecutor$ImmediatePromise@68b1bc80(failure: java.util.concurrent.CancellationException)], command=(HMSET), params=[redisson:tomcat_session:306A0C0325AD2189A7FDDB695D0755D2, PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), ...], codec=org.redisson.codec.CompositeCodec@25e7216]], command: (HMSET), params: [redisson:tomcat_session:77C4BB9FC4252BFC2C8411F3A4DBB6C9, PooledUnsafeDirectByteBuf(ridx: 0, widx: 24, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 24, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 256)] after 3 retry attempts
    org.redisson.command.CommandAsyncService.checkWriteFuture(CommandAsyncService.java:872)
    org.redisson.command.CommandAsyncService.access$000(CommandAsyncService.java:97)
    org.redisson.command.CommandAsyncService$7.operationComplete(CommandAsyncService.java:791)
    org.redisson.command.CommandAsyncService$7.operationComplete(CommandAsyncService.java:788)
    io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:502)
    io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:476)
    io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:415)
    io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:540)
    io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:533)
    io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:114)
    io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1018)
    io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:874)
    io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1365)
    io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:716)
    io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:708)
    io.netty.channel.AbstractChannelHandlerContext.access$1700(AbstractChannelHandlerContext.java:56)
    io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1102)
    io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1149)
    io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
    io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
    io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:748)
Root Cause

io.netty.channel.ExtendedClosedChannelException
    io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
Note The full stack trace of the root cause is available in the server logs.

【Comments】:

  • This is probably due to increased load on the Redis cluster, since it is shared across multiple applications. As a workaround I redeployed each time, which reset the connections and resolved the issue.

Tags: java redis openshift netty


【Solution 1】:

This is probably due to increased load on the Redis cluster, since it is shared across multiple applications. As a workaround, I redeployed every time I saw this, which reset the connections and resolved the issue. As I said, this is only a workaround; a permanent fix could be to give your application a dedicated Redis cluster, which in turn depends on the architecture and the size of the application.

【Discussion】:

【Solution 2】:

You need to update your Redisson version to 3.16.3 to get the updated exception message, which explicitly says "Unable to write command into connection! Increase connection pool size." So, following that message, you need to increase your connection pool size.

    // Excerpt from the Redisson source (checkWriteFuture). The identifiers
    // 'exception', 'source', 'command', 'params', 'attempt', 'attempts' and
    // 'timeout' are fields of the enclosing class, not locals.
    private void checkWriteFuture(ChannelFuture future, RPromise<R> attemptPromise, RedisConnection connection) {
        if (future.isCancelled() || attemptPromise.isDone()) {
            return;
        }

        if (!future.isSuccess()) {
            exception = new WriteRedisConnectionException(
                    "Unable to write command into connection! Increase connection pool size. Node source: " + source
                            + ", connection: " + connection
                            + ", command: " + LogHelper.toString(command, params)
                            + " after " + attempt + " retry attempts", future.cause());
            if (attempt == attempts) {
                attemptPromise.tryFailure(exception);
            }
            return;
        }

        timeout.cancel();

        scheduleResponseTimeout(attemptPromise, connection);
    }
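The pool-size advice above can be sketched as a Redisson client configuration. This is a minimal sketch assuming a single-server (non-cluster) setup; the address is a placeholder, and the pool sizes are illustrative values above the defaults (64 connections and 24 minimum idle in recent 3.x releases, though check your own version), not tuned recommendations:

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonPoolExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer()
              // Placeholder endpoint -- substitute your own Redis address.
              .setAddress("redis://localhost:6379")
              // Raise the pool above the default to reduce
              // WriteRedisConnectionException under write load.
              .setConnectionPoolSize(128)
              .setConnectionMinimumIdleSize(32)
              // Give slow commands more headroom before failing.
              .setRetryAttempts(4)
              .setRetryInterval(1500)   // milliseconds between retries
              .setTimeout(5000);        // per-command response timeout, ms

        RedissonClient client = Redisson.create(config);
        try {
            // Simple smoke test against the configured server.
            client.getBucket("healthcheck").set("ok");
        } finally {
            client.shutdown();
        }
    }
}
```

If the Redis instance is genuinely shared and overloaded, a larger pool only buys headroom; pairing it with the dedicated-cluster approach from Solution 1 addresses the root cause.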

【Discussion】:
