[Question Title]: Weird Java threads getting-frozen problem
[Posted]: 2019-02-18 10:22:58
[Question]:

The log timestamps of my Java application suddenly jump from 22:20 to 00:33 the next day. Before 22:20 and after 00:33 the application runs normally. It looks as if the application was frozen between 22:20 and 00:33.

There is a thread pool in my application. I use a DelayQueue to handle timed-out business tasks (Runnables), so I submit a task to the pool that polls for timed-out business tasks and handles them:

delayFuture = pool.submit(() -> {
    logger.debug("delayed task execution thread: {}", Thread.currentThread().getName());
    while (true) {
        try {
            AbstractTask task = delayQueue.poll(5, TimeUnit.SECONDS);
            if (task != null && !task.isDone()) {
                task.setOvertime(true);
                task.getExecutingThread().interrupt();

                pool.submit(() -> {
                    task.fireTimeout();
                });
            }
        } catch (Exception e) {
            logger.error("delay task error", e);
        }
    }
});
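To make the DelayQueue mechanics above concrete, here is a minimal, self-contained sketch. `DelayedTask` is a hypothetical stand-in for the question's `AbstractTask` (not the actual class): an element only becomes available from the queue once its deadline has passed, which is what lets the poller loop detect timed-out tasks.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for AbstractTask: expires once its deadline passes.
class DelayedTask implements Delayed {
    final String name;
    private final long deadlineNanos;

    DelayedTask(String name, long timeoutMillis) {
        this.name = name;
        this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(deadlineNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
    }
}

public class DelayQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.offer(new DelayedTask("t1", 50));

        // Not yet expired: the non-blocking poll() returns null.
        System.out.println(queue.poll() == null);                 // true

        // poll(timeout) blocks until an element expires or the timeout elapses.
        DelayedTask t = queue.poll(5, TimeUnit.SECONDS);
        System.out.println(t != null && t.name.equals("t1"));     // true
    }
}
```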

All business tasks do similar work at a fixed delay. When a business task executes, it completes several steps one by one on a pool thread. But the logs show that some tasks were "stuck" at the same step:

LinkedBlockingQueue.poll(timeout, unit);
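A minimal sketch of that blocking step (my own illustration, not the question's actual code): `poll(timeout, unit)` parks the calling thread until an element arrives or the timeout elapses, and on timeout it returns null rather than blocking indefinitely. That bounded wait is why a stall of over two hours inside this call is so surprising.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollTimeoutSketch {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();

        long start = System.nanoTime();
        // No producer: this blocks for the full timeout, then returns null.
        String msg = queue.poll(200, TimeUnit.MILLISECONDS);
        long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);

        System.out.println(msg == null);           // true: timed out
        System.out.println(elapsedMillis >= 200);  // true: waited at least the timeout
    }
}
```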

The log time jumps from 22:20 to 00:33 and then everything returns to normal, which is really strange.

2019-02-16 22:20:20,627 INFO  (biz_thread_64:com.some.company.task.AbstractTask.interruptProcess:31) - device some_device sent control cmd response
2019-02-16 22:20:20,647 INFO  (biz_thread_64:com.some.company.task.AbstractTask.interruptProcess:36) - user some_user received device some_device control cmd response

2019-02-16 22:20:25,809 INFO  (biz_thread_64:com.some.company.task.AbstractTask.interruptProcess:38) - device some_device disconnected, cancel device heartbeat: true
// the next line will be logged after LinkedBlockingQueue.poll(timeout, unit);
2019-02-17 00:33:18,751 INFO  (biz_thread_64:com.some.company.task.AbstractTask.interruptProcess:46) - user some_user received device some_device offline message

2019-02-17 00:33:18,747 INFO  (biz_thread_56:com.some.company.task.TaskManager.lambda$null$0:54) - fire task timeout operation: user get device offline notification, devMac=some_device, userToken=some_user
2019-02-17 00:33:18,951 INFO  (biz_thread_56:com.some.company.task.AbstractTask.resetState:174) - device some_device disconnected, cancel device heartbeat: false
2019-02-17 00:33:18,951 INFO  (biz_thread_56:com.some.company.task.AbstractTask.resetState:183) - user some_user disconnected, cancel user heartbeat: true

2019-02-17 00:33:18,962 INFO  (biz_thread_64:com.some.company.task.TaskManager.exec:170) - task is overtime: task=user get device offline notification, devMac=some_device, userToken=some_user

There is a jump between the 3rd and 4th log lines above, but normally the interval between those two steps should be less than 4 minutes. The application seems to have been frozen between 22:20 and 00:33.

Some other potentially useful information:

  1. The thread pool has a fixed size;

  2. The application runs in a VM, but another Java application produced logs between 22:20 and 00:33, so the VM was not asleep during that time range;

  3. There is another fixed-delay task that prints the state of every business task, but it printed nothing between 22:20 and 00:33 either;

  4. There is a Vert.x-based HTTP server providing RESTful services. When the problem occurred, the RESTful services did not respond: the browser hung on an HTTP GET request to the server. However, telnet to the HTTP server still worked.

Any suggestions would be appreciated.

Update (2019-02-26 18:13)

It happened again! My application froze at 2019-02-26 17:30:17,576 and recovered at 2019-02-26 19:40:29,655, about 2 hours 10 minutes 12 seconds. The previous occurrence lasted 2 hours 12 minutes 53 seconds.

While the problem was occurring, both the jstack and jmap commands failed:

[cloud-user@extdev-platform-ug bin]$ ~/jdk1.8.0_192/bin/jstack -F 10914
Attaching to process ID 10914, please wait...
Error attaching to process: sun.jvm.hotspot.debugger.DebuggerException: Can't attach to the process: ptrace(PTRACE_ATTACH, ..) failed for 10914: Operation not permitted
sun.jvm.hotspot.debugger.DebuggerException: sun.jvm.hotspot.debugger.DebuggerException: Can't attach to the process: ptrace(PTRACE_ATTACH, ..) failed for 10914: Operation not permitted
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$LinuxDebuggerLocalWorkerThread.execute(LinuxDebuggerLocal.java:163)
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.attach(LinuxDebuggerLocal.java:278)
    at sun.jvm.hotspot.HotSpotAgent.attachDebugger(HotSpotAgent.java:671)
    at sun.jvm.hotspot.HotSpotAgent.setupDebuggerLinux(HotSpotAgent.java:611)
    at sun.jvm.hotspot.HotSpotAgent.setupDebugger(HotSpotAgent.java:337)
    at sun.jvm.hotspot.HotSpotAgent.go(HotSpotAgent.java:304)
    at sun.jvm.hotspot.HotSpotAgent.attach(HotSpotAgent.java:140)
    at sun.jvm.hotspot.tools.Tool.start(Tool.java:185)
    at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
    at sun.jvm.hotspot.tools.JStack.main(JStack.java:92)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.tools.jstack.JStack.runJStackTool(JStack.java:140)
    at sun.tools.jstack.JStack.main(JStack.java:106)
Caused by: sun.jvm.hotspot.debugger.DebuggerException: Can't attach to the process: ptrace(PTRACE_ATTACH, ..) failed for 10914: Operation not permitted
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.attach0(Native Method)
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.access$100(LinuxDebuggerLocal.java:62)
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$1AttachTask.doit(LinuxDebuggerLocal.java:269)
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$LinuxDebuggerLocalWorkerThread.run(LinuxDebuggerLocal.java:138)

[cloud-user@extdev-platform-ug bin]$ ~/jdk1.8.0_192/bin/jmap -heap 10914
Attaching to process ID 10914, please wait...
Error attaching to process: sun.jvm.hotspot.debugger.DebuggerException: Can't attach to the process: ptrace(PTRACE_ATTACH, ..) failed for 10914: Operation not permitted
sun.jvm.hotspot.debugger.DebuggerException: sun.jvm.hotspot.debugger.DebuggerException: Can't attach to the process: ptrace(PTRACE_ATTACH, ..) failed for 10914: Operation not permitted
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$LinuxDebuggerLocalWorkerThread.execute(LinuxDebuggerLocal.java:163)
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.attach(LinuxDebuggerLocal.java:278)
    at sun.jvm.hotspot.HotSpotAgent.attachDebugger(HotSpotAgent.java:671)
    at sun.jvm.hotspot.HotSpotAgent.setupDebuggerLinux(HotSpotAgent.java:611)
    at sun.jvm.hotspot.HotSpotAgent.setupDebugger(HotSpotAgent.java:337)
    at sun.jvm.hotspot.HotSpotAgent.go(HotSpotAgent.java:304)
    at sun.jvm.hotspot.HotSpotAgent.attach(HotSpotAgent.java:140)
    at sun.jvm.hotspot.tools.Tool.start(Tool.java:185)
    at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
    at sun.jvm.hotspot.tools.HeapSummary.main(HeapSummary.java:49)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.tools.jmap.JMap.runTool(JMap.java:201)
    at sun.tools.jmap.JMap.main(JMap.java:130)
Caused by: sun.jvm.hotspot.debugger.DebuggerException: Can't attach to the process: ptrace(PTRACE_ATTACH, ..) failed for 10914: Operation not permitted
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.attach0(Native Method)
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.access$100(LinuxDebuggerLocal.java:62)
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$1AttachTask.doit(LinuxDebuggerLocal.java:269)
    at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$LinuxDebuggerLocalWorkerThread.run(LinuxDebuggerLocal.java:138)

The last GC log before the problem:

[GC pause (G1 Evacuation Pause) (young), 0.0388874 secs]
   [Parallel Time: 26.9 ms, GC Workers: 2]
      [GC Worker Start (ms): Min: 22597871.6, Avg: 22597871.6, Max: 22597871.7, Diff: 0.0]
      [Ext Root Scanning (ms): Min: 3.8, Avg: 3.9, Max: 4.0, Diff: 0.2, Sum: 7.9]
      [Update RS (ms): Min: 9.8, Avg: 9.8, Max: 9.8, Diff: 0.1, Sum: 19.6]
         [Processed Buffers: Min: 245, Avg: 266.0, Max: 287, Diff: 42, Sum: 532]
      [Scan RS (ms): Min: 2.3, Avg: 2.3, Max: 2.4, Diff: 0.2, Sum: 4.7]
      [Code Root Scanning (ms): Min: 0.2, Avg: 0.7, Max: 1.2, Diff: 1.0, Sum: 1.4]
      [Object Copy (ms): Min: 9.4, Avg: 9.9, Max: 10.5, Diff: 1.1, Sum: 19.8]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Termination Attempts: Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 2]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum: 0.1]
      [GC Worker Total (ms): Min: 26.7, Avg: 26.7, Max: 26.7, Diff: 0.0, Sum: 53.5]
      [GC Worker End (ms): Min: 22597898.4, Avg: 22597898.4, Max: 22597898.4, Diff: 0.0]
   [Code Root Fixup: 0.0 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.9 ms]
   [Other: 11.1 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 8.4 ms]
      [Ref Enq: 0.1 ms]
      [Redirty Cards: 0.2 ms]
      [Humongous Register: 0.0 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.7 ms]
   [Eden: 604.0M(604.0M)->0.0B(604.0M) Survivors: 10.0M->10.0M Heap: 707.4M(1024.0M)->103.4M(1024.0M)]
 [Times: user=0.06 sys=0.00, real=0.04 secs] 

My application runs with the following options:

-Xms1g -Xmx1g -server -XX:+PrintGCDetails -verbose:gc -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+HeapDumpOnOutOfMemoryError

The top command gives:

top -H -p 10914

Tasks: 545 total,   0 running,   0 sleeping, 545 stopped,   0 zombie
Cpu0  :  0.7%us,  1.0%sy,  0.0%ni, 97.3%id,  0.0%wa,  0.0%hi,  0.0%si,  1.0%st
Cpu1  :  4.0%us,  0.7%sy,  0.0%ni, 95.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   5991196k total,  5858276k used,   132920k free,   225448k buffers
Swap:  6291448k total,   132044k used,  6159404k free,  3820256k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                 
10914 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:00.10 java                                                                     
10915 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:03.17 java                                                                     
10916 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:00.68 java                                                                     
10917 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:00.71 java                                                                     
10918 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:20.87 java                                                                     
10919 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:00.07 java                                                                     
10920 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:00.13 java                                                                     
10921 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:00.05 java                                                                     
10922 cloud-us  20   0 4151m 1.0g  15m T  0.0 17.3   0:00.16 java

The netstat command shows 16 TCP connections in CLOSE_WAIT or ESTABLISHED state; according to the logs, 16 is also the number of tasks whose chained steps did not complete normally.

My application periodically logs the state of each thread in the business thread pool. The last snapshot before the problem:

    "biz_thread_0":"TIMED_WAITING",
    "biz_thread_1":"TIMED_WAITING",
    "biz_thread_2":"TIMED_WAITING",
    "biz_thread_3":"TIMED_WAITING",
    "biz_thread_4":"TIMED_WAITING",
    "biz_thread_5":"TIMED_WAITING",
    "biz_thread_6":"WAITING",
    "biz_thread_7":"TIMED_WAITING",
    "biz_thread_8":"TIMED_WAITING",
    "biz_thread_9":"TIMED_WAITING",
    "biz_thread_10":"WAITING",
    "biz_thread_11":"TIMED_WAITING",
    "biz_thread_12":"WAITING",
    "biz_thread_13":"TIMED_WAITING",
    "biz_thread_14":"TIMED_WAITING",
    "biz_thread_15":"TIMED_WAITING",
    "biz_thread_16":"TIMED_WAITING",
    "biz_thread_17":"TIMED_WAITING",
    "biz_thread_18":"TIMED_WAITING",
    "biz_thread_19":"TIMED_WAITING",
    "biz_thread_20":"TIMED_WAITING",
    "biz_thread_21":"TIMED_WAITING",
    "biz_thread_22":"TIMED_WAITING",
    "biz_thread_23":"TIMED_WAITING",
    "biz_thread_24":"TIMED_WAITING",
    "biz_thread_25":"TIMED_WAITING",
    "biz_thread_26":"WAITING",
    "biz_thread_27":"WAITING",
    "biz_thread_28":"WAITING",
    "biz_thread_29":"WAITING",
    "biz_thread_30":"WAITING",
    "biz_thread_31":"TIMED_WAITING",
    "biz_thread_32":"TIMED_WAITING",
    "biz_thread_33":"TIMED_WAITING",
    "biz_thread_34":"RUNNABLE",
    "biz_thread_35":"TIMED_WAITING",
    "biz_thread_36":"TIMED_WAITING",
    "biz_thread_37":"WAITING",
    "biz_thread_38":"TIMED_WAITING",
    "biz_thread_39":"TIMED_WAITING",
    "biz_thread_40":"TIMED_WAITING",
    "biz_thread_41":"TIMED_WAITING",
    "biz_thread_42":"TIMED_WAITING",
    "biz_thread_43":"TIMED_WAITING",
    "biz_thread_44":"WAITING",
    "biz_thread_45":"WAITING",
    "biz_thread_46":"TIMED_WAITING",
    "biz_thread_47":"TIMED_WAITING",
    "biz_thread_48":"TIMED_WAITING",
    "biz_thread_49":"TIMED_WAITING",
    "biz_thread_50":"TIMED_WAITING",
    "biz_thread_51":"WAITING",
    "biz_thread_52":"TIMED_WAITING",
    "biz_thread_53":"TIMED_WAITING",
    "biz_thread_54":"TIMED_WAITING",
    "biz_thread_55":"TIMED_WAITING",
    "biz_thread_56":"WAITING",
    "biz_thread_57":"TIMED_WAITING",
    "biz_thread_58":"TIMED_WAITING",
    "biz_thread_59":"TIMED_WAITING",
    "biz_thread_60":"TIMED_WAITING",
    "biz_thread_61":"TIMED_WAITING",
    "biz_thread_62":"TIMED_WAITING",
    "biz_thread_63":"TIMED_WAITING",
    "biz_thread_64":"TIMED_WAITING",
    "biz_thread_65":"WAITING",
    "biz_thread_66":"TIMED_WAITING",
    "biz_thread_67":"WAITING",
    "biz_thread_68":"WAITING",
    "biz_thread_69":"TIMED_WAITING",
    "biz_thread_70":"TIMED_WAITING",
    "biz_thread_71":"TIMED_WAITING",
    "biz_thread_72":"WAITING",
    "biz_thread_73":"TIMED_WAITING",
    "biz_thread_74":"TIMED_WAITING",
    "biz_thread_75":"TIMED_WAITING",
    "biz_thread_76":"WAITING",
    "biz_thread_77":"TIMED_WAITING",
    "biz_thread_78":"TIMED_WAITING",
    "biz_thread_79":"TIMED_WAITING",
    "biz_thread_80":"TIMED_WAITING",
    "biz_thread_81":"TIMED_WAITING",
    "biz_thread_82":"WAITING",
    "biz_thread_83":"TIMED_WAITING",
    "biz_thread_84":"TIMED_WAITING",
    "biz_thread_85":"TIMED_WAITING",
    "biz_thread_86":"TIMED_WAITING",
    "biz_thread_87":"TIMED_WAITING",
    "biz_thread_88":"TIMED_WAITING",
    "biz_thread_89":"WAITING",
    "biz_thread_90":"TIMED_WAITING",
    "biz_thread_91":"TIMED_WAITING",
    "biz_thread_92":"TIMED_WAITING",
    "biz_thread_93":"TIMED_WAITING",
    "biz_thread_94":"WAITING",
    "biz_thread_95":"TIMED_WAITING",
    "biz_thread_96":"TIMED_WAITING",
    "biz_thread_97":"TIMED_WAITING",
    "biz_thread_98":"TIMED_WAITING",
    "biz_thread_99":"TIMED_WAITING",
    "biz_thread_100":"TIMED_WAITING",
    "biz_thread_101":"TIMED_WAITING",
    "biz_thread_102":"TIMED_WAITING",
    "biz_thread_103":"WAITING",
    "biz_thread_114":"WAITING",
    "biz_thread_113":"TIMED_WAITING",
    "biz_thread_112":"WAITING",
    "biz_thread_111":"TIMED_WAITING",
    "biz_thread_110":"TIMED_WAITING",
    "biz_thread_109":"TIMED_WAITING",
    "biz_thread_108":"TIMED_WAITING",
    "biz_thread_107":"TIMED_WAITING",
    "biz_thread_106":"TIMED_WAITING",
    "biz_thread_105":"WAITING",
    "biz_thread_104":"TIMED_WAITING",
    "biz_thread_115":"TIMED_WAITING",
    "biz_thread_116":"TIMED_WAITING",
    "biz_thread_122":"TIMED_WAITING",
    "biz_thread_121":"TIMED_WAITING",
    "biz_thread_120":"TIMED_WAITING",
    "biz_thread_119":"TIMED_WAITING",
    "biz_thread_118":"TIMED_WAITING",
    "biz_thread_117":"TIMED_WAITING",
    "biz_thread_126":"TIMED_WAITING",
    "biz_thread_125":"TIMED_WAITING",
    "biz_thread_124":"WAITING",
    "biz_thread_123":"WAITING",
    "biz_thread_127":"WAITING"

[Comments]:

  • Hi Alan, did your machine go to sleep during this period? Can you provide more Java or system event logs for that time range?
  • @DilyanoSenders The machine did not go to sleep, because another Java application produced logs between 22:20 and 00:33. Sorry, I cannot provide more system event logs. I can provide more code if needed.
  • It would help if you could provide a bit more, because looking at this code and these logs I would say the JVM went to sleep, with the other Java process taking over the JVM's resources.
  • @DilyanoSenders The other Java process runs on JDK 1.7, while my application runs on JDK 1.8. Maybe the problem is in how the thread pool is used. Each business task consists of several steps, and all steps time out if they take too long. I have updated the question with the timeout-handling code.
  • But they use the same JVM, right? I mean you can develop with different JDKs (development kits) but still run on just one JVM.

Tags: java multithreading


[Solution 1]:

Finally!

I don't know which Java option or what else helped, but this time there was a full thread dump in the nohup stdout redirect file (there was no thread dump the first time this happened):

2019-02-26 19:40:29
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.192-b12 mixed mode):

"biz_thread_87" #104 prio=5 os_prio=0 tid=0x00007f93e76fd800 nid=0x2b24 waiting for monitor entry [0x00007f938bfbe000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.checkRollover(RollingFileManager.java:217)
    - waiting to lock <0x00000000c06725d8> (a org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager)
    at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:207)
    at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
    at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:448)
    at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:433)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
    at org.apache.logging.log4j.core.config.LoggerConfig.logParent(LoggerConfig.java:439)
    at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:434)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:403)
    at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
    at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)
    at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2091)
    at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1999)
    at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1868)
    at org.apache.logging.slf4j.Log4jLogger.info(Log4jLogger.java:194)
    at com.some.company.task.impl.StartTask.interruptProcess(StartTask.java:17)
    at com.some.company.common.task.AbstractTask.process(AbstractTask.java:88)
    at com.some.company.common.task.TaskManager.exec(TaskManager.java:174)
    at com.some.company.common.task.Context.lambda$resubmit$0(Context.java:426)
    at com.some.company.common.task.Context$$Lambda$97/519533636.run(Unknown Source)
    at com.some.company.common.task.TaskManager.lambda$delayExec$2(TaskManager.java:121)
    at com.some.company.common.task.TaskManager$$Lambda$37/873634936.run(Unknown Source)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"nioEventLoopGroup-173-1" #315 prio=10 os_prio=0 tid=0x00007f939420e000 nid=0x2ced runnable [0x00007f937b3e8000]
   java.lang.Thread.State: RUNNABLE
    at java.io.RandomAccessFile.writeBytes(Native Method)
    at java.io.RandomAccessFile.write(RandomAccessFile.java:525)
    at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.writeToDestination(RollingRandomAccessFileManager.java:109)
    - eliminated <0x00000000c06725d8> (a org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager)
    at org.apache.logging.log4j.core.appender.OutputStreamManager.flushBuffer(OutputStreamManager.java:288)
    - eliminated <0x00000000c06725d8> (a org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager)
    at org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager.flush(RollingRandomAccessFileManager.java:128)
    - locked <0x00000000c06725d8> (a org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFileManager)
    at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:179)
    at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:170)
    at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:161)
    at org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender.append(RollingRandomAccessFileAppender.java:218)
    at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
    at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:448)
    at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:433)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
    at org.apache.logging.log4j.core.config.LoggerConfig.logParent(LoggerConfig.java:439)
    at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:434)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:403)
    at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
    at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)
    at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2091)
    at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:2005)
    at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1876)
    at org.apache.logging.slf4j.Log4jLogger.info(Log4jLogger.java:184)
    at com.some.company.UserWifiClient.sendHeartbeat(UserWifiEppClient.java:39)
    at com.some.company.AbstractNettyClient.lambda$scheduleHeartbeat$0(AbstractNettyClient.java:146)
    at com.some.company.AbstractNettyClient$$Lambda$73/2134853806.run(Unknown Source)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:133)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:495)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)

There are 101 BLOCKED threads and 94 threads "waiting to lock ".

So I think this must be a misuse of the log4j Logger. In AbstractTask, the logger was created like this:

protected final static Logger logger = LoggerFactory.getLogger(AbstractTask.class);

All AbstractTask implementations share this single logger, and a logger created this way introduces contention when multiple threads try to use it. Now I create the logger in AbstractTask this way:

protected Logger logger = LoggerFactory.getLogger(getClass());

Then each concrete task type gets its own logger to use and should not face this contention problem.
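The mechanism behind the fix can be sketched as follows. It uses java.util.logging purely to keep the example dependency-free (the question itself uses slf4j/log4j), and `BaseTask`/`StartTask`/`StopTask` are hypothetical names: calling `getClass()` in the base class resolves to the runtime subclass, so each concrete task type gets a logger named after itself instead of everyone sharing the base class's logger.

```java
import java.util.logging.Logger;

abstract class BaseTask {
    // getClass() returns the runtime class of the instance, so each
    // subclass gets a logger named after itself.
    protected final Logger logger = Logger.getLogger(getClass().getName());
}

class StartTask extends BaseTask {}
class StopTask extends BaseTask {}

public class LoggerNameSketch {
    public static void main(String[] args) {
        System.out.println(new StartTask().logger.getName()); // StartTask
        System.out.println(new StopTask().logger.getName());  // StopTask
    }
}
```

Note that with slf4j/log4j, loggers are cached per name, so two instances of the same task class would still share one logger object; the per-class naming is what changes here.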

看到这个:Should logger be private static or not

What an important lesson!

By the way, does the JVM sometimes dump threads to stdout automatically? I found the "Full thread dump ..." above in the nohup stdout redirect file.

Is this kind of thread-blocking problem a deadlock? The application recovered after about 2 hours. If it were a deadlock, how could it recover?

Update (2019-03-11)

Actually, after I changed the way the log4j logger is created, I accidentally ran into the problem again, even though the application had been running smoothly for a week. I realized it might be caused by jmap. Last Friday I had the following command running for several hours:

while [ true ]; do jmap -heap 340; echo "=================================================="; sleep 30; done

Then around 17:00 the application froze again! So if your application is running fine, run jmap/jstack against its pid as rarely as possible! These commands suspend the JVM while they run, and when they are executed too frequently, the JVM may end up frozen.

[Discussion]:

  • Glad you fixed it. FYI, are you using log4j 1?
  • No, I use log4j 2.8.2 with slf4j 1.7.25 and org.apache.logging.log4j:log4j-slf4j-impl:2.8.2.
  • The kill -3 command sends the stack traces to stdout; most likely someone ran it on that process's pid. The RollingFileManager checkRollover method synchronizes on this. The line waiting for monitor entry [0x00007f938bfbe000] just says "this thread is waiting to enter a synchronized block on the object referenced by 0x00007f938bfbe000". Some RUNNABLE thread owns the lock on that object; I would look for it in the thread dump. I suspect a thread was busy rolling over the log file, which can take a while.
  • @JohnVint I did not send kill -3 to the process. Actually the line - waiting to lock <0x00000000c06725d8> shows the object the thread is trying to lock. Check this: stackoverflow.com/questions/37304897/…