【Posted】2016-01-03 07:43:16
【Question】:
I set ParallelGCThreads=1 and use the G1 GC; all other JVM settings are defaults. I am running PageRank on Spark-1.5.1 with two EC2 nodes, each with a 100 GB heap.
My heap-usage graph is below (red area: young generation; black area: old generation). All the young GCs were small, then suddenly one young GC collected 60 GB, after which the young GCs became small again. My GC log shows no mixed GC, no full GC, one concurrent mark, and dozens of young GCs. Why did that huge young GC happen?
Below is part of my GC log. The huge young GC is the one showing "Heap: 84.1G".
2015-12-30T06:59:02.488+0000: 245.088: [GC pause (young) 245.089: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 1727, predicted base time: 24.64 ms, remaining time: 175.36 ms, target pause time: 200.00 ms]
245.089: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 206 regions, survivors: 3 regions, predicted young region time: 148.87 ms]
245.089: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 206 regions, survivors: 3 regions, old: 0 regions, predicted pause time: 173.51 ms, target pause time: 200.00 ms]
2015-12-30T06:59:02.531+0000: 245.131: [SoftReference, 0 refs, 0.0000520 secs]2015-12-30T06:59:02.531+0000: 245.131: [WeakReference, 21 refs, 0.0000160 secs]2015-12-30T06:59:02.531+0000: 245.131: [FinalReference, 9759 refs, 0.0084720 secs]2015-12-30T06:59:02.539+0000: 245.140: [PhantomReference, 0 refs, 14 refs, 0.0000190 secs]2015-12-30T06:59:02.539+0000: 245.140: [JNI Weak Reference, 0.0000130 secs] 245.142: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 12.51 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]
, 0.0534140 secs]
   [Parallel Time: 42.3 ms, GC Workers: 1]
      [GC Worker Start (ms): 245088.6]
      [Ext Root Scanning (ms): 14.4]
      [Update RS (ms): 1.9]
         [Processed Buffers: 34]
      [Scan RS (ms): 0.4]
      [Code Root Scanning (ms): 0.0]
      [Object Copy (ms): 25.5]
      [Termination (ms): 0.0]
      [GC Worker Other (ms): 0.0]
      [GC Worker Total (ms): 42.3]
      [GC Worker End (ms): 245130.9]
   [Code Root Fixup: 0.0 ms]
   [Code Root Migration: 0.0 ms]
   [Clear CT: 1.6 ms]
   [Other: 9.5 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 8.6 ms]
      [Ref Enq: 0.2 ms]
      [Free CSet: 0.4 ms]
   [Eden: 6592.0M(6592.0M)->0.0B(58.8G) Survivors: 96.0M->128.0M Heap: 30.6G(100.0G)->24.2G(100.0G)]
 [Times: user=0.05 sys=0.00, real=0.06 secs]
2015-12-30T06:59:43.451+0000: 286.051: [GC pause (young) 286.054: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 392599, predicted base time: 367.03 ms, remaining time: 0.00 ms, target pause time: 200.00 ms]
286.054: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 1884 regions, survivors: 4 regions, predicted young region time: 150.18 ms]
286.054: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 1884 regions, survivors: 4 regions, old: 0 regions, predicted pause time: 517.21 ms, target pause time: 200.00 ms]
2015-12-30T06:59:47.767+0000: 290.368: [SoftReference, 0 refs, 0.0000570 secs]2015-12-30T06:59:47.768+0000: 290.368: [WeakReference, 350 refs, 0.0000640 secs]2015-12-30T06:59:47.768+0000: 290.368: [FinalReference, 99336 refs, 0.3781120 secs]2015-12-30T06:59:48.146+0000: 290.746: [PhantomReference, 0 refs, 1 refs, 0.0000290 secs]2015-12-30T06:59:48.146+0000: 290.746: [JNI Weak Reference, 0.0000140 secs] 290.767: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 11.74 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]
, 4.7153740 secs]
   [Parallel Time: 4313.9 ms, GC Workers: 1]
      [GC Worker Start (ms): 286053.9]
      [Ext Root Scanning (ms): 15.2]
      [Update RS (ms): 86.3]
         [Processed Buffers: 1557]
      [Scan RS (ms): 4.1]
      [Code Root Scanning (ms): 0.2]
      [Object Copy (ms): 4208.1]
      [Termination (ms): 0.0]
      [GC Worker Other (ms): 0.0]
      [GC Worker Total (ms): 4313.9]
      [GC Worker End (ms): 290367.8]
   [Code Root Fixup: 0.0 ms]
   [Code Root Migration: 0.3 ms]
   [Clear CT: 15.1 ms]
   [Other: 386.0 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 378.4 ms]
      [Ref Enq: 1.7 ms]
      [Free CSet: 3.3 ms]
   [Eden: 58.9G(58.8G)->0.0B(3456.0M) Survivors: 128.0M->1664.0M Heap: 84.1G(100.0G)->26.7G(100.0G)]
 [Times: user=0.78 sys=3.94, real=4.71 secs]
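The `Eden:` lines in the two pauses already hint at the cause: after the first pause, G1's adaptive sizing raised the eden target from 6592 MB to 58.8 GB (`0.0B(58.8G)`), so by the second pause there were ~59 GB of eden regions to evacuate. A minimal sketch (hypothetical class name `EdenParser`) that parses these `-XX:+PrintGCDetails` eden lines to make the resizing visible:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EdenParser {
    // Convert a size value with its unit suffix ("B", "M", or "G") to megabytes.
    static double toMB(double value, String unit) {
        switch (unit) {
            case "G": return value * 1024.0;
            case "M": return value;
            default:  return value / (1024.0 * 1024.0); // bytes
        }
    }

    // Returns {edenBeforeMB, nextEdenTargetMB} from a line like
    // "[Eden: 6592.0M(6592.0M)->0.0B(58.8G) ...]", or null if it does not match.
    static double[] parseEden(String line) {
        Matcher m = Pattern.compile(
                "\\[Eden: ([\\d.]+)([BMG])\\(([\\d.]+)([BMG])\\)->([\\d.]+)([BMG])\\(([\\d.]+)([BMG])\\)")
                .matcher(line);
        if (!m.find()) return null;
        double before = toMB(Double.parseDouble(m.group(1)), m.group(2));
        double nextTarget = toMB(Double.parseDouble(m.group(7)), m.group(8));
        return new double[]{before, nextTarget};
    }

    public static void main(String[] args) {
        String first  = "[Eden: 6592.0M(6592.0M)->0.0B(58.8G) Survivors: 96.0M->128.0M Heap: 30.6G(100.0G)->24.2G(100.0G)]";
        String second = "[Eden: 58.9G(58.8G)->0.0B(3456.0M) Survivors: 128.0M->1664.0M Heap: 84.1G(100.0G)->26.7G(100.0G)]";
        double[] a = parseEden(first), b = parseEden(second);
        // Shows the eden target growing from ~6.4 GB to ~58.8 GB, then shrinking back to ~3.4 GB.
        System.out.printf("pause 1: eden %.0f MB collected, next eden target %.0f MB%n", a[0], a[1]);
        System.out.printf("pause 2: eden %.0f MB collected, next eden target %.0f MB%n", b[0], b[1]);
    }
}
```

Running this over the two pauses shows the adaptive resizing directly: the "huge" young GC is simply the pause that had to evacuate the enlarged eden.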
【Comments】:
- Why only one GC worker? How many CPUs do you have?
- Can you post your GC options?
- @PeterLawrey Thanks! I have only one GC worker because I am trying to study GC behavior. I have 16 CPUs, 8 cores each.
- @the8472 Thanks! My GC options: "spark.executor.extraJavaOptions=-XX:+PrintFlagsFinal -XX:+PrintReferenceGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:+UseG1GC -XX:ParallelGCThreads=1"
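These options leave young-generation sizing entirely to G1's ergonomics, which is what allowed the eden target to grow to ~59 GB between pauses. One way to bound this (a sketch, not a recommendation verified on this workload; `G1NewSizePercent`/`G1MaxNewSizePercent` are experimental flags in HotSpot 8 and require unlocking) is to cap the young generation as a percentage of the heap:

```
spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:ParallelGCThreads=1 \
  -XX:+UnlockExperimentalVMOptions -XX:G1MaxNewSizePercent=10
```

With a 100 GB heap this would cap eden plus survivors at roughly 10 GB, trading some throughput for more predictable young-pause times.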
Tags: java apache-spark garbage-collection jvm g1gc