【Posted】: 2015-09-12 23:04:40
【Problem Description】:
I am trying to run a Pig script over roughly 30 million records, but it fails with the following heap-space error:
> ERROR 2998: Unhandled internal error. Java heap space
>
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2367)
> at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
> at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
> at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
> at java.lang.StringBuilder.append(StringBuilder.java:132)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.shiftStringByTabs(LogicalPlanPrinter.java:223)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:108)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:102)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:102)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:102)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:102)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:102)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:102)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:102)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirst(LogicalPlanPrinter.java:102)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirstLP(LogicalPlanPrinter.java:83)
> at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.visit(LogicalPlanPrinter.java:69)
> at org.apache.pig.newplan.logical.relational.LogicalPlan.getLogicalPlanString(LogicalPlan.java:148)
> at org.apache.pig.newplan.logical.relational.LogicalPlan.getSignature(LogicalPlan.java:133)
> at org.apache.pig.PigServer.execute(PigServer.java:1295)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> ================================================================================
I ran the same code on 10 million records and it worked fine.
So what are my options for avoiding the problem above?
Would compression help avoid the heap-space problem?
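In case it helps, this is roughly how I would turn on compression of Pig's intermediate (tmp) files. `pig.tmpfilecompression` and `pig.tmpfilecompression.codec` are standard Pig properties; `myscript.pig` is just a placeholder name, and since the trace above dies on the client while it is still printing the logical plan, I am not sure compressing job data would address this particular error:

```sh
# Sketch: enable compression of Pig's intermediate files via PIG_OPTS,
# which the bin/pig launcher passes to the client JVM as system properties.
export PIG_OPTS="-Dpig.tmpfilecompression=true -Dpig.tmpfilecompression.codec=gz"
pig myscript.pig   # placeholder script name
```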
I tried splitting the script into several smaller pieces, but I still get the error. And even if we increase the heap allocation, is there any guarantee that the same operations will keep working as the data volume grows?
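For context, the OutOfMemoryError in the trace is thrown inside LogicalPlanPrinter, which runs in the Pig client JVM before any MapReduce job is launched, so it is the client heap (not the mapper/reducer heap) that is exhausted here. A minimal sketch of raising it, assuming a default Pig installation (the 4096 MB figure is only an example):

```sh
# Sketch: raise the heap of the Pig client JVM, where LogicalPlanPrinter runs.
# PIG_HEAPSIZE is read by the bin/pig launcher and sets -Xmx in megabytes.
export PIG_HEAPSIZE=4096
pig myscript.pig   # placeholder script name
```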
【Question Comments】:
- Which mode are you running the code in, local or mapreduce? Just an initial thought.
- Could you try using mapreduce mode, and check whether the same problem occurs in cluster mode?
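For reference, a sketch of selecting the execution mode with Pig's `-x` flag (the script name is a placeholder):

```sh
# Sketch: -x selects where the script runs. "local" executes in a single
# JVM on the local machine; "mapreduce" submits jobs to the Hadoop cluster.
pig -x local myscript.pig       # local mode
pig -x mapreduce myscript.pig   # mapreduce (cluster) mode
```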
Tags: hadoop apache-pig heap-memory