【Title】:Negative Array Size Exception while inserting into Hive Bucketed Table
【Posted】:2016-06-30 06:57:06
【Question】:

I am trying to insert into a Hive bucketed, sorted table and am hitting a NegativeArraySizeException thrown by the reducers. The stack trace is below.

Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#3
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NegativeArraySizeException
    at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
    at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
    at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.<init>(InMemoryMapOutput.java:63)
    at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:305)
    at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:295)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:514)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:336)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)

My table DDL is (only a subset of columns is shown for readability; the actual DDL has 100 columns):

CREATE TABLE clustered_sorted_orc( conv_type string,
                                   multi_dim_id int,
                                   multi_key_id int,
                                   advertiser_id bigint,
                                   buy_id bigint,
                                   day timestamp)
PARTITIONED BY(job_instance_id int)
CLUSTERED BY(conv_type) SORTED BY (day) INTO 8 BUCKETS
STORED AS ORC;

The insert statement is:

FROM not_clustered_orc
INSERT OVERWRITE TABLE clustered_sorted_orc PARTITION(job_instance_id)
SELECT conv_type ,multi_dim_id ,multi_key_id ,advertiser_id,buy_id ,day, job_instance_id

The following Hive properties are set:

set hive.enforce.bucketing = true;
set hive.exec.dynamic.partition.mode=nonstrict;

Here is a log snippet from MergeManagerImpl, in case it helps; it shows ioSortFactor, mergeThreshold, etc.

2016-06-30 05:57:20,518 INFO [main] org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: MergerManager: memoryLimit=12828540928, maxSingleShuffleLimit=3207135232, mergeThreshold=8466837504, ioSortFactor=64, memToMemMergeOutputsThreshold=64

I am using CDH 5.7.1, Hive 1.1.0, Hadoop 2.6.0. Has anyone run into a similar issue? Any help is much appreciated.

【Comments】:

  • What if the sorting is done once, at insert time?

Tags: hadoop mapreduce hive hadoop2 cloudera-cdh


【Solution 1】:

It started working after setting:

hive.optimize.sort.dynamic.partition=true
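
For context, the full working sequence would look something like the sketch below. The table and column names are taken from the question; the property is a real Hive setting (available in Hive 1.x, including the Hive 1.1.0 in the question) that makes Hive sort records by the dynamic-partition columns before the writer stage, so each reducer keeps only one open writer instead of buffering all partitions in memory:

-- properties from the question
set hive.enforce.bucketing = true;
set hive.exec.dynamic.partition.mode = nonstrict;

-- the fix: sort by the dynamic-partition columns before writing,
-- reducing reducer-side memory pressure during the insert
set hive.optimize.sort.dynamic.partition = true;

-- then run the same insert as before
FROM not_clustered_orc
INSERT OVERWRITE TABLE clustered_sorted_orc PARTITION(job_instance_id)
SELECT conv_type, multi_dim_id, multi_key_id, advertiser_id, buy_id, day, job_instance_id;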

【Discussion】:

  • Could you elaborate on your answer? I have the same problem, but with Apache Crunch.