【Posted】: 2019-03-26 09:21:43
【Problem description】:
A long-running MapReduce job is already executing on my cluster. When I submit another job, it gets stuck at the point shown below, which suggests it is waiting for the currently running job to finish:
hive> select distinct(circle) from vf_final_table_orc_format1;
Query ID = hduser_20181022153503_335ffd89-1528-49be-b091-21213d702a03
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 10
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1539782606189_0033, Tracking URL = http://secondary:8088/proxy/application_1539782606189_0033/
Kill Command = /home/hduser/hadoop/bin/hadoop job -kill job_1539782606189_0033
I am currently running the MapReduce job over 166 GB of data. My setup consists of 7 nodes: 5 DataNodes, each with 32 GB RAM and 8.7 TB HDD, plus 1 NameNode and 1 Secondary NameNode, each with 32 GB RAM and 1.1 TB HDD.
Which settings do I need to tune so that jobs run in parallel? I am currently on Hadoop version 2.5.2.
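One common reason a second job sits in the ACCEPTED state on Hadoop 2.x is that the default CapacityScheduler caps the share of queue resources that ApplicationMasters may hold at 10%, which on a small cluster can prevent a second job's AM from even starting. A hedged sketch of the relevant setting in `capacity-scheduler.xml` (the value 0.5 is illustrative, not tuned for this cluster):

```xml
<!-- capacity-scheduler.xml: raise the ApplicationMaster resource cap so
     more than one AM can run concurrently. The Hadoop 2.x default is 0.1. -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>
```

Alternatively, switching the ResourceManager to the FairScheduler (by setting `yarn.resourcemanager.scheduler.class` to `org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler` in `yarn-site.xml`) makes concurrent jobs share cluster resources instead of queuing behind one another.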
EDIT: Right now my cluster is consuming only 8-10 GB of the 32 GB of RAM on each node. Other Hive queries and MR jobs get stuck waiting for the single running job to finish. How can I increase memory consumption so that more jobs execute in parallel? This is the current output of the ps command:
[hduser@secondary ~]$ ps -ef | grep -i runjar | grep -v grep
hduser 110398 1 0 Nov11 ? 00:07:15 /opt/jdk1.8.0_77//bin/java -Dproc_jar -Xmx1000m
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log -Dyarn.home.dir=
-Dyarn.id.str= -Dhadoop.root.logger=INFO,console -Dyarn.root.logger=INFO,console -Dyarn.policy.file=hadoop-policy.xml
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log
-Dyarn.home.dir=/home/hduser/hadoop -Dhadoop.home.dir=/home/hduser/hadoop
-Dhadoop.root.logger=INFO,console
-Dyarn.root.logger=INFO,console
-classpath /home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/share/hadoop/common/lib/*:/home/hduser/hadoop/share/hadoop/common/*:/home/hduser/hadoop/share/hadoop/hdfs:/home/hduser/hadoop/share/hadoop/hdfs/lib/*:/home/hduser/hadoop/share/hadoop/hdfs/*:/home/hduser/hadoop/share/hadoop/yarn/lib/*:/home/hduser/hadoop/share/hadoop/yarn/*:/home/hduser/hadoop/share/hadoop/mapreduce/lib/*:/home/hduser/hadoop/share/hadoop/mapreduce/*:/home/hduser/hadoop/contrib/capacity-scheduler/*.jar:/home/hduser/hadoop/share/hadoop/yarn/*:/home/hduser/hadoop/share/hadoop/yarn/lib/*
org.apache.hadoop.util.RunJar abc.jar def.mydriver2 /raw_data /mr_output/
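One plausible explanation for the 8-10 GB ceiling (an assumption, since the cluster's `yarn-site.xml` is not shown) is that `yarn.nodemanager.resource.memory-mb` was left at its Hadoop 2.x default of 8192 MB, so each NodeManager advertises only 8 GB to containers regardless of the 32 GB installed. A hedged `yarn-site.xml` sketch; 24576 MB is an illustrative value that leaves headroom for the OS and the DataNode process:

```xml
<!-- yarn-site.xml: advertise more of each node's 32 GB to YARN containers.
     The Hadoop 2.x default is 8192 MB. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>
<!-- Upper bound a single container may request; keep it below the
     NodeManager total so several containers fit per node. -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
```

After changing these values, the NodeManagers need to be restarted for the new capacity to take effect.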
【Discussion】:
Tags: hadoop hive mapreduce hadoop-yarn hadoop2