【Title】: hadoop multinode cluster - slave nodes failed to perform mapreduce task
【Posted】: 2014-04-09 22:22:05
【Question】:

I'm new to Hadoop. I tried to set up a Hadoop (version 1.2.1) cluster (1 master and 5 slaves) following Michael Noll's post: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/

Everything seemed fine until I ran a wordcount job on the cluster. When I start the cluster by running the following command on the master:

hadoop/start-all.sh

the jps output looks correct:

On the master:

li@master:~$ jps
12839 TaskTracker
11814 NameNode
12535 JobTracker
25131 Jps
12118 DataNode
12421 SecondaryNameNode

On the 5 slave nodes:

li@slave1:~/hadoop/logs$ jps
4605 TaskTracker
19407 Jps
4388 DataNode

When I run the stop command on the master:

hadoop/stop-all.sh

jps shows nothing on either the master or the slave nodes.

But when I run the wordcount job on the cluster, I don't think the cluster is working properly. The task logs on the slave nodes don't match what Michael Noll got in his post. It looks like the job was executed only on the master; the other 5 slave nodes were not assigned any map/reduce tasks. Here are some log files:

Console output on the master:

hadoop jar hadoop-examples-1.2.1.jar wordcount /user/li/gutenberg /user/li/gutenberg-output
14/03/06 17:11:09 INFO input.FileInputFormat: Total input paths to process : 7
14/03/06 17:11:09 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/06 17:11:09 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/06 17:11:10 INFO mapred.JobClient: Running job: job_201402211607_0014
14/03/06 17:11:11 INFO mapred.JobClient:  map 0% reduce 0%
14/03/06 17:11:17 INFO mapred.JobClient:  map 14% reduce 0%
14/03/06 17:11:19 INFO mapred.JobClient:  map 57% reduce 0%
14/03/06 17:11:20 INFO mapred.JobClient:  map 85% reduce 0%
14/03/06 17:11:21 INFO mapred.JobClient:  map 100% reduce 0%
14/03/06 17:11:24 INFO mapred.JobClient:  map 100% reduce 33%
14/03/06 17:11:27 INFO mapred.JobClient:  map 100% reduce 100%
14/03/06 17:11:28 INFO mapred.JobClient: Job complete: job_201402211607_0014
14/03/06 17:11:28 INFO mapred.JobClient: Counters: 30
14/03/06 17:11:28 INFO mapred.JobClient:   Job Counters 
14/03/06 17:11:28 INFO mapred.JobClient:     Launched reduce tasks=1
14/03/06 17:11:28 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=38126
14/03/06 17:11:28 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/06 17:11:28 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/03/06 17:11:28 INFO mapred.JobClient:     Rack-local map tasks=2
14/03/06 17:11:28 INFO mapred.JobClient:     Launched map tasks=7
14/03/06 17:11:28 INFO mapred.JobClient:     Data-local map tasks=5
14/03/06 17:11:28 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=9825
14/03/06 17:11:28 INFO mapred.JobClient:   File Output Format Counters 
14/03/06 17:11:28 INFO mapred.JobClient:     Bytes Written=1412505
14/03/06 17:11:28 INFO mapred.JobClient:   FileSystemCounters
14/03/06 17:11:28 INFO mapred.JobClient:     FILE_BYTES_READ=4462568
14/03/06 17:11:28 INFO mapred.JobClient:     HDFS_BYTES_READ=6950792
14/03/06 17:11:28 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=7810309
14/03/06 17:11:28 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=1412505
14/03/06 17:11:28 INFO mapred.JobClient:   File Input Format Counters 
14/03/06 17:11:28 INFO mapred.JobClient:     Bytes Read=6950001
14/03/06 17:11:28 INFO mapred.JobClient:   Map-Reduce Framework
14/03/06 17:11:28 INFO mapred.JobClient:     Map output materialized bytes=2915072
14/03/06 17:11:28 INFO mapred.JobClient:     Map input records=137146
14/03/06 17:11:28 INFO mapred.JobClient:     Reduce shuffle bytes=2915072
14/03/06 17:11:28 INFO mapred.JobClient:     Spilled Records=507858
14/03/06 17:11:28 INFO mapred.JobClient:     Map output bytes=11435849
14/03/06 17:11:28 INFO mapred.JobClient:     Total committed heap usage (bytes)=1195069440
14/03/06 17:11:28 INFO mapred.JobClient:     CPU time spent (ms)=16520
14/03/06 17:11:28 INFO mapred.JobClient:     Combine input records=1174991
14/03/06 17:11:28 INFO mapred.JobClient:     SPLIT_RAW_BYTES=791
14/03/06 17:11:28 INFO mapred.JobClient:     Reduce input records=201010
14/03/06 17:11:28 INFO mapred.JobClient:     Reduce input groups=128513
14/03/06 17:11:28 INFO mapred.JobClient:     Combine output records=201010
14/03/06 17:11:28 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1252454400
14/03/06 17:11:28 INFO mapred.JobClient:     Reduce output records=128513
14/03/06 17:11:28 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=4080599040
14/03/06 17:11:28 INFO mapred.JobClient:     Map output records=1174991

TaskTracker log on slave1:

li@slave1:~/hadoop/logs$ cat hadoop-li-tasktracker-slave1.log
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201402211607_0014_m_000003_0 task's state:UNASSIGNED
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201402211607_0014_m_000004_0 task's state:UNASSIGNED
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201402211607_0014_m_000003_0 which needs 1 slots
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201402211607_0014_m_000003_0 which needs 1 slots
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201402211607_0014_m_000004_0 which needs 1 slots
2014-03-06 17:11:46,336 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_201402211607_0014_m_000004_0 which needs 1 slots
2014-03-06 17:11:46,394 INFO org.apache.hadoop.mapred.JobLocalizer: Initializing user li on this TT.
2014-03-06 17:11:46,544 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201402211607_0014_m_-862426792
2014-03-06 17:11:46,544 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201402211607_0014_m_-862426792 spawned.
2014-03-06 17:11:46,545 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201402211607_0014_m_-696634639
2014-03-06 17:11:46,547 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201402211607_0014_m_-696634639 spawned.
2014-03-06 17:11:46,549 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /home/li/hdfstmp/mapred/local/ttprivate/taskTracker/li/jobcache/job_201402211607_0014/attempt_201402211607_0014_m_000003_0/taskjvm.sh
2014-03-06 17:11:46,551 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /home/li/hdfstmp/mapred/local/ttprivate/taskTracker/li/jobcache/job_201402211607_0014/attempt_201402211607_0014_m_000004_0/taskjvm.sh
2014-03-06 17:11:48,382 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201402211607_0014_m_-862426792 given task: attempt_201402211607_0014_m_000003_0
2014-03-06 17:11:48,383 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201402211607_0014_m_-696634639 given task: attempt_201402211607_0014_m_000004_0
2014-03-06 17:11:51,457 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201402211607_0014_m_000004_0 1.0% 
2014-03-06 17:11:51,459 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201402211607_0014_m_000004_0 is done.
2014-03-06 17:11:51,460 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201402211607_0014_m_000004_0  was 217654
2014-03-06 17:11:51,460 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 1
2014-03-06 17:11:51,470 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201402211607_0014_m_000003_0 1.0% 
2014-03-06 17:11:51,472 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201402211607_0014_m_000003_0 is done.
2014-03-06 17:11:51,472 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201402211607_0014_m_000003_0  was 267026
2014-03-06 17:11:51,473 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 2
2014-03-06 17:11:51,628 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201402211607_0014_m_-696634639 exited with exit code 0. Number of tasks it ran: 1
2014-03-06 17:11:51,631 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201402211607_0014_m_-862426792 exited with exit code 0. Number of tasks it ran: 1
2014-03-06 17:11:56,052 INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 192.168.1.111:50060, dest: 192.168.1.116:47652, bytes: 267026, op: MAPRED_SHUFFLE, cliID: attempt_201402211607_0014_m_000003_0, duration: 47537998
2014-03-06 17:11:56,076 INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 192.168.1.111:50060, dest: 192.168.1.116:47652, bytes: 217654, op: MAPRED_SHUFFLE, cliID: attempt_201402211607_0014_m_000004_0, duration: 15832312
2014-03-06 17:12:02,319 INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: job_201402211607_0014
2014-03-06 17:12:02,320 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201402211607_0014 for user-log deletion with retainTimeStamp:1394233922320

TaskTracker log on slave2:

2014-03-06 17:12:06,293 INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: job_201402211607_0014
2014-03-06 17:12:06,293 WARN org.apache.hadoop.mapred.TaskTracker: Unknown job job_201402211607_0014 being deleted.

slave4 and slave6 have the same kind of task log as slave1. slave3's task log is like slave2's, with only 2 lines.

My questions:

1. Why did the 5 slave nodes not get tasks assigned?
2. Why do slave2 and slave3 have different task logs from slave1, slave4, and slave6, when I applied the same configuration to all of them?
3. Is this a multi-node configuration problem? How can I solve it?

【Comments】:

  • Try giving wordcount a much larger input file (on the order of GBs).

Tags: java hadoop configuration cluster-computing


【Solution 1】:

It looks like each of your task nodes has 2 map slots:

2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201402211607_0014_m_000003_0 which needs 1 slots

The JobTracker is aware of this and decides to pack as many tasks as possible onto a single node rather than spreading them across as many nodes as possible. This is likely done for locality reasons (to minimize network traffic).

  1. That's why you end up with two idle nodes: 5 tasks can fit on just three nodes when each node has only two slots (ceil(5/2.0) = 3).

  2. Your logs will differ depending on which tasks ran on a particular node. So when you run a job on the cluster, expect the logs to diverge quickly; they will not be distributed uniformly across the nodes.

  3. This uneven distribution does not indicate a problem; it is normal behavior for your cluster. Keep in mind that Hadoop is generally designed for batch work, which means clusters are typically heavily used and run many jobs at once, so you don't end up with idle nodes even when one particular job doesn't run on all of them.
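The slot-packing arithmetic in point 1 can be sketched as follows; `min_nodes_needed` is an illustrative helper, not a Hadoop API:

```python
import math

def min_nodes_needed(num_tasks: int, slots_per_node: int) -> int:
    """Smallest number of TaskTracker nodes that can hold all tasks
    when the scheduler packs each node up to its slot capacity."""
    return math.ceil(num_tasks / slots_per_node)

# 5 map tasks, 2 map slots per node -> only 3 nodes are needed,
# so with 6 TaskTrackers some nodes legitimately stay idle.
print(min_nodes_needed(5, 2))  # -> 3
```

With 7 map tasks and 2 slots per node the same arithmetic gives 4 nodes, still fewer than the 6 TaskTrackers in this cluster, so idle slaves are expected either way.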

A final note: in this particular case you seem to be getting different behavior from the tutorial you followed, probably because you are running on AWS (using Elastic MapReduce). Apparently EMR has a custom scheduler that makes these mapping decisions (how many slots to allocate per node, and how to distribute tasks across them) on its own, without you being able to configure it. More details in this answer: Hadoop: number of available map slots based on cluster size.
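If you are on stock Hadoop 1.x (not EMR's custom scheduler) and want to force the work to spread over more nodes for experimentation, one knob is the per-TaskTracker map slot count in mapred-site.xml on each slave; this is a sketch, and the TaskTrackers must be restarted for it to take effect:

```xml
<!-- mapred-site.xml on each TaskTracker node (Hadoop 1.x) -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <!-- one map slot per node makes the JobTracker use more nodes -->
  <value>1</value>
</property>
```

Lowering slots per node trades cluster throughput for wider task distribution, so it is mainly useful for verifying that all slaves can run tasks, not as a production setting.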

【Discussion】:
