[Posted]: 2013-11-26 13:34:00
[Question]:
I'm trying to run a MapReduce job on Hadoop using Streaming. I have two Ruby scripts, wcmapper.rb and wcreducer.rb, and I'm attempting to run the job as follows:
hadoop jar hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar -file wcmapper.rb -mapper wcmapper.rb -file wcreducer.rb -reducer wcreducer.rb -input test.txt -output output
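For context, the mapper is a standard streaming word-count script along these lines (a sketch from memory, not the exact file). Note the shebang: streaming executes the file directly, so it needs a valid interpreter line and the execute bit, and a wrong interpreter path (or DOS line endings after the shebang) can also surface as `error=2, No such file or directory`.

```ruby
#!/usr/bin/env ruby
# wcmapper.rb (sketch): emit "word<TAB>1" for every word on stdin.
# The shebang matters: Hadoop streaming runs this file directly, so it
# must name a valid interpreter and the script must be executable.

def map_line(line)
  line.split.map { |word| "#{word}\t1" }
end

STDIN.each_line { |line| puts map_line(line) }
```

wcreducer.rb is the matching reducer: it reads the tab-separated `word\t1` pairs from stdin and sums the counts per word.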
This results in the following error messages on the console:
13/11/26 12:54:07 INFO streaming.StreamJob: map 0% reduce 0%
13/11/26 12:54:36 INFO streaming.StreamJob: map 100% reduce 100%
13/11/26 12:54:36 INFO streaming.StreamJob: To kill this job, run:
13/11/26 12:54:36 INFO streaming.StreamJob: /home/paul/bin/hadoop-1.2.1/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201311261104_0009
13/11/26 12:54:36 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201311261104_0009
13/11/26 12:54:36 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201311261104_0009_m_000000
13/11/26 12:54:36 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Looking at the failed attempts for any of the tasks shows:
java.io.IOException: Cannot run program "/var/lib/hadoop/mapred/local/taskTracker/paul/jobcache/job_201311261104_0010/attempt_201311261104_0010_m_000001_3/work/./wcmapper.rb": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1042)
I understand that Hadoop needs to copy the mapper and reducer scripts so they are available to all the nodes, and I believe this is the purpose of the -file argument. However, the scripts don't appear to be copied to the location where Hadoop expects to find them. The console does suggest, I think, that they are being packaged:
packageJobJar: [wcmapper.rb, wcreducer.rb, /var/lib/hadoop/hadoop-unjar3547645655567272034/] [] /tmp/streamjob3978604690657430710.jar tmpDir=null
I also tried the following:
hadoop jar hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar -files wcmapper.rb,wcreducer.rb -mapper wcmapper.rb -reducer wcreducer.rb -input test.txt -output output
but this gives the same error.
Can anyone tell me what the problem is?
Or where to look to diagnose it further?
Many thanks
Paul
[Comments]: