【Posted】:2014-02-06 13:02:41
【Problem Description】:
I am running a single-node Hadoop environment. I have a MapReduce job that computes averages of some monitoring data over specific time periods, for example hourly averages. The job writes its output to a path inside HDFS, and that path is cleaned up each time before the job runs. This worked for a month. Yesterday, while the job was running, I got an exception from the JobClient saying:
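For context, the output-path cleanup happens in the job driver, roughly like the minimal sketch below. The class name HourlyAverageDriver and the input path are placeholders, not my exact code; the output path /user/root/out1 is the one from the error:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HourlyAverageDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path("/user/root/monitoring");  // placeholder input path
        Path output = new Path("/user/root/out1");       // output path from the error above

        // Delete the previous output directory before each run, as described above.
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(output)) {
            fs.delete(output, true);  // true = recursive
        }

        Job job = new Job(conf, "hourly-average");
        job.setJarByClass(HourlyAverageDriver.class);
        // mapper/reducer classes omitted in this sketch
        FileInputFormat.addInputPath(job, input);
        FileOutputFormat.setOutputPath(job, output);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}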
File /user/root/out1/_temporary/_attempt_201401141113_0007_r_000000_0/hi/130-r-00000 could only be replicated to 0 nodes, instead of 1
The full stack trace follows:
..........
14/01/17 12:00:09 INFO mapred.JobClient: map 100% reduce 32%
14/01/17 12:00:12 INFO mapred.JobClient: map 100% reduce 74%
14/01/17 12:00:17 INFO mapred.JobClient: Task Id : attempt_201401141113_0007_r_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/out1/_temporary/_attempt_201401141113_0007_r_000000_0/hi/130-r-00000 could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy2.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy2.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
Initial googling suggests this is a storage space problem. But I don't think so, because my entire input data should be less than 600 MB and there is about 1.5 GB of free space on the node. I ran the hadoop dfsadmin -report command and it returned the following:
$hadoop dfsadmin -report
Configured Capacity: 11353194496 (10.57 GB)
Present Capacity: 2354425856 (2.19 GB)
DFS Remaining: 1633726464 (1.52 GB)
DFS Used: 720699392 (687.31 MB)
DFS Used%: 30.61%
Under replicated blocks: 49
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 192.168.1.149:50010
Decommission Status : Normal
Configured Capacity: 11353194496 (10.57 GB)
DFS Used: 720699392 (687.31 MB)
Non DFS Used: 8998768640 (8.38 GB)
DFS Remaining: 1633726464(1.52 GB)
DFS Used%: 6.35%
DFS Remaining%: 14.39%
Last contact: Fri Jan 17 04:36:55 GMT+05:30 2014
Please give me a solution. Could this be a configuration problem? I don't know much about Hadoop configuration. Please help.
【Comments】:
- This may not solve your problem, but it looks like you are using too many replicas. If you only have one node, your files should have only one replica each. The line "Under replicated blocks: 49" in your output means 49 blocks are under-replicated, which will be a problem because there are no additional nodes to replicate them to.
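For example, a minimal sketch of dropping the replication factor to 1 in the job driver, assuming the standard dfs.replication property (whether this resolves the original failure is not confirmed):

import org.apache.hadoop.conf.Configuration;

public class SingleReplicaConf {
    public static void main(String[] args) {
        // Request a single replica for files written with this configuration,
        // which matches a single-datanode cluster. The cluster-wide equivalent is
        // setting dfs.replication to 1 in hdfs-site.xml and restarting HDFS.
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 1);
        System.out.println("dfs.replication = " + conf.get("dfs.replication"));
    }
}

Files that are already under-replicated can also be brought down to one replica from the shell with hadoop fs -setrep -R 1 /user/root (adjust the path as needed).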