【Title】: Hadoop MapReduce error: Mkdirs failed to create file; job failed
【Posted】: 2016-03-12 14:13:04
【Description】:

I am trying to run the C4.5 algorithm on Hadoop, but I have hit the following error and am stuck. I have all the required permissions. Can anyone help me?

Java.lang.Exception: java.io.IOException: Mkdirs failed to create file:/usr/local/hadoop/1/output10/_temporary/0/_temporary/attempt_local960306821_0001_r_000000_0 (exists=false, cwd=file:/home/brina/workspace/C4.5Hadoop)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException: Mkdirs failed to create file:/usr/local/hadoop/1/output10/_temporary/0/_temporary/attempt_local960306821_0001_r_000000_0 (exists=false, cwd=file:/home/brina/workspace/C4.5Hadoop)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:442)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:428)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:801)
    at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
    at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.<init>(ReduceTask.java:484)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:414)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-03-12 19:08:04,332 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1386)) - Job job_local960306821_0001 failed with state FAILED due to: NA
2016-03-12 19:08:04,492 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1391)) - Counters: 33
    File System Counters
        FILE: Number of bytes read=523
        FILE: Number of bytes written=249822
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=14
        Map output records=56
        Map output bytes=863
        Map output materialized bytes=981
        Input split bytes=93
        Combine input records=0
        Combine output records=0
        Reduce input groups=0
        Reduce shuffle bytes=981
        Reduce input records=0
        Reduce output records=0
        Spilled Records=56
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=188743680
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=374
    File Output Format Counters 
        Bytes Written=0



Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
    at C45.run(C45.java:192)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at C45.main(C45.java:53)

【Comments】:

  • Do you have write permission on the node? The path looks local, not in HDFS.
  • That did it! Thank you so much! :)
  • I changed my file permissions and it worked! :)

Tags: java eclipse hadoop mapreduce


【Solution 1】:

(Copied from the comments, in case anyone else runs into this)

Based on the log line

Mkdirs failed to create file:/usr/local/hadoop/1/output10/_temporary/0/_temporary/attempt_local960306821_0001_r_000000_0 (exists=false, cwd=file:/home/brina/workspace/C4.5Hadoop)

the problem is not in HDFS but on the local filesystem, so you need to fix the permissions so the user running the job can write to that directory on the node.
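One way to confirm this is to reproduce by hand the directory creation the reducer attempts, as the same user that runs the job. A minimal sketch, with the caveat that `OUT_PARENT` below is a stand-in demo path: on the real machine substitute the parent from the log, `/usr/local/hadoop/1`.

```shell
# Reproduce the reducer's mkdir as the job's user. OUT_PARENT is a
# stand-in; on the real node use /usr/local/hadoop/1 (from the log).
OUT_PARENT="${OUT_PARENT:-/tmp/mkdirs-demo}"

if mkdir -p "$OUT_PARENT/output10/_temporary/0"; then
    echo "writable: $OUT_PARENT"
else
    # This is exactly the failure the job hits; give the job's user
    # ownership of the directory, e.g.:
    #   sudo chown -R "$(whoami)" "$OUT_PARENT"
    echo "not writable: $OUT_PARENT"
fi
```

If the manual `mkdir -p` fails with "Permission denied", the MapReduce job will fail the same way, since `LocalJobRunner` writes its `_temporary` attempt directories through the local filesystem rather than HDFS.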

【Discussion】:

  • I have the same problem: java.io.IOException: Mkdirs failed to create file:/home/hdfs/ukdata_march_april/output/_temporary/0/_temporary/attempt_201704241443_0000_m_000000_6 (exists=false, cwd=file:/yarn/nm/usercache/hdfs/appcache/application_1491878697968_0252/container_1491878697968_0252_01_000002). No idea what is going on..
【Solution 2】:

I ran into this problem too, and I solved it by running

$ sudo chown -R user:group /usr

after which the file could be created.

【Discussion】:
