[Question Title]: java.io.IOException: Mkdirs failed to create when running MapReduce job
[Posted]: 2017-06-01 23:42:01
[Question]:

I am trying to run a simple MapReduce job to load data into HBase, but it fails to run. Here is the error stack trace:

Exception in thread "main" java.io.IOException: Mkdirs failed to create /user/SOME_PATH/hbase-staging (exists=false, cwd=file:/Users/SOME_PATH/2ND_PATH/HFileIntoHBase)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
    at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1071)
    at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1371)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:272)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:294)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:335)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:596)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:440)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:405)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:367)

Here is my Java code:

public int run(String[] arg0) throws Exception {

        Configuration conf = new Configuration();
        conf.set(MAPRED_JOB_NAME, "steve_test");
        conf.set(HBASE_TABLE, "steve1");
        Job job = new Job(conf, conf.get(MAPRED_JOB_NAME));
        String output_table = conf.get(HBASE_TABLE);

        job.setJarByClass(PutUrlIntoHbase.class);
        job.setMapperClass(PutUrlIntoHbaseMapper.class);
        job.setReducerClass(PutSortReducer.class);

        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);

        HTable table = new HTable(conf, output_table);
        job.setOutputFormatClass(HFileOutputFormat2.class);
        HFileOutputFormat2.configureIncrementalLoad(job, table);

        if (job.waitForCompletion(true) && job.isSuccessful()) {
            return 0;
        }
        return -1;
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        int res = ToolRunner.run(conf, new PutUrlIntoHbase(), args);
        System.exit(res);
    }

As suggested in some other similar posts, I have verified that I have permission to run mkdir in this directory.

My machine is Mac OS X 10.11.6.

Please help!

Thanks!

[Comments]:

Tags: java macos hadoop mapreduce hbase


[Solution 1]:

This is a very old question, but in case anyone stumbles upon it: note the `cwd=file:/...` in the exception, which shows the staging path is being resolved against the local filesystem. You can work around the failure by manually setting the staging directory to an existing, writable directory, like so:

    conf.set("hbase.fs.tmp.dir", "/some/other/staging/directory");
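
    For context, a minimal sketch of where such an override could fit in the asker's job setup. The property name `hbase.fs.tmp.dir` comes from the answer above; the path `/tmp/hbase-staging` and the surrounding code are illustrative assumptions, not a verified fix:

    ```java
    // Sketch only: override the HBase staging directory before the job is
    // configured. "/tmp/hbase-staging" is an illustrative path; any existing,
    // writable directory on the job's default filesystem should do.
    Configuration conf = new Configuration();
    conf.set("hbase.fs.tmp.dir", "/tmp/hbase-staging");

    Job job = new Job(conf, "steve_test");
    // ... mapper/reducer/output setup as in the question ...
    HFileOutputFormat2.configureIncrementalLoad(job, table);
    ```

    Per the stack trace, the `Mkdirs failed` error is thrown from inside `HFileOutputFormat2.configureIncrementalLoad` (via `writePartitions`), so the property has to be set on the configuration before that call.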

[Discussion]:
