【Title】: "java.io.IOException: Pass a Delete or a Put" when reading from HDFS and writing to HBase
【Posted】: 2014-03-20 04:06:03
【Question description】:

This error has been driving me crazy for a week. There is a post with the same problem, "Pass a Delete or a Put error in hbase mapreduce", but its resolution does not work for me.

My driver:

    Configuration conf = HBaseConfiguration.create();
    Job job;
    try {
        job = new Job(conf, "Training");
        job.setJarByClass(TrainingDriver.class);
        job.setMapperClass(TrainingMapper.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path("my/path"));
        Scan scan = new Scan();
        scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCacheBlocks(false);  // don't set to true for MR jobs
        // set other scan attrs
        TableMapReduceUtil.initTableReducerJob(Constants.PREFIX_TABLE,
                TrainingReducer.class, job);
        job.setReducerClass(TrainingReducer.class);
        //job.setNumReduceTasks(1);   // at least one, adjust as required
        try {
            job.waitForCompletion(true);
        } catch (ClassNotFoundException | InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }

My mapper:

public class TrainingMapper extends
        Mapper<LongWritable, Text, LongWritable, Text> {

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(key, new Text(generateNewText()));
    }
}

My reducer:

public class TrainingReducer extends TableReducer<LongWritable, Text, ImmutableBytesWritable> {

    public void reduce(LongWritable key, Iterator<Text> values, Context context)
            throws IOException {
        while (values.hasNext()) {
            try {
                Put put = new Put(Bytes.toBytes(key.toString()));
                put.add("cf1".getBytes(), "c1".getBytes(), values.next().getBytes());
                context.write(null, put);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}

Do you have any experience with this? Please tell me how to fix it.

【Question discussion】:

    Tags: java mapreduce hbase hdfs put


    【Solution 1】:

    I found the solution myself.

    Add the @Override annotation before my reduce function and change the second parameter of the reduce function, like so: @Override public void reduce(LongWritable key, Iterable<Text> values, Context context)
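    For reference, the corrected reducer would then look roughly like this (a sketch based on the code in the question, reusing its `cf1`/`c1` column family and qualifier; it still needs the Hadoop and HBase jars on the classpath to compile and run):

    ```java
    // Fixed reducer: the second parameter must be Iterable<Text>, not Iterator<Text>,
    // so that this method actually overrides TableReducer.reduce(). @Override makes
    // the compiler reject the signature if it does not match.
    public class TrainingReducer extends TableReducer<LongWritable, Text, ImmutableBytesWritable> {

        @Override
        public void reduce(LongWritable key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text value : values) {
                Put put = new Put(Bytes.toBytes(key.toString()));
                // Same column family/qualifier as in the question
                put.add(Bytes.toBytes("cf1"), Bytes.toBytes("c1"), Bytes.toBytes(value.toString()));
                context.write(null, put);
            }
        }
    }
    ```

    This also explains the original exception: because `reduce(LongWritable, Iterator<Text>, Context)` never overrides the framework's `reduce(LongWritable, Iterable<Text>, Context)`, the default identity reduce runs instead and writes the raw Text values to the table output format, which accepts only Put or Delete mutations — hence "Pass a Delete or a Put".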

    【Discussion】:
