Title: Hadoop jar command error for multiple mapper inputs and 1 reducer output (joining 2 values from 2 files)
Asked: 2014-10-13 03:18:03
Question:

Here is my sample program for joining two datasets. The program has two mappers and one reducer; the reducer joins the values it receives from the two mappers, each of which reads a different input file.

I am getting an error from the hadoop jar command.

Command:

hadoop jar /home/rahul/Downloads/testjars/datajoin.jar DataJoin /user/rahul/cust.txt /user/rahul/delivery.txt /user/rahul/output

Error: Invalid number of arguments DataJoin

It seems to accept only 1 input path and 1 output path, whereas my command passes 2 inputs (one per mapper) and 1 output.

Can anyone help me?

Code:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class DataJoin {

    public static class TokenizerMapper1 extends Mapper<Object, Text, Text, Text> {

        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {

            String itr[] = value.toString().split("::");
            word.set(itr[0].trim());
            context.write(word, new Text("CD~" + itr[1]));
        }
    }

    public static class TokenizerMapper2 extends Mapper<Object, Text, Text, Text> {

        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {

            String itr[] = value.toString().split("::");
            word.set(itr[0].trim());
            context.write(word, new Text("DD~" + itr[1]));
        }
    }

    public static class IntSumReducer extends Reducer<Text, Text, Text, Text> {
        private Text result = new Text();

        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            String sum = "";
            for (Text val : values) {
                sum += val.toString();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: DataJoin <input1> <input2> <output>");
            System.exit(2);
        }
        Job job = new Job(conf, "Data Join");
        job.setJarByClass(DataJoin.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        MultipleInputs.addInputPath(job, new Path(otherArgs[0]),
                TextInputFormat.class, TokenizerMapper1.class);
        MultipleInputs.addInputPath(job, new Path(otherArgs[1]),
                TextInputFormat.class, TokenizerMapper2.class);
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[2]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
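The tagging scheme the two mappers use can be illustrated without a cluster. This sketch (using made-up one-record inputs) reproduces the mapper and reducer logic in plain Java: each mapper splits a "key::value" line and tags the value with its source ("CD~" for the customer file, "DD~" for the delivery file), and the reducer concatenates all tagged values sharing a key. Note that in a real job the order of values reaching the reducer is not guaranteed.

```java
import java.util.*;

public class JoinSketch {
    public static void main(String[] args) {
        // Hypothetical one-record inputs in the "key::value" format the mappers expect.
        String custLine = "101::Alice";      // from cust.txt     -> tagged CD~
        String deliveryLine = "101::Mumbai"; // from delivery.txt -> tagged DD~

        // Mapper logic: split on "::", tag the value, group by key (the shuffle).
        Map<String, List<String>> shuffled = new HashMap<>();
        String[] c = custLine.split("::");
        shuffled.computeIfAbsent(c[0].trim(), k -> new ArrayList<>()).add("CD~" + c[1]);
        String[] d = deliveryLine.split("::");
        shuffled.computeIfAbsent(d[0].trim(), k -> new ArrayList<>()).add("DD~" + d[1]);

        // Reducer logic: concatenate every tagged value for the key.
        for (Map.Entry<String, List<String>> e : shuffled.entrySet()) {
            String sum = String.join("", e.getValue());
            System.out.println(e.getKey() + "\t" + sum);
        }
    }
}
```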

Comments:

  • Format your code and ask a better question

Tags: java mapreduce


Solution 1:

You have a bug in this part:

if (otherArgs.length != 2) {
   System.err.println("Usage: DataJoin <input1> <input2> <output>");
   System.exit(2);
}

Your argument count is 3: two inputs and one output.

`otherArgs.length` counts the arguments starting from 1 (so it is 3 here), while array indices start from 0 (`otherArgs[0]` through `otherArgs[2]`).

Change it to:

if (otherArgs.length != 3) {
   System.err.println("Usage: DataJoin <input1> <input2> <output>");
   System.exit(2);
}
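The difference between the count and the indices can be seen in a minimal sketch (the paths below are the placeholder values from the question's command line):

```java
public class ArgCountDemo {
    public static void main(String[] rawArgs) {
        // Simulate the three args left over after GenericOptionsParser:
        // two input paths and one output path.
        String[] otherArgs = {"/user/rahul/cust.txt",
                              "/user/rahul/delivery.txt",
                              "/user/rahul/output"};

        // length counts the elements, so it is 3 ...
        System.out.println("length = " + otherArgs.length);
        // ... while indices run 0..2, so the output path is otherArgs[2].
        System.out.println("output = " + otherArgs[2]);
    }
}
```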

That solves your problem.

Comments:
