【Question title】: Mapreduce to hbase output stuck at map 100% reduce 100%
【Posted】: 2016-11-22 13:30:45
【Description】:

I am running a MapReduce job that reads files from HDFS and writes to HBase.

I have simplified the process. This is the source code:

public class WriteHBaseDriver extends Configured implements Tool {

    private static Configuration conf = null;

    public static void main(String[] args) {
        try {
            int exitCode = ToolRunner.run(new Configuration(), new WriteHBaseDriver(), args);
            System.exit(exitCode);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public int run(String[] arg0) throws Exception {
        conf = HBaseConfig.getConfiguration();

        Job job = Job.getInstance(conf, WriteHBaseDriver.class.getSimpleName());
        job.setJarByClass(WriteHBaseDriver.class);
        job.setMapperClass(WriteHBaseMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputFormatClass(TableOutputFormat.class);
        job.setReducerClass(WriteHBaseReducer.class);
        job.setNumReduceTasks(1);
        job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "NAMESPACE_NAME:TABLE_NAME");
        FileInputFormat.addInputPath(job, new Path("/user/myuser/data/input/"));

        // Propagate the job result instead of always returning 0.
        return job.waitForCompletion(true) ? 0 : 1;
    }

    // A nested mapper must be static so the framework can instantiate it.
    public static class WriteHBaseMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        public void map(LongWritable offset, Text record, Context context) throws IOException {
            try {
                context.write(new Text("key"), new IntWritable(1));
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}



public class WriteHBaseReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {

        Put put = new Put(Bytes.toBytes(new Date().getTime()));
        String family = "M";
        String qualifier = "D";
        put.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes.toBytes("value"));

        context.write(new ImmutableBytesWritable(Bytes.toBytes("NAMESPACE_NAME:TABLE_NAME")), put);
    }
}

The cluster is a recently installed Cloudera cluster, CDH 5.9.0, with one master node and four region servers. A single ZooKeeper server is installed on the master node.

When the job is launched with hadoop jar, everything appears to run fine. But once it reaches map 100% and reduce 100%, it hangs and nothing is written to HBase.

No failure message is shown; the only error message I can find is:

    2016-11-21 12:52:23,584 INFO org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=mdmtsthfs1.corp.ute.com.uy,60020,1479743178098, exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on mdmtsthfs1.corp.ute.com.uy,60020,1479743524683
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2921)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1053)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1333)
        at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22233)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)

I don't even know whether it is related.

What am I missing here?

Found this error trace from ZooKeeper:

    2016-11-22 15:21:02,882 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-22 15:21:02,883 WARN [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2016-11-22 15:21:02,983 ERROR [main] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2016-11-22 15:21:02,983 WARN [main] org.apache.hadoop.hbase.zookeeper.ZKUtil: hconnection-0x4e90b4f40x0, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:623)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:479)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:165)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:597)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:577)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:556)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1195)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1179)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1365)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1199)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:395)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:344)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:163)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:120)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2016-11-22 15:21:02,984 ERROR [main] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: hconnection-0x4e90b4f40x0, quorum=localhost:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:623)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:479)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:165)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:597)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:577)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:556)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1195)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1179)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1365)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1199)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:395)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:344)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:163)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:120)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

【Comments】:

  • Do the sample MapReduce jobs work? Check whether enough memory is available.
  • Any MapReduce job that writes to HDFS works, both the MapReduce examples and our own jobs.
  • Only the jobs that try to write to HBase get stuck at map 100% reduce 100%.
  • Make sure you are using the API version that matches the HBase version running in the cluster.

Tags: hadoop mapreduce hbase cloudera


【Solution 1】:

Solved.

I had to set the ZooKeeper quorum and the ZooKeeper client port:

    conf.set("hbase.zookeeper.quorum",<ips list>);
    conf.set("hbase.zookeeper.property.clientPort",<port>);
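
For reference, the standard way to wire an HBase-writing job is `TableMapReduceUtil.initTableReducerJob`, which merges the HBase configuration (including the ZooKeeper quorum) into the job, sets `TableOutputFormat`, and ships the HBase client jars with it. A sketch of the driver's `run()` rewritten this way, reusing the class and placeholder names from the question:

    // Job-configuration sketch, not a drop-in replacement; assumes the
    // question's WriteHBaseDriver, WriteHBaseMapper and WriteHBaseReducer.
    Job job = Job.getInstance(HBaseConfiguration.create(), "WriteHBase");
    job.setJarByClass(WriteHBaseDriver.class);
    job.setMapperClass(WriteHBaseMapper.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path("/user/myuser/data/input/"));

    // Sets TableOutputFormat, the output table, and the HBase config/jars.
    TableMapReduceUtil.initTableReducerJob(
            "NAMESPACE_NAME:TABLE_NAME",   // output table (placeholder name from the question)
            WriteHBaseReducer.class,
            job);
    job.setNumReduceTasks(1);

    return job.waitForCompletion(true) ? 0 : 1;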

I am not sure why this configuration is not picked up from hbase-site directly. Note that the process ran fine on the QuickStart VM, but it did not run on the recently installed cluster. Now, with these properties set, everything works. Thanks a lot.
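
For hbase-site.xml to be picked up automatically, it has to be on the classpath both where the job is submitted and inside the MapReduce tasks; otherwise the client falls back to localhost:2181, which matches the ConnectionLoss errors above. A minimal fragment of what the file would contain (hostnames and port are placeholders) might look like:

    <!-- hbase-site.xml: client-side ZooKeeper settings (placeholder values) -->
    <configuration>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
      </property>
    </configuration>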

【Discussion】:
