【Question Title】: Can't start H2O on Hadoop Cluster - ClassNotFoundException
【Posted】: 2018-03-23 23:24:39
【Question Description】:

I am trying to start H2O on a Hadoop cluster. Unfortunately it doesn't work and gives me an error that the class water.hadoop.h2omapper cannot be found.

The Hadoop environment is an HDP 2.6 installation with 5 nodes: 1 runs the YARN ResourceManager, and 3 are DataNodes acting as YARN clients. Each data node has 32 GB of RAM and 4 CPU cores; no other applications are running on them. In Ambari I configured a maximum of 16 GB and 3 cores per YARN application on each node.

I launch the H2O cluster from the terminal (tried on all nodes, same error everywhere) with the following output:

[root@host3 h2o-3.14.0.6-hdp2.6]# sudo -u hdfs hadoop jar h2odriver.jar -nodes 3 -mapperXmx 6g -output h2o-test
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 192.168.20.35]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 192.168.20.35:46619
(You can override these with -driverif and -driverport/-driverportrange.)
Memory Settings:
mapreduce.map.java.opts:     -Xms6g -Xmx6g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Dlog4j.defaultInitOverride=true
Extra memory percent:        10
mapreduce.map.memory.mb:     6758
17/10/13 07:49:14 INFO client.RMProxy: Connecting to ResourceManager at host2/192.168.20.34:8050
17/10/13 07:49:14 INFO client.AHSProxy: Connecting to Application History server at host2/192.168.20.34:10200
17/10/13 07:49:15 WARN mapreduce.JobResourceUploader: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).

17/10/13 07:49:15 INFO mapreduce.JobSubmitter: number of splits:3
17/10/13 07:49:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1507793796947_0002
17/10/13 07:49:15 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
17/10/13 07:49:15 INFO impl.YarnClientImpl: Submitted application application_1507793796947_0002
17/10/13 07:49:15 INFO mapreduce.Job: The url to track the job: http://host2:8088/proxy/application_1507793796947_0002/
Job name 'H2O_86929' submitted
JobTracker job ID is 'job_1507793796947_0002'
For YARN users, logs command is 'yarn logs -applicationId application_1507793796947_0002'
Waiting for H2O cluster to come up...
17/10/13 07:49:29 INFO client.RMProxy: Connecting to ResourceManager at host2/192.168.20.34:8050
17/10/13 07:49:29 INFO client.AHSProxy: Connecting to Application History server at host2/192.168.20.34:10200

----- YARN cluster metrics -----
Number of YARN worker nodes: 3

----- Nodes -----
Node: http://host5:8042 Rack: /default-rack, RUNNING, 1 containers used, 4,0 / 16,0 GB used, 1 / 3 vcores used
Node: http://host4:8042 Rack: /default-rack, RUNNING, 0 containers used, 0,0 / 16,0 GB used, 0 / 3 vcores used
Node: http://host3:8042 Rack: /default-rack, RUNNING, 0 containers used, 0,0 / 16,0 GB used, 0 / 3 vcores used

----- Queues -----
Queue name:            default
Queue state:       RUNNING
Current capacity:  0,11
Capacity:          1,00
Maximum capacity:  1,00
Application count: 1
----- Applications in this queue -----
Application ID:                  application_1507793796947_0002 (H2O_86929)
    Started:                     hdfs (Fri Oct 13 07:49:15 CEST 2017)
    Application state:           FINISHED
    Tracking URL:                http://host2:8088/proxy/application_1507793796947_0002/
    Queue name:                  default
    Used/Reserved containers:    1 / 0
    Needed/Used/Reserved memory: 4,0 GB / 4,0 GB / 0,0 GB
    Needed/Used/Reserved vcores: 1 / 1 / 0

Queue 'default' approximate utilization: 4,0 / 48,0 GB used, 1 / 9 vcores used

----------------------------------------------------------------------

ERROR: Unable to start any H2O nodes; please contact your YARN administrator.

   A common cause for this is the requested container size (6,6 GB)
   exceeds the following YARN settings:

       yarn.nodemanager.resource.memory-mb
       yarn.scheduler.maximum-allocation-mb
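For reference, the two settings the driver mentions can be inspected like this (assuming the default HDP config directory /etc/hadoop/conf; adjust the path if your layout differs):

```shell
# Inspect the YARN memory limits the H2O driver warns about.
# /etc/hadoop/conf is the typical config directory on HDP.
grep -A1 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml
grep -A1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml
```

If either value is below the requested container size (6,6 GB here), the container request cannot be satisfied.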

The corresponding error entry in the YARN application syslog:

2017-10-13 07:49:24,505 FATAL [IPC Server handler 1 on 40503] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1507793796947_0002_m_000002_0 - exited : java.lang.RuntimeException: java.lang.ClassNotFoundException: Class water.hadoop.h2omapper not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2241)
at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:745)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.lang.ClassNotFoundException: Class water.hadoop.h2omapper not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2147)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2239)
... 8 more

2017-10-13 07:49:24,506 INFO [IPC Server handler 1 on 40503] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1507793796947_0002_m_000002_0: Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class water.hadoop.h2omapper not found
(same stack trace as above)

2017-10-13 07:49:24,507 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1507793796947_0002_m_000002_0: Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class water.hadoop.h2omapper not found
(same stack trace as above)

The full log is available here.

Any help would be greatly appreciated.

Best regards, Markus

【Question Discussion】:

  • Sorry, there isn't enough information here to say anything meaningful. Try describing your environment in detail, including all command-line commands, the complete output, the versions of everything, and any YARN application logs.
  • @TomKraljevic I updated my post with more information. I hope it helps.

Tags: hadoop hadoop-yarn h2o


【Solution 1】:

Are you able to run other YARN jobs? (Like the pi example.)
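For reference, a minimal way to submit the bundled pi example (the jar path below is the usual HDP location and may differ on your installation):

```shell
# Submit the stock MapReduce "pi" example to verify YARN can run jobs at all.
# Jar location is typical for HDP; adjust it for your distribution.
sudo -u hdfs hadoop jar \
  /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
  pi 10 100
```

If this fails as well, the problem is with the cluster setup rather than with H2O.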

The application ID shows that your Hadoop cluster was started on Thu, Oct 12 2017 at 07:36:36 UTC (i.e. yesterday), and that these are the first (and second) jobs this cluster has ever tried to run.

Also, the nodes in the cluster are really very small.

To me, all of this looks like you are trying to be your own Hadoop administrator but haven't gotten it working yet. :)

Keep trying; H2O will run once your cluster is configured correctly.

【Discussion】:

【Solution 2】:

Based on the partial log you provided, the following line helps a bit:

    2017-10-12 07:45:02,172 FATAL [IPC Server handler 1 on 39365] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1507726330188_0001_m_000002_0 - exited : java.lang.RuntimeException: java.lang.ClassNotFoundException: Class water.hadoop.h2omapper not found

It is clear that task ID #2, attempt #0 (i.e. the first attempt), failed because of the missing class. If task #2 shows the error, then another task (#1) must already have been running, which also means the job had already started and was in the running state. So the problem occurs during the job-execution phase: the DataNode executing this particular task could not find the H2O driver library. Possibly the node, or the file system on that node, was unavailable for some reason. If you study your detailed logs, you should be able to see why this happened.

[Additional details]

No mapper can access h2odriver.jar because of a permission or file-system problem, which is why your job doesn't start. You should make sure that Hadoop permissions and accessibility are fixed properly, so that you can launch any 'hadoop' command without needing root or a superuser alias.
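A couple of checks along those lines might look as follows (the hdfs user and jar name are taken from the question; the /user/hdfs staging path is an assumption based on the default HDFS home-directory layout):

```shell
# 1. Can the submitting user read the driver jar locally?
sudo -u hdfs test -r h2odriver.jar && echo "h2odriver.jar is readable"

# 2. Does the user's HDFS home/staging directory exist and is it writable?
#    The job jar and mapper classes are shipped to the cluster through it.
sudo -u hdfs hdfs dfs -ls /user/hdfs
sudo -u hdfs hdfs dfs -touchz /user/hdfs/.write-test \
  && sudo -u hdfs hdfs dfs -rm -skipTrash /user/hdfs/.write-test
```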

【Discussion】:

  • Hey, I updated my post with more information. Looking at the YARN ResourceManager shows that the job never ran or succeeded, so I think there is another problem. I suspect the DataNode hosts may not be able to read the .jar file, but I don't know how to fix that.