[Question Title]: Problems using Spark 1.6.2 for Hadoop 2.6.0 in a Hadoop 2.7.1 cluster
[Posted]: 2018-02-06 12:24:17
[Question]:

I have access to a Hadoop cluster installed with HDP 2.4, running Hadoop 2.7.1. The cluster already ships with Spark; specifically:

$ cat /usr/hdp/2.4.3.0-227/spark/RELEASE 
Spark 1.6.2.2.4.3.0-227 built for Hadoop 2.7.1.2.4.3.0-227

I am trying to set up a "client" machine able to connect to the cluster remotely and submit Spark jobs. To do that, I need a Spark distribution matching the versions above.

First, I went to the official Spark download page, but 1.6.2 is only available prebuilt for Hadoop 2.6.

Then, I decided to download the Spark sources and build them following this guide. Interestingly, the required build profile for Hadoop "2.6.x and later 2.x" is hadoop-2.6; i.e., if I built Spark myself I would end up with the same distribution offered on the official Spark download page.
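For reference, such a build would look something like the following (a sketch based on the Spark 1.6 "Building Spark" guide; the exact `-Dhadoop.version` value is an assumption matching the cluster's Hadoop version):

```shell
# Build Spark 1.6.2 from source against Hadoop 2.7.1 (sketch; profile and
# flags per the Spark 1.6 building guide, hadoop.version assumed from the
# cluster described above)
./build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.7.1 -DskipTests clean package
```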

So, I went with the official Spark 1.6.2 distribution prebuilt for Hadoop 2.6.0.

And it does not seem to work properly. I submitted a Python script, a very simple one that only creates a Spark context, and something goes wrong (only the relevant parts of the log are shown):

$ ./bin/spark-submit --master yarn --deploy-mode cluster basic.py
...
17/08/28 13:08:29 INFO Client: Requesting a new application from cluster with 8 NodeManagers
17/08/28 13:08:29 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (24576 MB per container)
17/08/28 13:08:29 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
17/08/28 13:08:29 INFO Client: Setting up container launch context for our AM
17/08/28 13:08:29 INFO Client: Setting up the launch environment for our AM container
17/08/28 13:08:29 INFO Client: Preparing resources for our AM container
17/08/28 13:08:36 INFO Client: Uploading resource file:/Users/frb/Applications/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/spark-assembly-1.6.2-hadoop2.6.0.jar
17/08/28 13:14:40 INFO Client: Uploading resource file:basic.py -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/basic.py
17/08/28 13:14:40 INFO Client: Uploading resource file:/Users/frb/Applications/spark-1.6.2-bin-hadoop2.6/python/lib/pyspark.zip -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/pyspark.zip
17/08/28 13:14:41 INFO Client: Uploading resource file:/Users/frb/Applications/spark-1.6.2-bin-hadoop2.6/python/lib/py4j-0.9-src.zip -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/py4j-0.9-src.zip
17/08/28 13:14:42 INFO Client: Uploading resource file:/private/var/folders/cc/p9gx2wnn3dz8g6yf_r4308fm0000gn/T/spark-0d86f1f4-d310-423a-9d2f-90e2ff46f84e/__spark_conf__3704082754178078870.zip -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/__spark_conf__3704082754178078870.zip
17/08/28 13:14:42 INFO SecurityManager: Changing view acls to: frb
17/08/28 13:14:42 INFO SecurityManager: Changing modify acls to: frb
17/08/28 13:14:42 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(frb); users with modify permissions: Set(frb)
17/08/28 13:14:42 INFO Client: Submitting application 66 to ResourceManager
17/08/28 13:14:42 INFO YarnClientImpl: Submitted application application_1495097788339_0066
17/08/28 13:14:48 INFO Client: Application report for application_1495097788339_0066 (state: ACCEPTED)
17/08/28 13:14:48 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1503918882943
     final status: UNDEFINED
     tracking URL: <host>:8088/proxy/application_1495097788339_0066/
     user: frb
17/08/28 13:14:49 INFO Client: Application report for application_1495097788339_0066 (state: ACCEPTED)
...
17/08/28 13:14:52 INFO Client: Application report for application_1495097788339_0066 (state: RUNNING)
17/08/28 13:14:52 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 10.95.120.6
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1503918882943
     final status: UNDEFINED
     tracking URL: <host>:8088/proxy/application_1495097788339_0066/
     user: frb
17/08/28 13:14:53 INFO Client: Application report for application_1495097788339_0066 (state: RUNNING)
...
17/08/28 13:14:59 INFO Client: Application report for application_1495097788339_0066 (state: ACCEPTED)
17/08/28 13:14:59 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1503918882943
     final status: UNDEFINED
     tracking URL: <host>:8088/proxy/application_1495097788339_0066/
     user: frb
17/08/28 13:15:00 INFO Client: Application report for application_1495097788339_0066 (state: ACCEPTED)
17/08/28 13:15:01 INFO Client: Application report for application_1495097788339_0066 (state: RUNNING)
17/08/28 13:15:01 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 10.95.58.21
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1503918882943
     final status: UNDEFINED
     tracking URL: <host>:8088/proxy/application_1495097788339_0066/
     user: frb
17/08/28 13:15:02 INFO Client: Application report for application_1495097788339_0066 (state: RUNNING)
...
17/08/28 13:15:09 INFO Client: Application report for application_1495097788339_0066 (state: FINISHED)
17/08/28 13:15:09 INFO Client: 
     client token: N/A
     diagnostics: Max number of executor failures (4) reached
     ApplicationMaster host: 10.95.58.21
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1503918882943
     final status: FAILED
     tracking URL: <host>:8088/proxy/application_1495097788339_0066/
     user: frb
Exception in thread "main" org.apache.spark.SparkException: Application application_1495097788339_0066 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/08/28 13:15:09 INFO ShutdownHookManager: Shutdown hook called
17/08/28 13:15:09 INFO ShutdownHookManager: Deleting directory /private/var/folders/cc/p9gx2wnn3dz8g6yf_r4308fm0000gn/T/spark-0d86f1f4-d310-423a-9d2f-90e2ff46f84e

If I check the logs of this job, I see:

ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server
Traceback (most recent call last):
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 690, in start
    self.socket.connect((self.address, self.port))
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused
Traceback (most recent call last):
  File "basic.py", line 36, in <module>
    sc = SparkContext(conf=conf)
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/pyspark.zip/pyspark/context.py", line 115, in __init__
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/pyspark.zip/pyspark/context.py", line 172, in _do_init
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/pyspark.zip/pyspark/context.py", line 235, in _initialize_context
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 1062, in __call__
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 631, in send_command
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 624, in send_command
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 579, in _get_connection
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 585, in _create_connection
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 697, in start
py4j.protocol.Py4JNetworkError: An error occurred while trying to connect to the Java server
ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server
Traceback (most recent call last):
  File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 690, in start
    self.socket.connect((self.address, self.port))
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused

That is, the Spark context is never created: the connection fails between the Python driver running the Spark context and the JVM running the Py4J Java gateway.
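On the client side, Py4J's gateway startup is nothing more than a plain TCP connect to the JVM's gateway port; when the JVM side never comes up, the connect is refused exactly as in the traceback above. A minimal stdlib sketch of that failure mode (using a locally chosen port with no listener, not the actual gateway):

```python
import errno
import socket

# Reserve a local port, then close the listener so nothing is accepting on it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# Connecting now fails the same way Py4J's java_gateway start() does when the
# JVM gateway never started: ECONNREFUSED ([Errno 111] on Linux).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("127.0.0.1", port))
    result = None  # unexpectedly connected
except socket.error as e:
    result = e.errno
finally:
    s.close()
```

This is why the Python traceback, despite surfacing inside pyspark, is really a symptom of the JVM container failing to start on the cluster side.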

This must be related to the Spark distribution installed on my client machine, because:

  • The Spark distribution from my client machine is uploaded to the cluster, and therefore is the one being used; recall this line from the submission log:

    17/08/28 13:08:36 INFO Client: Uploading resource file:/Users/frb/Applications/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/spark-assembly-1.6.2-hadoop2.6.0.jar

  • The same command works when submitting from inside the cluster, i.e. when using the Spark installed by HDP: version "Spark 1.6.2.2.4.3.0-227 built for Hadoop 2.7.1.2.4.3.0-227".

Any idea how to fix this? Thanks!

[Comments]:

    Tags: hadoop apache-spark


    [Answer 1]:

    I finally solved the problem:

    • I added the --conf spark.yarn.jars option to the spark-submit command, pointing to the location of the Spark assembly jar in the remote Spark cluster. This avoids uploading the assembly jar from my client install (a slow process which, indeed, does not exactly match the remote version).
    • I added the hdp.version property to the client-side yarn-site.xml, set to the HDP version of the remote Hadoop/Spark cluster. This avoids substitution errors in certain paths, which ultimately showed up as the connection errors described in the question.
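    Put together, the fix amounts to something like the following. This is only a sketch: the assembly jar path is an assumption based on the HDP 2.4.3.0-227 layout shown in the question, and note that stock Spark 1.6 documents the property as spark.yarn.jar (singular); spark.yarn.jars is the Spark 2.x name.

```shell
# Point the job at the cluster's own assembly jar instead of uploading ours
# (jar path assumed from the /usr/hdp/2.4.3.0-227 layout in the question)
./bin/spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.yarn.jar=local:/usr/hdp/2.4.3.0-227/spark/lib/spark-assembly-1.6.2.2.4.3.0-227-hadoop2.7.1.2.4.3.0-227.jar \
  basic.py
```

```xml
<!-- client-side yarn-site.xml: supply the cluster's HDP version so that
     ${hdp.version} placeholders in container paths resolve correctly -->
<property>
  <name>hdp.version</name>
  <value>2.4.3.0-227</value>
</property>
```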

    [Comments]:
