【Question Title】: Property spark.yarn.jars - how to deal with it?
【Posted】: 2018-04-11 21:03:08
【Question】:

My knowledge of Spark is limited, as you will sense after reading this question. I have just one node, with spark, hadoop and yarn installed on it.

I am able to code and run a word-count problem in cluster mode with the following command:

 spark-submit --class com.sanjeevd.sparksimple.wordcount.JobRunner 
              --master yarn 
              --deploy-mode cluster
              --driver-memory=2g
              --executor-memory 2g
              --executor-cores 1
              --num-executors 1
              SparkSimple-0.0.1-SNAPSHOT.jar
              hdfs://sanjeevd.br:9000/user/spark-test/word-count/input
              hdfs://sanjeevd.br:9000/user/spark-test/word-count/output

This works just fine.

Now I understand that 'spark on yarn' requires the spark jar files to be available on the cluster, and if I do nothing, then every time I run my program it copies hundreds of jar files from $SPARK_HOME to each node (in my case just one node). I can see that the code's execution pauses for a while until it finishes copying. See below:

16/12/12 17:24:03 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
16/12/12 17:24:06 INFO yarn.Client: Uploading resource file:/tmp/spark-a6cc0d6e-45f9-4712-8bac-fb363d6992f2/__spark_libs__11112433502351931.zip -> hdfs://sanjeevd.br:9000/user/sanjeevd/.sparkStaging/application_1481592214176_0001/__spark_libs__11112433502351931.zip
16/12/12 17:24:08 INFO yarn.Client: Uploading resource file:/home/sanjeevd/personal/Spark-Simple/target/SparkSimple-0.0.1-SNAPSHOT.jar -> hdfs://sanjeevd.br:9000/user/sanjeevd/.sparkStaging/application_1481592214176_0001/SparkSimple-0.0.1-SNAPSHOT.jar
16/12/12 17:24:08 INFO yarn.Client: Uploading resource file:/tmp/spark-a6cc0d6e-45f9-4712-8bac-fb363d6992f2/__spark_conf__6716604236006329155.zip -> hdfs://sanjeevd.br:9000/user/sanjeevd/.sparkStaging/application_1481592214176_0001/__spark_conf__.zip

Spark's documentation suggests setting the spark.yarn.jars property to avoid this copying. So I set the property below in the spark-defaults.conf file:

spark.yarn.jars hdfs://sanjeevd.br:9000//user/spark/share/lib

http://spark.apache.org/docs/latest/running-on-yarn.html#preparations To make Spark runtime jars accessible from the YARN side, you can specify spark.yarn.archive or spark.yarn.jars. For details please refer to Spark Properties. If neither spark.yarn.archive nor spark.yarn.jars is specified, Spark will create a zip file with all jars under $SPARK_HOME/jars and upload it to the distributed cache.

Btw, I have all the jar files copied from the LOCAL /opt/spark/jars to HDFS /user/spark/share/lib. There are 206 of them.

This makes my jar fail. Below is the error:

spark-submit --class com.sanjeevd.sparksimple.wordcount.JobRunner --master yarn --deploy-mode cluster --driver-memory=2g --executor-memory 2g --executor-cores 1 --num-executors 1 SparkSimple-0.0.1-SNAPSHOT.jar hdfs://sanjeevd.br:9000/user/spark-test/word-count/input hdfs://sanjeevd.br:9000/user/spark-test/word-count/output
16/12/12 17:43:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/12 17:43:07 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/12/12 17:43:07 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
16/12/12 17:43:07 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5120 MB per container)
16/12/12 17:43:07 INFO yarn.Client: Will allocate AM container, with 2432 MB memory including 384 MB overhead
16/12/12 17:43:07 INFO yarn.Client: Setting up container launch context for our AM
16/12/12 17:43:07 INFO yarn.Client: Setting up the launch environment for our AM container
16/12/12 17:43:07 INFO yarn.Client: Preparing resources for our AM container
16/12/12 17:43:07 INFO yarn.Client: Uploading resource file:/home/sanjeevd/personal/Spark-Simple/target/SparkSimple-0.0.1-SNAPSHOT.jar -> hdfs://sanjeevd.br:9000/user/sanjeevd/.sparkStaging/application_1481592214176_0005/SparkSimple-0.0.1-SNAPSHOT.jar
16/12/12 17:43:07 INFO yarn.Client: Uploading resource file:/tmp/spark-fae6a5ad-65d9-4b64-9ba6-65da1310ae9f/__spark_conf__7881471844385719101.zip -> hdfs://sanjeevd.br:9000/user/sanjeevd/.sparkStaging/application_1481592214176_0005/__spark_conf__.zip
16/12/12 17:43:08 INFO spark.SecurityManager: Changing view acls to: sanjeevd
16/12/12 17:43:08 INFO spark.SecurityManager: Changing modify acls to: sanjeevd
16/12/12 17:43:08 INFO spark.SecurityManager: Changing view acls groups to: 
16/12/12 17:43:08 INFO spark.SecurityManager: Changing modify acls groups to: 
16/12/12 17:43:08 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(sanjeevd); groups with view permissions: Set(); users  with modify permissions: Set(sanjeevd); groups with modify permissions: Set()
16/12/12 17:43:08 INFO yarn.Client: Submitting application application_1481592214176_0005 to ResourceManager
16/12/12 17:43:08 INFO impl.YarnClientImpl: Submitted application application_1481592214176_0005
16/12/12 17:43:09 INFO yarn.Client: Application report for application_1481592214176_0005 (state: ACCEPTED)
16/12/12 17:43:09 INFO yarn.Client: 
 client token: N/A
 diagnostics: N/A
 ApplicationMaster host: N/A
 ApplicationMaster RPC port: -1
 queue: default
 start time: 1481593388442
 final status: UNDEFINED
 tracking URL: http://sanjeevd.br:8088/proxy/application_1481592214176_0005/
 user: sanjeevd
16/12/12 17:43:10 INFO yarn.Client: Application report for application_1481592214176_0005 (state: FAILED)
16/12/12 17:43:10 INFO yarn.Client: 
 client token: N/A
 diagnostics: Application application_1481592214176_0005 failed 1 times due to AM Container for appattempt_1481592214176_0005_000001 exited with  exitCode: 1
For more detailed output, check application tracking page:http://sanjeevd.br:8088/cluster/app/application_1481592214176_0005Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1481592214176_0005_01_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1481593388442
     final status: FAILED
     tracking URL: http://sanjeevd.br:8088/cluster/app/application_1481592214176_0005
     user: sanjeevd
16/12/12 17:43:10 INFO yarn.Client: Deleting staging directory hdfs://sanjeevd.br:9000/user/sanjeevd/.sparkStaging/application_1481592214176_0005
Exception in thread "main" org.apache.spark.SparkException: Application application_1481592214176_0005 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1132)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1175)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/12/12 17:43:10 INFO util.ShutdownHookManager: Shutdown hook called
16/12/12 17:43:10 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-fae6a5ad-65d9-4b64-9ba6-65da1310ae9f

Do you know what I'm doing wrong? The task log says the following:

Error: Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster

I understand the error about the ApplicationMaster class not being found, but my question is why it is not found. Where is this class supposed to be? I don't have an assembly jar since I'm using spark 2.0.1, where there is no assembly bundled.

What does this have to do with the spark.yarn.jars property? This property is meant to help spark run on yarn, and that should be it. What additional things do I need to do when using spark.yarn.jars?

Thanks for reading this question and for your help in advance.

【Comments】:

  • Hi Sanjeev, in my case only the jars in $SPARK_HOME/jars get copied. How do you get your own jar, i.e. SparkSimple-0.0.1-SNAPSHOT.jar, copied to hdfs as well?

Tags: apache-spark


【Solution 1】:

You can also use the spark.yarn.archive option and set it to the location of an archive (that you create) containing all the JARs from the $SPARK_HOME/jars/ folder, at the root level of the archive. For example:

  1. Create the archive: jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ .
  2. Upload to HDFS: hdfs dfs -put spark-libs.jar /some/path/.
    2a. For large clusters, increase the replication count of the Spark archive so that you reduce the number of times a NodeManager has to do a remote copy: hdfs dfs -setrep -w 10 hdfs:///some/path/spark-libs.jar (change the number of replicas proportionally to the total number of NodeManagers).
  3. Set spark.yarn.archive to hdfs:///some/path/spark-libs.jar
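Put together, the steps above can be sketched as the shell session below. This is a deployment sketch, not something to paste blindly: /some/path, the replication factor of 10, and appending to spark-defaults.conf are illustrative choices that need a running HDFS and your own paths.

```shell
# 1. Archive every jar under $SPARK_HOME/jars at the ROOT level of the archive
#    (cv0f = create, verbose, no compression, to the named file)
jar cv0f spark-libs.jar -C "$SPARK_HOME/jars/" .

# 2. Upload the archive to a world-readable HDFS location
hdfs dfs -mkdir -p /some/path
hdfs dfs -put spark-libs.jar /some/path/

# 2a. (large clusters only) raise the replication factor so fewer
#     NodeManagers have to fetch the archive from a remote DataNode
hdfs dfs -setrep -w 10 /some/path/spark-libs.jar

# 3. Point Spark at the archive in spark-defaults.conf
echo "spark.yarn.archive hdfs:///some/path/spark-libs.jar" \
  >> "$SPARK_HOME/conf/spark-libs-defaults.conf"
```

Note that step 1 deliberately uses no compression (`0`); YARN localizes the archive as-is, so a compressed archive buys little and costs CPU on every localization.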

【Discussion】:

    【Solution 2】:

    I was finally able to make sense of this property. I found by hit-n-trial that the correct syntax of this property is

    spark.yarn.jars=hdfs://xx:9000/user/spark/share/lib/*.jar

    I didn't have *.jar at the end, and my path just ended with /lib. I tried putting the actual assembly jar like this - spark.yarn.jars=hdfs://sanjeevd.brickred:9000/user/spark/share/lib/spark-yarn_2.11-2.0.1.jar - but no luck. All it said was unable to load ApplicationMaster.
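    For reference, the failing and working lines in spark-defaults.conf differ only in the trailing glob (hostname and path are the ones from the question):

```
# failed in the question's setup: points at the directory itself
spark.yarn.jars hdfs://sanjeevd.br:9000/user/spark/share/lib

# works: the glob expands to every jar in the directory
spark.yarn.jars hdfs://sanjeevd.br:9000/user/spark/share/lib/*.jar
```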

    I posted a response to a similar question someone asked at https://stackoverflow.com/a/41179608/2332121

    【Discussion】:

    • Is this part of 'spark-defaults.conf', with the jar files already available in hdfs?
    【Solution 3】:

    If you look at the spark.yarn.jars documentation, it says the following:

    List of libraries containing Spark code to distribute to YARN containers. By default, Spark on YARN will use the Spark jars installed locally, but the Spark jars can also be in a world-readable location on HDFS. This allows YARN to cache it on nodes so that it doesn't need to be distributed each time an application runs. To point to jars on HDFS, for example, set this configuration to hdfs:///some/path. Globs are allowed.

    This means you are effectively overriding SPARK_HOME/jars and telling yarn to pick up all the jars required for your application run from your path. If you set the spark.yarn.jars property, all the dependency jars that spark needs to run should be present in this path. If you look inside the spark-assembly.jar present in SPARK_HOME/lib, the class org.apache.spark.deploy.yarn.ApplicationMaster is there, so make sure that all the spark dependencies are present in the HDFS path that you specify as spark.yarn.jars.
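    One way to sanity-check this is to confirm which local jar actually ships the ApplicationMaster class, and then that the same jar made it to HDFS. This is a sketch: the HDFS URL is the one from the question, and it assumes unzip and a running HDFS are available.

```shell
# In Spark 2.x the class lives in spark-yarn_<scala>-<version>.jar,
# since no assembly jar is shipped anymore
for j in "$SPARK_HOME"/jars/spark-yarn_*.jar; do
  if unzip -l "$j" | grep -q 'org/apache/spark/deploy/yarn/ApplicationMaster.class'; then
    echo "ApplicationMaster found in $j"
  fi
done

# Verify the same jar was uploaded to the location used by spark.yarn.jars
hdfs dfs -ls hdfs://sanjeevd.br:9000/user/spark/share/lib/ | grep spark-yarn
```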

    【Discussion】:

    • Thanks! I modified my question at the end. Since I'm using Spark 2.0.1, there's no assembly jar bundled, so I can't find the ApplicationMaster java class. Why doesn't spark complain when I unset the spark.yarn.jars property? When I uploaded all of /spark/jars to HDFS and set the spark.yarn.jars property to point to this HDFS location, Spark went crazy asking for ApplicationMaster. Btw, I don't have a /spark/lib folder either; I guess they changed that too in the 2.x version. Any help please.
    • Starting with Spark 2.X they have stopped creating the assembly jar. If you look in your /jars folder you will find spark-yarn_-.jar, which should contain the ApplicationMaster class; verify that you have this jar in your /jars folder. If you have it and have copied it to the HDFS location, then I don't know why you are getting this error. :)
    • Thanks for your help. I upvoted your comment; it looks like I had a syntax problem in specifying this property.