【Question Title】: Using spark-submit externally from the EMR cluster master
【Posted】: 2016-10-24 13:53:47
【Question】:

We have a Hadoop cluster running Spark 1.6.1 in AWS Elastic MapReduce (EMR). Logging into the cluster master and submitting Spark jobs there works fine, but we would like to be able to submit them from another, standalone EC2 instance.

The other, "external" EC2 instance has security groups set up to allow all TCP traffic to and from the EMR master and slave instances. It has a Spark binary installation downloaded directly from the Apache website.

After copying the /etc/hadoop/conf folder from the master to this instance and setting $HADOOP_CONF_DIR accordingly, I run into the following permission problem when trying to submit the SparkPi example:
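For reference, the configuration step just described might look like the following sketch; the master hostname is taken from the log output, and the use of scp over ssh is an assumption about how the files were copied:

```shell
# Copy the cluster's Hadoop client configuration from the EMR master
# (hostname taken from the question's log; working ssh access is assumed)
scp -r hadoop@ip-172-31-60-166.ec2.internal:/etc/hadoop/conf /etc/hadoop/conf

# Point Spark's YARN client at that configuration
export HADOOP_CONF_DIR=/etc/hadoop/conf
```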

$ /usr/local/spark/bin/spark-submit --master yarn --deploy-mode client --class org.apache.spark.examples.SparkPi /usr/local/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar 
16/06/22 13:58:52 INFO spark.SparkContext: Running Spark version 1.6.1
16/06/22 13:58:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/22 13:58:52 INFO spark.SecurityManager: Changing view acls to: jungd
16/06/22 13:58:52 INFO spark.SecurityManager: Changing modify acls to: jungd
16/06/22 13:58:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions:     Set(jungd); users with modify permissions: Set(jungd)
16/06/22 13:58:52 INFO util.Utils: Successfully started service 'sparkDriver' on port 34757.
16/06/22 13:58:52 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/06/22 13:58:52 INFO Remoting: Starting remoting
16/06/22 13:58:53 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@172.31.61.189:39241]
16/06/22 13:58:53 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 39241.
16/06/22 13:58:53 INFO spark.SparkEnv: Registering MapOutputTracker
16/06/22 13:58:53 INFO spark.SparkEnv: Registering BlockManagerMaster
16/06/22 13:58:53 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-300d738e-d7e4-4ae9-9cfe-4e257a05d456
16/06/22 13:58:53 INFO storage.MemoryStore: MemoryStore started with capacity 511.1 MB
16/06/22 13:58:53 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/06/22 13:58:53 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/06/22 13:58:53 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/06/22 13:58:53 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/06/22 13:58:53 INFO ui.SparkUI: Started SparkUI at http://172.31.61.189:4040
16/06/22 13:58:53 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-5e332986-ae2a-4bde-9ae4-edb4fac5e1d7/httpd-e475fd1b-c5c8-4f31-9699-be89fff4a69c
16/06/22 13:58:53 INFO spark.HttpServer: Starting HTTP Server
16/06/22 13:58:53 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/06/22 13:58:53 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:43525
16/06/22 13:58:53 INFO util.Utils: Successfully started service 'HTTP file server' on port 43525.
16/06/22 13:58:53 INFO spark.SparkContext: Added JAR file:/usr/local/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar at http://172.31.61.189:43525/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1466603933454
16/06/22 13:58:53 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-60-166.ec2.internal/172.31.60.166:8032
16/06/22 13:58:53 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
16/06/22 13:58:53 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (11520 MB per container)
16/06/22 13:58:53 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/06/22 13:58:53 INFO yarn.Client: Setting up container launch context for our AM
16/06/22 13:58:53 INFO yarn.Client: Setting up the launch environment for our AM container
16/06/22 13:58:53 INFO yarn.Client: Preparing resources for our AM container
16/06/22 13:58:54 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.hadoop.security.AccessControlException: Permission denied: user=jungd, access=WRITE, inode="/user/jungd/.sparkStaging/application_1466437015320_0014":hdfs:hadoop:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

Submitting with the cluster deploy mode makes no difference. The user in question is a local user on the "external" EC2 instance (we have multiple developer accounts) that does not exist on the cluster's master or slaves (and even locally, user home directories are under /home, not /user).

I'm at a loss as to what is going on. Any help is greatly appreciated.

【Discussion】:

  • Update: if I create a local "hadoop" user and run spark-submit or pyspark as that user, it does appear to work as expected, although that is not what we want.
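As an alternative to creating a matching local account, Hadoop clusters that use simple (non-Kerberos) authentication honor the client-side HADOOP_USER_NAME environment variable, which overrides the local OS user name sent to HDFS and YARN. A minimal sketch, assuming the cluster's default "hadoop" user:

```shell
# With simple (non-Kerberos) Hadoop authentication, HADOOP_USER_NAME
# overrides the OS user name the client reports to HDFS/YARN, so the
# staging directory becomes /user/hadoop instead of /user/jungd.
export HADOOP_USER_NAME=hadoop

# ...then run spark-submit exactly as before.
```

This avoids touching local accounts, but note it offers no security: any client can claim any user name under simple authentication.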

标签: hadoop apache-spark amazon-ec2 emr


【Solution 1】:

Running spark-submit from a machine other than the master requires a few things:

  • A user matching the submitting user needs to be created in HDFS
    • e.g. using the Hue console, or directly by creating a /user/NAME folder and setting its permissions with the hadoop fs command-line tools on the master
  • All necessary ports between the external machine and the cluster master/slaves must be open in both directions (or, alternatively, all TCP traffic).
    • In an AWS EC2 EMR environment, the security groups of the external machine, the master, and the slaves can explicitly allow traffic from each other's groups.

It may also be necessary to create the user as a Linux account on the master.
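The first step above, creating the HDFS home directory on the master, might look like the following sketch; the user name jungd is taken from the question's log, and running the commands via sudo as the hdfs superuser is an assumption about the EMR setup:

```shell
# Run on the EMR master; the hdfs user owns / in HDFS, so it can
# create and chown directories under /user (user name is hypothetical)
USER_NAME=jungd

# Create the user's HDFS home directory and hand ownership to them
sudo -u hdfs hadoop fs -mkdir -p "/user/$USER_NAME"
sudo -u hdfs hadoop fs -chown "$USER_NAME:$USER_NAME" "/user/$USER_NAME"
sudo -u hdfs hadoop fs -chmod 700 "/user/$USER_NAME"
```

Once this directory exists and is owned by the submitting user, the .sparkStaging write in the error above should succeed.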

【Discussion】:
