【Title】: Need help running spark-submit in Apache Airflow
【Posted】: 2019-08-28 21:27:02
【Question】:

I am a relatively new user of Python and Airflow, and I am having a hard time getting spark-submit to run in an Airflow task. My goal is to get the following DAG task to run successfully:

from datetime import datetime, timedelta
from airflow import DAG
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
from airflow.operators.bash_operator import BashOperator

default_args = {
    'owner': 'matthew',
    'start_date': datetime(2019, 7, 8)
}

dag = DAG('CustomCreate_test2',
          default_args=default_args,
          schedule_interval=timedelta(days=1))

t3 = BashOperator(
    task_id='run_test',
    bash_command='spark-submit --class CLASSPATH.CustomCreate ~/IdeaProjects/custom-create-job/build/libs/custom-create.jar',
    dag=dag
)

I know the problem lies with Airflow and not with bash, because when I run the command spark-submit --class CLASSPATH.CustomCreate ~/IdeaProjects/custom-create-job/build/libs/custom-create.jar in a terminal, it runs successfully.

I get the following error from the Airflow logs:

...
[2019-08-28 15:55:34,750] {bash_operator.py:132} INFO - Command exited with return code 1
[2019-08-28 15:55:34,764] {taskinstance.py:1047} ERROR - Bash command failed
Traceback (most recent call last):
  File "/Users/matcordo2/.virtualenv/airflow/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 922, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/Users/matcordo2/.virtualenv/airflow/lib/python3.7/site-packages/airflow/operators/bash_operator.py", line 136, in execute
    raise AirflowException("Bash command failed")
airflow.exceptions.AirflowException: Bash command failed
...

I have also tried using SparkSubmitOperator(...), but had no success with it either; I only get error logs like the following:

...
[2019-08-28 15:54:49,749] {logging_mixin.py:95} INFO - [2019-08-28 15:54:49,749] {spark_submit_hook.py:427} INFO - at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
[2019-08-28 15:54:49,803] {taskinstance.py:1047} ERROR - Cannot execute: ['spark-submit', '--master', 'yarn', '--num-executors', '2', '--total-executor-cores', '1', '--executor-cores', '1', '--executor-memory', '2g', '--driver-memory', '1g', '--name', 'CustomCreate', '--class', 'CLASSPATH.CustomCreate', '--verbose', '--queue', 'root.default', '--deploy-mode', 'cluster', '~/IdeaProjects/custom-create-job/build/libs/custom-create.jar']. Error code is: 1.
...

Is there something I have to do with SparkSubmitOperator(...) before I can run the spark-submit ... command in a BashOperator(...) task?

Is there a way to run my spark-submit command directly from a SparkSubmitOperator(...) task?

Do I need to do anything with spark_default on Airflow's Admin->Connections page?

Is there anything that has to be set on Airflow's Admin->Users page? Does anything have to be configured to allow Airflow to run Spark, or to run a jar file built by a particular user? If so, what and how?

【Comments】:

  • While I can't predict your exact setup issue (it sounds like a problem with environment variables / binary discoverability in your shell, e.g. zsh), see this if you need a workaround.
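If that is the cause, one way to rule it out is to stop relying on the worker's shell environment altogether. Below is a minimal sketch, not from the thread: the spark-submit path, jar path, and environment values are placeholders you would replace with whatever your working terminal session uses.

from airflow.operators.bash_operator import BashOperator

t3 = BashOperator(
    task_id='run_test',
    # absolute paths instead of relying on PATH / '~' expansion in the worker's shell
    bash_command='/usr/local/bin/spark-submit --class CLASSPATH.CustomCreate '
                 '/Users/<you>/IdeaProjects/custom-create-job/build/libs/custom-create.jar',
    # hypothetical values; note that when env is set, it replaces the task's inherited environment entirely
    env={'JAVA_HOME': '/path/to/java_home',
         'PATH': '/usr/local/bin:/usr/bin:/bin'},
    dag=dag  # the dag object defined in the question's DAG file
)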

Tags: python bash apache-spark airflow spark-submit


【Solution 1】:

I found a workaround that solved this problem.

Create a new SSH connection (or edit the default one) on the Airflow Admin->Connections page, as shown here: Airflow SSH Connection Example

If you can't see the picture, here is the text version:
Conn Id: ssh_connection
Conn Type: SSH
Host: host IP address
Username: host username
Password: host password
Port:
Extra: {"key_file": "/PATH TO HOME DIR/airflow/.ssh/id_rsa", "allow_host_key_change": "true", "no_host_key_check": "true"}
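If you prefer to create this connection from code instead of clicking through the UI, something along these lines should work on Airflow 1.10.x. This is only a sketch: it assumes the Airflow metadata database is reachable from wherever you run it, and the host/credential values and port are placeholders.

from airflow import settings
from airflow.models import Connection

# same fields as the Admin->Connections form described above
conn = Connection(
    conn_id='ssh_connection',
    conn_type='SSH',
    host='<host IP address>',
    login='<host username>',
    password='<host password>',
    port=22,  # assumed default SSH port
    extra='{"key_file": "/PATH TO HOME DIR/airflow/.ssh/id_rsa", '
          '"allow_host_key_change": "true", "no_host_key_check": "true"}'
)

session = settings.Session()
session.add(conn)
session.commit()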

Then make the corresponding adjustments to your Python script:

from airflow import DAG
from airflow.contrib.operators.ssh_operator import SSHOperator
from airflow.operators.bash_operator import BashOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'matthew',
    'start_date': datetime(2019, 8, 28)
}

dag = DAG('custom-create',
          default_args=default_args,
          schedule_interval=timedelta(days=1),
          params={'project_source': '~/IdeaProjects/custom-create-job',
                  'spark_submit': '/usr/local/bin/spark-submit',
                  'classpath': 'CLASSPATH.CustomCreate',
                  'jar_file': 'build/libs/custom-create.jar'}
          )

templated_bash_command = """
    echo "HOSTNAME: $HOSTNAME"  # check that you are connected to the expected host
    cd {{ params.project_source }}
    {{ params.spark_submit }} --class {{ params.classpath }} {{ params.jar_file }}
"""

t1 = SSHOperator(
    task_id="SSH_task",
    ssh_conn_id='ssh_connection',
    command=templated_bash_command,
    dag=dag
)

I hope this solution helps anyone who runs into a problem similar to mine.

【Discussion】:

【Solution 2】:

A similar question has already been answered - StackOverFlow Link

I think the link above will help you.

Going forward, if you want to do the same thing on AWS EMR (or Azure), there is a nice way to schedule Spark jobs - using the EmrRunJobFlowOperator, as in the example below.

Example of the above (AWS EMR):

    <airflow_EMR_task> = cover_open(json.load(open(airflow_home + '/<tasks_json_containing_all_spark_configurations>')))
    <airflow_EMR_task>['Job']['Name'] = <airflow_EMR_task>['Job']['Name'] + '<optional_postfix>'
    airflow_swperformance_cpu_creator = EmrRunJobFlowOperator(
        task_id='<task_id>',
        job_flow_overrides=<airflow_EMR_task>['Job'],
        aws_conn_id='aws_default',
        emr_conn_id='emr_default',
        retries=1,
        dag=dag
    )
    

A simple JSON would be (the same JSON file mentioned above):

    {
        "Job": {
            "Name": "<task_name>",
            "LogUri": "<task_log_uri>",
            "ReleaseLabel": "emr-5.6.0",
            "Applications": [
                {
                    "Name": "Spark"
                },
                {
                    "Name": "Hive"
                }
            ],
            "Tags": [
                {
                    "Key" : "<any_tag>",
                    "Value" : "<any_tag>"
                },
                {
                    "Key" : "<any tag>",
                    "Value": "<any_tag>"
                },
                {
                    "Key" : "<any_tag>",
                    "Value": "<any_tag value>"
                }
            ],
            "JobFlowRole": "EMR_EC2_DefaultRole_Stable",
            "ServiceRole": "EMR_DefaultRole",
            "VisibleToAllUsers": true,
            "Configurations": [
                {
                    "Classification": "spark-defaults",
                    "Properties": {
                        "spark.driver.extraJavaOptions":"-XX:+UseParallelGC -XX:+UseParallelOldGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:+ExitOnOutOfMemoryError -Dlog4j.configuration=log4j-custom.properties",
                        "spark.executor.extraJavaOptions":"-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:+ExitOnOutOfMemoryError -Dlog4j.configuration=log4j-custom.properties",
                        "spark.scheduler.mode": "FAIR",
                        "spark.eventLog.enabled": "true",
                        "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
                        "spark.sql.orc.filterPushdown": "true",
                        "spark.dynamicAllocation.enabled": "false"
                    },
                    "Configurations": []
                },
                {
                    "Classification": "spark",
                    "Properties": {
                        "maximizeResourceAllocation": "true"
                    },
                    "Configurations": []
                },
                {
                    "Classification": "hive-site",
                    "Properties": {
                        "javax.jdo.option.ConnectionUserName": "<HIVE USERNAME IF ANY>",
                        "javax.jdo.option.ConnectionPassword": "<<hive_connection_password>>",
                        "javax.jdo.option.ConnectionURL": "<Hive_URL_IF_ANY"
                    },
                    "Configurations": []
                },
                {
                    "Classification": "emrfs-site",
                    "Properties": {
                        "fs.s3.serverSideEncryption.kms.keyId": "<<encryption_key>>",
                        "fs.s3.enableServerSideEncryption": "true"
                    },
                    "Configurations": []
                },
                {
                    "Classification":"spark-env",
                    "Configurations":[{
                        "Classification":"export",
                        "Configurations":[],
                        "Properties": {
                            "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB",
                            "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB",
                            "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB",
                            "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB",
                            "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB"
                "S3_BUCKET_NAME":"<S3_bucekt_naem_if_Required>"
                        }
                    }
                    ]}
            ],
            "Instances": {
                "Ec2KeyName": "<ssh_key>",
                "KeepJobFlowAliveWhenNoSteps": false,
                "Ec2SubnetId": "<subnet>",
                "EmrManagedSlaveSecurityGroup": "<security_group>",
                "EmrManagedMasterSecurityGroup": "<security_group_parameter>",
                "AdditionalSlaveSecurityGroups": [
                    "<self_explanatory>"
                ],
                "AdditionalMasterSecurityGroups": [
                    "<self_explanatory>"
                ],
                "InstanceGroups": [
                    {
                        "InstanceCount": 4,
                        "InstanceRole": "CORE",
                        "InstanceType": "r3.xlarge",
                        "Name": "Core instance group - 2"
                    },
                    {
                        "InstanceCount": 1,
                        "InstanceRole": "MASTER",
                        "InstanceType": "r3.xlarge",
                        "Name": "Master instance group - 1"
                    }
                ]
            },
            "BootstrapActions": [],
            "Steps": [
                {
                    "Name": "download-dependencies",
                    "HadoopJarStep": {
                        "Jar": "command-runner.jar",
                        "Args": [
                            "aws",
                            "s3",
                            "cp",
                            "<appropriate_s3_location>",
                            "/home/hadoop",
                            "--recursive"
                        ],
                        "Properties": []
                    },
                    "ActionOnFailure": "TERMINATE_CLUSTER"
                },
                {
                    "Name": "run-script",
                    "HadoopJarStep": {
                        "Jar": "command-runner.jar",
                        "Args": [
                            "sudo",
                            "/bin/sh",
                            "/home/hadoop/pre-executor.sh"
                        ],
                        "Properties": []
                    },
                    "ActionOnFailure": "TERMINATE_CLUSTER"
                },
                {
                    "Name": "spark-submit",
                    "HadoopJarStep": {
                        "Jar": "command-runner.jar",
                        "Args": [
                            "spark-submit",
                            "/home/hadoop/analytics-job.jar",
                "--run-gold-job-only"
                        ],
                        "Properties": []
                    },
                    "ActionOnFailure": "TERMINATE_CLUSTER"
                }
            ]
        }
    }
    

That's all.

【Discussion】:

【Solution 3】:

Not sure if this is still relevant, but I solved a similar problem by using the absolute path to the file (the full path, without "~"). I don't know why; maybe Airflow resolves "home" to somewhere else.
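For instance, applied to the task from the question, that would look roughly like this (the expanded home directory below is a placeholder, not taken from the original post):

t3 = BashOperator(
    task_id='run_test',
    # full path instead of '~', which may not expand to the expected home directory under Airflow
    bash_command='spark-submit --class CLASSPATH.CustomCreate '
                 '/Users/<your-user>/IdeaProjects/custom-create-job/build/libs/custom-create.jar',
    dag=dag
)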

【Discussion】:
