【Posted】: 2016-07-19 19:14:22
【Question】:
I am trying to schedule a PySpark job in Oozie (CDH 5.7), but it keeps throwing an error. Please find my workflow below.
I have placed the .py script in both a local path and an HDFS path. Please let me know if I need to change anything.
Error: [org.apache.oozie.action.hadoop.SparkMain], exit code [1]
<workflow-app name="Spark_on_Oozie" xmlns="uri:oozie:workflow:0.5">
    <global>
        <configuration>
            <property>
                <name>oozie.launcher.yarn.app.mapreduce.am.env</name>
                <value>SPARK_HOME=/usr/lib/spark</value>
            </property>
        </configuration>
    </global>
    <start to="spark-9fa1"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="spark-9fa1">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn-cluster</master>
            <mode>client</mode>
            <name>spak_job</name>
            <class>clear</class>
            <jar>/home/cloudera/DQ_FRAMEWORK/oozie/application/app_amlmkte_dq/wf_prc_l1_dq/dq_prc_ini_com_spark.py</jar>
        </spark>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>
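Since the `<jar>` element above points at a local filesystem path while a `yarn-cluster` Spark action generally resolves the script through HDFS, it may be worth double-checking that the .py file is actually present at the HDFS location the workflow will read. A rough sketch of that check (the HDFS destination directory here is hypothetical, not taken from the post):

```shell
# Copy the PySpark script to HDFS (destination directory is hypothetical)
hdfs dfs -mkdir -p /user/cloudera/apps/wf_prc_l1_dq
hdfs dfs -put -f dq_prc_ini_com_spark.py /user/cloudera/apps/wf_prc_l1_dq/
# Verify the script landed where the workflow expects it
hdfs dfs -ls /user/cloudera/apps/wf_prc_l1_dq/
```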
PySpark job (only a small snippet included for testing):
import datetime

from pyspark import SparkContext
from pyspark.sql import HiveContext

if __name__ == "__main__":
    sc = SparkContext(appName="Aml Markets DQ")
    sqlContext = HiveContext(sc)
    dt1 = datetime.datetime.now()
    dq_exec_start_tm = ('%02d%02d%02d%02d%02d%02d%d' % (dt1.year, dt1.month, dt1.day, dt1.hour, dt1.minute, dt1.second, dt1.microsecond))[:-4]
    # dq_batch_start_id = app_nm + '_' + ('%02d%02d%02d%02d%02d%02d%d' % (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second, dt.microsecond))[:-4]
    # Command-line arguments
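As an aside, the timestamp built with the long `%02d...` format string can be written more compactly with `strftime`. One caveat: `%f` is always zero-padded to six digits, while the original snippet's `%d` for microseconds is not, so the two differ when `microsecond < 100000`. A minimal sketch of the `strftime` variant:

```python
import datetime

dt1 = datetime.datetime.now()
# %Y%m%d%H%M%S%f yields a 20-digit string; dropping the last four
# characters of the microsecond field leaves centisecond precision,
# e.g. 2016071919142275
dq_exec_start_tm = dt1.strftime('%Y%m%d%H%M%S%f')[:-4]
print(dq_exec_start_tm)
```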
【Discussion】:
Tags: apache-spark pyspark oozie