【Question Title】: Custom logging configuration overwritten by Spark default logging configuration
【Posted】: 2018-09-17 11:50:46
【Question】:

I am trying to write Spark logs to a custom location on the edge node. However, my log4j.properties file is being overridden by the cluster's default properties file at spark2-client/conf/log4j.properties.

Please help me resolve this.

Details below:

I am using the following versions: Spark 2.1.1.2.6.2.25-1, Scala 2.11.8

Below is my spark-submit command:

spark-submit \
--files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties \
--class com.abc.datalake.ingestion.DataCleansingValidation \
--master yarn --deploy-mode cluster \
--conf spark.executor.memory=12G \
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
--conf spark.driver.memory=2g \
--conf salience=no \
--conf spark.executor.instances=10 \
--conf spark.executor.cores=3 \
--conf spark.rule_src_path='adl://abcdadatalakedev.azuredatalakestore.net/Intake/CDCTest/Meta_RV' \
--conf spark.num_of_partition=200 \
--conf 'spark.eventLog.dir=file:///home/abcdadevadmin/spark_jar/logs/' \
adl://abcdadatalakedev.azuredatalakestore.net/Intake/jar/DataValidationFrameWorkBaselineCDC.jar cat_1 

Below is my properties file:

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

# Set everything to be logged to the console
log4j.rootCategory=DEBUG, console, FILE
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# User log
log4j.logger.DataValidationFramework=DEBUG,ROLLINGFILE
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ROLLINGFILE.File=file:///home/abcdadevadmin/spark_jar/logs/log.out
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.DatePattern='.'yyyy-MM-dd-HH-mm

Below is the log output from the Spark job.

In the log below, the -Dlog4j.configuration property is set twice: one entry picks up my custom properties file, while the other points to the default cluster properties file.

SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.25-1/spark2/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.25-1/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 14291046      4 -r-x------   1 yarn     hadoop       3635 Apr  6 05:34 ./__spark_conf__/log4j.properties
 14291064      8 -r-x------   1 yarn     hadoop       4221 Apr  6 05:34 ./__spark_conf__/task-log4j.properties
    exec /bin/bash -c "LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx12288m 
 '-Dhdp.version=' 
 '-Detwlogger.component=sparkexecutor' 
 '-DlogFilter.filename=SparkLogFilters.xml' 
 '-Dlog4j.configuration=file:/home/abcdadevadmin/spark_jar/log4j/log4j.properties' 
 '-DpatternGroup.filename=SparkPatternGroups.xml' 
 '-Dlog4jspark.root.logger=INFO,console,RFA,ETW,Anonymizer' 
 '-Dlog4jspark.log.dir=/var/log/sparkapp/\${user.name}' 
 '-Dlog4jspark.log.file=sparkexecutor.log' 
 '-Dlog4j.configuration=file:/usr/hdp/current/spark2-client/conf/log4j.properties' 
 '-Djavax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl' -Djava.io.tmpdir=$PWD/tmp 
 '-Dspark.driver.port=34369' 
 '-Dspark.history.ui.port=18080' 
 '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.16.124.102:34369 --executor-id 9 --hostname wn8-da0001.zu4isz2uwtcuhdu3c5h0tllmhh.cx.internal.cloudapp.net --cores 3 --app-id application_1522782395512_1033 --user-class-path file:$PWD/__app__.jar 1>/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010/stdout 2>/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010/stderr"

I have also tried the following options, but with no luck:

--conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties'
--driver-java-options '-Dlog4j.configuration=file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties' 

【Comments】:

    Tags: scala apache-spark logging log4j hadoop2


    【Solution 1】:

    If you use cluster deploy mode, you must point to a local path on both the driver and the executors, i.e. a path relative to the container's working directory.

    Try this:

    --conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties'
    --conf 'spark.driver.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' 
    

    Do not forget to ship your file as well:

    --files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties
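
    Putting the pieces together, a full spark-submit invocation might look like the sketch below (paths, class name, and jar location are taken from the question; treat this as a sketch, not a verified command). YARN localizes files passed via --files into each container's working directory, so the relative file:./log4j.properties URL resolves on both the driver and the executors:

```shell
spark-submit \
  --master yarn --deploy-mode cluster \
  --files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties \
  --conf 'spark.driver.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' \
  --conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' \
  --class com.abc.datalake.ingestion.DataCleansingValidation \
  adl://abcdadatalakedev.azuredatalakestore.net/Intake/jar/DataValidationFrameWorkBaselineCDC.jar cat_1
```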
    

    【Discussion】:
