【Question Title】: pyspark on EMR, should spark.executor.pyspark.memory and executor.memory be set?
【Posted】: 2019-08-05 17:39:03
【Question Description】:

I am running a heavy repartition job (heavy because of the data volume, not because of what Spark is doing), and I keep hitting various memory errors.

I am completely new to Spark. After some research I finally found a way to make the job run very fast, but if the table I try to repartition is too large, I run into another memory error.

The way I currently configure it on EMR is as follows, for 9 r3.8xlarge instances:

--executor-cores 11 --executor-memory 180G

My question is: should I also set --conf spark.executor.pyspark.memory? If so, to which value? Should it be the same as executor-memory?

I cannot confirm the following, but I have a feeling that when both are set to the same value, the job crashes with a Java heap error (so I suspect it ends up trying to allocate too much RAM).
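For reference, a minimal sketch of how both settings would be passed on the spark-submit command line; the 64g value for spark.executor.pyspark.memory is only a placeholder, not a recommendation:

    spark-submit \
      --executor-cores 11 \
      --executor-memory 180G \
      --conf spark.executor.pyspark.memory=64g \
      emr_interim_aad_ds_conversions_4.py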

As asked in the comments, the latest error I received from EMR is:

diagnostics: Application application_1564657600123_0004 failed 2 times due to AM Container for appattempt_1564657600123_0004_000002 exited with  exitCode: -104
Failing this attempt.Diagnostics: Container [pid=80943,containerID=container_1564657600123_0004_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 5.1 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1564657600123_0004_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 81090 81021 80943 80943 (python) 331 403 1709522944 15143 python emr_interim_aad_ds_conversions_4.py 
    |- 81021 80943 80943 80943 (java) 313384 5420 3625050112 345744 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 80943 80941 80943 80943 (bash) 1 1 115879936 668 /bin/bash -c LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native" /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stderr 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004 Then click on links to logs of each attempt.
. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1564757608574
     final status: FAILED
     tracking URL: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004
     user: hadoop
19/08/03 09:44:57 ERROR Client: Application diagnostics message: Application application_1564657600123_0004 failed 2 times due to AM Container for appattempt_1564657600123_0004_000002 exited with  exitCode: -104
Failing this attempt.Diagnostics: Container [pid=80943,containerID=container_1564657600123_0004_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 5.1 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1564657600123_0004_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 81090 81021 80943 80943 (python) 331 403 1709522944 15143 python emr_interim_aad_ds_conversions_4.py 
    |- 81021 80943 80943 80943 (java) 313384 5420 3625050112 345744 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.deploy.PythonRunner --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 80943 80941 80943 80943 (bash) 1 1 115879936 668 /bin/bash -c LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native" /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.deploy.PythonRunner' --primary-py-file emr_interim_aad_ds_conversions_4.py --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1564657600123_0004/container_1564657600123_0004_02_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1564657600123_0004/container_1564657600123_0004_02_000001/stderr 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://ip-172-31-35-191.eu-west-1.compute.internal:8088/cluster/app/application_1564657600123_0004 Then click on links to logs of each attempt.
. Failing the application.
Exception in thread "main" org.apache.spark.SparkException: Application application_1564657600123_0004 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1148)
    at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1525)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/08/03 09:44:57 INFO ShutdownHookManager: Shutdown hook called
19/08/03 09:44:57 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-245f0132-a6e5-4a6d-874f-a71942b1636f
19/08/03 09:44:57 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-db12c471-c7d5-4c86-8cda-bf3246ffb860
Command exiting with ret '1'

【Question Comments】:

  • Can you provide the error log here?
  • I have added it to the post

Tags: apache-spark pyspark amazon-emr


【Solution 1】:

Try increasing the memory. Below is an example configuration in a pyspark script; you can tune your memory settings like this.

    from pyspark import SparkConf

    conf = SparkConf()
    conf.set('spark.dynamicAllocation.enabled', 'false')
    conf.set('spark.yarn.am.memory', '4g')  # As the log shows, you need to increase your AM memory.
    conf.set('spark.yarn.am.cores', '2')
    conf.set('spark.executor.memoryOverhead', '1200')  # Off-heap memory (in MiB) allocated per executor; used by the container itself.
    conf.set('spark.executor.memory', '2500m')  # memory * instances should be less than the node's total memory
    conf.set('spark.executor.cores', '4')  # --executor-cores
    conf.set('spark.executor.instances', '8')  # --num-executors
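
A minimal sketch of how this conf could then be attached to a session (the app name is just an illustrative placeholder):

    from pyspark.sql import SparkSession

    # Build the session from the SparkConf defined above
    spark = SparkSession.builder \
        .appName('repartition-job') \
        .config(conf=conf) \
        .getOrCreate()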

By the way, repartition is a heavy operation. If possible, you can use coalesce instead, which avoids a full shuffle, as sketched below.
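A minimal sketch of the difference; the DataFrame name df and the partition counts are only placeholders:

    # repartition(n) triggers a full shuffle and can increase or decrease the partition count
    df_repartitioned = df.repartition(200)

    # coalesce(n) only merges existing partitions, avoiding a full shuffle,
    # so it can only reduce the number of partitions
    df_coalesced = df.coalesce(50)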

【Comments】:

  • Thanks a lot! So the spark.executor.pyspark.memory variable is not needed?
  • This is my configuration. As for your EMR cluster, you have plenty of resources; I think you can set executor.memory above 200G, memoryOverhead below 10G should be enough, and 100G for am.memory. Keep tuning the resources.
  • Great, thanks! If I have no further issues I will give it a few more runs and accept your answer (should all be fine)
  • I don't think you need to set spark.executor.pyspark.memory; it is used to limit the Python memory in the executor. See spark.apache.org/docs/latest/configuration.html for more details.
  • This does not answer the original question, even though it may have fixed this particular problem. In particular, this answer does not address the difference between overhead memory and pyspark memory.