【Title】: Spark on YARN mode ends with "Exit status: -100. Diagnostics: Container released on a *lost* node"
【Posted】: 2016-11-04 10:57:35
【Question】:

I am trying to load a database with 1 TB of data on AWS using the latest EMR. The job runs for a very long time: it had not finished after 6 hours, and after about 6h30m I got errors announcing that containers were released on *lost* nodes, and then the job failed. The log looks like this:

16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144178.0 in stage 0.0 (TID 144178, ip-10-0-2-176.ec2.internal): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000006 on host: ip-10-0-2-176.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144181.0 in stage 0.0 (TID 144181, ip-10-0-2-176.ec2.internal): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000006 on host: ip-10-0-2-176.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144175.0 in stage 0.0 (TID 144175, ip-10-0-2-176.ec2.internal): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000006 on host: ip-10-0-2-176.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144213.0 in stage 0.0 (TID 144213, ip-10-0-2-176.ec2.internal): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000006 on host: ip-10-0-2-176.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 INFO scheduler.DAGScheduler: Executor lost: 5 (epoch 0)
16/07/01 22:45:43 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1467389397754_0001_01_000007 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 5 from BlockManagerMaster.
16/07/01 22:45:43 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(5, ip-10-0-2-176.ec2.internal, 43922)
16/07/01 22:45:43 INFO storage.BlockManagerMaster: Removed 5 successfully in removeExecutor
16/07/01 22:45:43 ERROR cluster.YarnClusterScheduler: Lost executor 6 on ip-10-0-2-173.ec2.internal: Container marked as failed: container_1467389397754_0001_01_000007 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 INFO spark.ExecutorAllocationManager: Existing executor 5 has been removed (new total is 41)
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144138.0 in stage 0.0 (TID 144138, ip-10-0-2-173.ec2.internal): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000007 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144185.0 in stage 0.0 (TID 144185, ip-10-0-2-173.ec2.internal): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000007 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144184.0 in stage 0.0 (TID 144184, ip-10-0-2-173.ec2.internal): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000007 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144186.0 in stage 0.0 (TID 144186, ip-10-0-2-173.ec2.internal): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000007 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1467389397754_0001_01_000035 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 INFO scheduler.DAGScheduler: Executor lost: 6 (epoch 0)
16/07/01 22:45:43 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 6 from BlockManagerMaster.
16/07/01 22:45:43 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(6, ip-10-0-2-173.ec2.internal, 43593)
16/07/01 22:45:43 INFO storage.BlockManagerMaster: Removed 6 successfully in removeExecutor
16/07/01 22:45:43 ERROR cluster.YarnClusterScheduler: Lost executor 30 on ip-10-0-2-173.ec2.internal: Container marked as failed: container_1467389397754_0001_01_000035 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144162.0 in stage 0.0 (TID 144162, ip-10-0-2-173.ec2.internal): ExecutorLostFailure (executor 30 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000035 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 INFO spark.ExecutorAllocationManager: Existing executor 6 has been removed (new total is 40)
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144156.0 in stage 0.0 (TID 144156, ip-10-0-2-173.ec2.internal): ExecutorLostFailure (executor 30 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000035 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144170.0 in stage 0.0 (TID 144170, ip-10-0-2-173.ec2.internal): ExecutorLostFailure (executor 30 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000035 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 WARN scheduler.TaskSetManager: Lost task 144169.0 in stage 0.0 (TID 144169, ip-10-0-2-173.ec2.internal): ExecutorLostFailure (executor 30 exited caused by one of the running tasks) Reason: Container marked as failed: container_1467389397754_0001_01_000035 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
16/07/01 22:45:43 INFO scheduler.DAGScheduler: Executor lost: 30 (epoch 0)
16/07/01 22:45:43 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1467389397754_0001_01_000024 on host: ip-10-0-2-173.ec2.internal. Exit status: -100. Diagnostics: Container released on a *lost* node

I am fairly sure my network settings are fine, because I have run this same script on a much smaller table in the same environment.

Also, I know someone posted a question six months ago asking about the same issue: spark-job-error-yarnallocator-exit-status-100-diagnostics-container-released, but I still have to ask, because nobody answered it.

【Comments】:

  • I ran into the same problem. No answer :(
  • @clay Just my guess: Spot Instances are reclaimed when the market price rises above your bid, and the node is then lost. So if you are running a long job, do not use Spot Instances. I found a workaround: split the dataset into many small tasks, each running only about 5 minutes, and save each reduce result to S3; at the end, read the results back from S3 and do one more reduce. That way I avoid a single long-running job.
  • I am hitting this problem too :/
  • Similar problem here (but with a large self-join). Been fighting it for a while now. The logs on the resource manager just say the container was lost, with no indication of why. Memory might be the issue.
  • Could you share the logs from the nodes?

Tags: apache-spark hadoop-yarn emr


【Solution 1】:

It looks like other people are having the same problem, so I am posting an answer rather than writing a comment. I am not sure this will solve it, but it should give you an idea.

If you are using Spot Instances, you should know that a Spot Instance is shut down whenever the market price rises above your bid, and you will then run into this problem, even if you only use Spot Instances as slave nodes. So my solution is to not use any Spot Instances for long-running jobs.

Another idea is to split the job into many independent steps, saving the result of each step as a file on S3. If anything goes wrong, you can restart from the last step whose file was written, instead of rerunning everything.
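A minimal sketch of that checkpointing pattern (the bucket name, paths, and the groupBy step are placeholders, not from the asker's actual job):

import org.apache.spark.sql.{AnalysisException, SparkSession}

object CheckpointedJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("checkpointed-job").getOrCreate()
    val step1Path = "s3://my-bucket/intermediate/step1"

    // Reuse the step-1 result if an earlier run already wrote it to S3;
    // otherwise compute it and persist it before moving on.
    val step1 =
      try spark.read.parquet(step1Path)
      catch {
        case _: AnalysisException =>
          val computed = spark.read.parquet("s3://my-bucket/input")
            .groupBy("key").count() // placeholder for the real reduce step
          computed.write.mode("overwrite").parquet(step1Path)
          computed
      }

    // Later steps start from the checkpoint, so a lost node costs
    // at most one step instead of the whole multi-hour job.
    step1.groupBy("key").count().write.parquet("s3://my-bucket/output")
    spark.stop()
  }
}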

【Comments】:

  • So per your solution: the first option is to get dedicated CORE nodes instead of SPOT task nodes, and the second option is basically to break the job into multiple jobs and run them manually, step by step?
【Solution 2】:

Are you allocating memory dynamically? I had a similar problem, and I fixed it by switching to static allocation: calculate the executor memory, executor cores, and number of executors yourself. Try static allocation in Spark for heavy workloads.
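For example, a minimal sketch of static allocation (the numbers here are placeholders; derive yours from the cores and memory of your instance type):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("static-allocation")
  // Turn dynamic allocation off so YARN does not grow and shrink the executor pool.
  .config("spark.dynamicAllocation.enabled", "false")
  // Equivalent to the --num-executors / --executor-cores / --executor-memory flags.
  .config("spark.executor.instances", "40")
  .config("spark.executor.cores", "4")
  .config("spark.executor.memory", "10g")
  // Off-heap headroom; YARN kills containers that exceed memory plus overhead.
  .config("spark.yarn.executor.memoryOverhead", "2048")
  .getOrCreate()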

【Comments】:

  • Would unpersisting unused DataFrames help in this case?
  • You can try it. Are you on EMR or a Cloudera stack? Also check whether the YARN scheduler is using Fair or Capacity resource management, and then try static memory allocation instead of dynamic allocation by passing the number of executors and so on...
  • I am using EMR, and I did not see any difference with dynamic memory allocation even after calling unpersist.
  • What I am asking you to do is use static memory allocation by turning dynamic allocation off: calculate and pass the number of executors, the executor memory, and the executor cores yourself, rather than leaving it to Spark's dynamic memory allocation.
【Solution 3】:

This means your YARN container went down. To debug what happened, you have to read the YARN logs, either with the official CLI (yarn logs -applicationId <applicationId>) or, feel free to use (and contribute to) my project https://github.com/ebuildy/yoga, a YARN viewer packaged as a web application.

You should see a lot of worker errors there.
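For the log in the question, the application ID can be read off the container names (container_1467389397754_0001_01_000006 belongs to application_1467389397754_0001), so the command would look like:

yarn logs -applicationId application_1467389397754_0001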

【Comments】:

【Solution 4】:

I ran into the same problem. I found some clues in this article on DZone:
https://dzone.com/articles/some-lessons-of-spark-and-memory-issues-on-emr

"The problem was solved by increasing the number of DataFrame partitions (in this case, from 1,024 to 2,048), which reduced the memory needed per partition."

So I increased the number of DataFrame partitions, and that solved my problem.
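A minimal sketch of that fix, assuming df is the DataFrame being processed and spark is the active SparkSession (2,048 is the count cited above; tune it to your data size):

// Raise the partition count so each partition, and thus each task, holds less data.
val repartitioned = df.repartition(2048)

// For shuffle-heavy stages, the SQL shuffle partition count matters as well.
spark.conf.set("spark.sql.shuffle.partitions", "2048")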

【Comments】:

【Solution 6】:

Amazon has provided their own solution for this, handled through resource allocation; there is no workaround available from the user's side.

【Comments】:

  • As it's currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.