【Question Title】: org.apache.spark.SparkException: Job aborted due to stage failure: Task 98 in stage 11.0 failed 4 times
【Posted】: 2019-08-29 03:02:44
【Question Description】:

I am running Spark jobs on Google Cloud Dataproc, using Zeppelin as my editor. I am trying to write JSON data to a GCS bucket. It succeeded when I tried a 10MB file, but it fails with a 10GB file. My Dataproc cluster has 1 master with 4 CPUs, 26GB of memory, and a 500GB disk, plus 5 workers with the same configuration. I would think it should be able to handle 10GB of data.

My command is toDatabase.repartition(10).write.json("gs://mypath")

The error is:

org.apache.spark.SparkException: Job aborted.
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
  at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
  at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
  at org.apache.spark.sql.DataFrameWriter.json(DataFrameWriter.scala:528)
  ... 54 elided
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 98 in stage 11.0 failed 4 times, most recent failure: Lost task 98.3 in stage 11.0 (TID 3895, etl-w-2.us-east1-b.c.team-etl-234919.internal, executor 294): ExecutorLostFailure (executor 294 exited caused by one of the running tasks) Reason: Container marked as failed: container_1554684028327_0001_01_000307 on host: etl-w-2.us-east1-b.c.team-etl-234919.internal. Exit status: 143. Diagnostics: [2019-04-08 01:50:14.153]Container killed on request. Exit code is 143
[2019-04-08 01:50:14.153]Container exited with a non-zero exit code 143.
[2019-04-08 01:50:14.154]Killed by external signal

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:194)
  ... 74 more

Any idea why?

【Comments】:

  • Does your job succeed if you write the file to HDFS instead of GCS?
  • What is your original number of partitions? You should use coalesce instead of repartition.
  • @IgorDvorzhak My job failed after it failed to write the file to MySQL.
  • What I am asking is: if you modify this job to write to HDFS, does it fail? Or does this job succeed even when writing to GCS, while some other job fails when writing to MySQL?

Tags: scala apache-spark google-cloud-platform google-cloud-storage google-cloud-dataproc


【Solution 1】:

If your Spark job works on a smaller dataset but fails on the larger one, you are most likely hitting out-of-memory limits. Per-worker memory problems depend more on your partitioning and per-executor settings than on the total memory available cluster-wide (so creating a larger cluster will not help with this kind of problem).

You can try any combination of the following:

  1. Repartition into more output partitions instead of 10 (see the sketch after this list)
  2. Create the cluster with highmem machines instead of standard ones
  3. Create the cluster with Spark memory settings that change the memory-to-CPU ratio, for example: gcloud dataproc clusters create --properties spark:spark.executor.cores=1. This changes each executor to run only one task at a time with the same amount of memory, whereas Dataproc normally runs 2 executors per machine and divides the CPUs accordingly. On a 4-core machine, you would normally get 2 executors, each allowed 2 cores; this setting instead gives each of those 2 executors only 1 core, while still using half the machine's memory.
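
For the first option, here is a minimal sketch of the adjusted write, assuming the same toDatabase DataFrame from the question (the partition count of 200 is an illustrative starting point, not a tuned value):

  // Spreading the write across more, smaller partitions lowers the amount of
  // data each task must hold at once; 200 is a hypothetical starting point to tune.
  toDatabase
    .repartition(200)
    .write
    .json("gs://mypath")

A common rule of thumb is to aim for output partitions on the order of 100-200MB each, so a 10GB dataset would suggest somewhere around 50-100 partitions rather than 10.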

【Discussion】:
