[Question Title]: Spark: pyspark crash for some datasets - ubuntu
[Posted]: 2016-12-01 23:38:07
[Question]:

I am using Ubuntu with a local Spark installation (spark-2.0.2). My dataset is small, and my code runs fine as long as the data stays small. If I grow the dataset (a txt file) by just a few lines, the error below occurs.

I tried the exact same code on a Cloudera VM with Hadoop installed, and there it ran fine.

So, it must be some memory issue or limitation on my Ubuntu machine.

There are other similar questions, for example: Apache Spark: pyspark crash for large dataset

but they did not help in my case. I don't have a Hadoop cluster, just Spark, Python 2.7, and Java 1.8. Everything works fine until there is a more complex computation or a larger dataset, and then it crashes.

Any clues?

The error:

spark-submit myCalc.py

ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 175, in main
    process()
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 317, in func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 1792, in combineLocally
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/shuffle.py", line 238, in mergeValues
    d[k] = comb(d[k], v) if k in d else creator(v)
  File "/home/alg/Documents//Spark/code/customer_orders/myCalc.py", line 24, in <lambda>
    reduced_total = RDD_map.reduceByKey(lambda x,y: (x[1]+y[1]))
TypeError: 'float' object has no attribute '__getitem__'

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:390)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
16/12/01 23:25:51 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "/home/alg/Documents//Spark/code/customer_orders/myCalc.py", line 28, in <module>
    results = reduced_total.collect()
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 776, in collect
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 175, in main
    process()
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 317, in func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 1792, in combineLocally
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/shuffle.py", line 238, in mergeValues
    d[k] = comb(d[k], v) if k in d else creator(v)
  File "/home/alg/Documents//Spark/code/customer_orders/myCalc.py", line 24, in <lambda>
    reduced_total = RDD_map.reduceByKey(lambda x,y: (x[1]+y[1]))
TypeError: 'float' object has no attribute '__getitem__'

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:390)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1873)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1886)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1899)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1913)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:912)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:911)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 175, in main
    process()
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 317, in func
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 1792, in combineLocally
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/shuffle.py", line 238, in mergeValues
    d[k] = comb(d[k], v) if k in d else creator(v)
  File "/home/alg/Documents//Spark/code/customer_orders/myCalc.py", line 24, in <lambda>
    reduced_total = RDD_map.reduceByKey(lambda x,y: (x[1]+y[1]))
TypeError: 'float' object has no attribute '__getitem__'

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:390)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more

[Comments]:

    Tags: python ubuntu hadoop apache-spark pyspark


    [Answer 1]:

    So, it must be some memory issue or limitation on my (...) machine.

    It is not. Although you don't provide a reproducible example, under normal conditions (with sane __add__ and __getitem__ implementations) the following function:

    lambda x, y: x[1] + y[1]
    

    is not a valid choice for reduceByKey. The function you pass to reduceByKey has to be associative and commutative. What is less obvious, it should take arguments of the same type as the one it returns.

    Using Python 3.5+ annotations:

    from typing import TypeVar
    
    T = TypeVar('T')
    
    def f(t1: T, t2: T) -> T:
        return ...
    

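    As a quick illustration (a minimal sketch; the sample values are made up), you can spot-check a candidate reducer on plain Python values before handing it to Spark:

    from operator import add

    f = add  # a well-behaved reducer for float values
    a, b, c = 1.0, 2.0, 3.0

    assert f(f(a, b), c) == f(a, f(b, c))  # associative
    assert f(a, b) == f(b, a)              # commutative
    assert type(f(a, b)) is type(a)        # same type in and out
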
    Why doesn't the function you use fail every time? Because its behavior depends on the data distribution across partitions. Let's say you have tuples of shape (string, (string, float)):

    will_succeed = sc.parallelize([
      ("a", ("foo", 1.0)), ("a", ("bar", 1.0)),
      ("b", ("foo", 1.0)), ("b", ("bar", 1.0))
    ], 2)
    
    will_succeed.reduceByKey(lambda x, y: x[1] + y[1]).collect()
    
    [('b', 2.0), ('a', 2.0)]
    

    compared to:

    will_fail = sc.parallelize([
      ("a", ("foo", 1.0)), ("a", ("bar", 1.0)), ("a", ("baz", 1.0)),
      ("b", ("foo", 1.0)), ("b", ("bar", 1.0))
    ], 2)
    
    will_fail.reduceByKey(lambda x, y: x[1] + y[1]).collect()
    
    TypeError: 'float' object is not subscriptable
    ...
    

    In the first case, the execution for key a is equivalent to:

    f(("foo", 1.0), ("bar", 1.0))
    2.0
    

    where f is your function. In the second case, it is equivalent to (not necessarily in this order):

    f(f(("foo", 1.0),  ("bar", 1.0)), ("baz", 1.0))
    f(2.0, ("baz", 1.0))
    exception!
    

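    You can reproduce the failure in plain Python, without Spark at all, using the lambda from the traceback (the tuple values here are made up):

    f = lambda x, y: x[1] + y[1]

    partial = f(("foo", 1.0), ("bar", 1.0))  # returns 2.0, a bare float
    f(partial, ("baz", 1.0))                 # TypeError: 'float' object has no attribute '__getitem__'
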
    A correct solution could be, for example:

    from operator import itemgetter, add
    
    # will fail no more
    will_fail.mapValues(itemgetter(1)).reduceByKey(add) 
    

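    Applied to the line from the traceback, the fix would look roughly like this (a sketch only: it assumes RDD_map holds (key, (something, float)) pairs, since the actual shape in myCalc.py is not shown):

    from operator import add, itemgetter

    # hypothetical rewrite of line 24 of myCalc.py:
    reduced_total = RDD_map.mapValues(itemgetter(1)).reduceByKey(add)
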
    or use one of combineByKey or aggregateByKey:

    will_fail.combineByKey(itemgetter(1), lambda x, y: x + y[1], add)
    

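    The aggregateByKey variant mentioned above would look like this (a sketch under the same data shape):

    # zero value 0.0; within each partition add the float component,
    # then sum the per-partition partials:
    will_fail.aggregateByKey(0.0, lambda acc, v: acc + v[1], add)
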
    [Comments]:
