[Question Title]: RuntimeError: Unsupported type in conversion to Arrow: VectorUDT
[Posted]: 2018-12-13 00:04:18
[Question Description]:

I want to convert a large Spark DataFrame with more than 1,000,000 rows to pandas. I tried the following code to convert the Spark DataFrame to a pandas DataFrame:

spark.conf.set("spark.sql.execution.arrow.enabled", "true")
result.toPandas()

However, I get this error:

TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py in toPandas(self)
   1949                 import pyarrow
-> 1950                 to_arrow_schema(self.schema)
   1951                 tables = self._collectAsArrow()

/usr/local/lib/python3.6/dist-packages/pyspark/sql/types.py in to_arrow_schema(schema)
   1650     fields = [pa.field(field.name, to_arrow_type(field.dataType), nullable=field.nullable)
-> 1651               for field in schema]
   1652     return pa.schema(fields)

/usr/local/lib/python3.6/dist-packages/pyspark/sql/types.py in <listcomp>(.0)
   1650     fields = [pa.field(field.name, to_arrow_type(field.dataType), nullable=field.nullable)
-> 1651               for field in schema]
   1652     return pa.schema(fields)

/usr/local/lib/python3.6/dist-packages/pyspark/sql/types.py in to_arrow_type(dt)
   1641     else:
-> 1642         raise TypeError("Unsupported type in conversion to Arrow: " + str(dt))
   1643     return arrow_type

TypeError: Unsupported type in conversion to Arrow: VectorUDT

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
<ipython-input-138-4e12457ff4d5> in <module>()
      1 spark.conf.set("spark.sql.execution.arrow.enabled", "true")
----> 2 result.toPandas()

/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py in toPandas(self)
   1962                     "'spark.sql.execution.arrow.enabled' is set to true. Please set it to false "
   1963                     "to disable this.")
-> 1964                 raise RuntimeError("%s\n%s" % (_exception_message(e), msg))
   1965         else:
   1966             pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)

RuntimeError: Unsupported type in conversion to Arrow: VectorUDT
Note: toPandas attempted Arrow optimization because 'spark.sql.execution.arrow.enabled' is set to true. Please set it to false to disable this.

It doesn't work, but if I set Arrow to false, it does work. However, it is then far too slow... Any ideas?

[Question Discussion]:

  • Please share 1) a sample of your data and 2) the output of df.printSchema().

[Tags]: pandas apache-spark dataframe pyspark pyarrow


[Solution 1]:

Arrow supports only a small set of types, and Spark UserDefinedTypes, including the ml / mllib VectorUDT, are not among them.

If you want to use Arrow, you have to convert the data to a supported format first. One possible solution is to expand the Vectors into separate columns - How to split Vector into columns - using PySpark.

You can also serialize the output using the to_json method:

from pyspark.sql.functions import to_json

df.withColumn("your_vector_column", to_json("your_vector_column"))

But if the data is large enough that toPandas becomes a serious bottleneck, I would reconsider collecting data like this in the first place.

[Discussion]:
