【Posted】: 2018-09-22 16:36:03
【Problem description】:
I am using Spark 2.2 and trying to read a dataset from a TSV file in PySpark, like this:
student_id subjects result
"1001" "[physics, chemistry]" "pass"
"1001" "[biology, math]" "fail"
"1002" "[economics]" "pass"
"1002" "[physics, chemistry]" "fail"
I want the following result:
student_id subject result
"1001" "physics" "pass"
"1001" "chemistry" "pass"
"1001" "biology" "fail"
"1001" "math" "fail"
"1002" "economics" "pass"
"1002" "physics" "fail"
"1002" "chemistry" "fail"
I did the following, but it does not seem to work:
df = spark.read.format("csv").option("header", "true").option("mode", "FAILFAST") \
.option("inferSchema", "true").option("sep", ' ').load("ds3.tsv")
df.printSchema()
When I run "printSchema" I see the following:
root
|-- student_id: integer (nullable = true)
|-- subjects: string (nullable = true)
|-- result: string (nullable = true)
When I then do the following, i.e. use the explode function:
df.withColumn("subject", explode(col("subjects"))).select("student_id", "subject", "result").show(2)
I get the following exception:
AnalysisException: "cannot resolve 'explode(`subjects`)' due to data type mismatch: input to function explode should be array or map type, not string;;\n'Project [student_id#10, subjects#11, results#12, explode(subjects#11) AS subject#30]\n+- AnalysisBarrier\n +- Relation[student_id#10,subjects#11,result#12] csv\n"
I read somewhere that PySpark does not support ArrayType for strings.
Would it be a good idea to write a UDF that trims the "[]" characters from both ends of the "subjects" column values, and then use the "split" function followed by "explode"?
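The bracket-trimming idea described above can be sketched in plain Python first; the helper below (`parse_subjects` is a hypothetical name, not a Spark API) mirrors what such a UDF would do, assuming the column values look like `[physics, chemistry]`:

```python
# Sketch of the parsing logic a UDF for the "subjects" column could use.
# parse_subjects is a hypothetical helper, not part of any Spark API.

def parse_subjects(raw):
    """Trim surrounding [ ] (and stray quotes) and split on commas."""
    inner = raw.strip().strip('"').strip("[]").strip()
    if not inner:
        return []
    return [s.strip() for s in inner.split(",")]

print(parse_subjects("[physics, chemistry]"))  # ['physics', 'chemistry']
print(parse_subjects("[economics]"))           # ['economics']
```

In PySpark this could be registered with `udf(parse_subjects, ArrayType(StringType()))` and combined with `explode`; alternatively, the built-in `regexp_replace` and `split` functions from `pyspark.sql.functions` would avoid a Python UDF entirely.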
【Discussion】:
- Obviously, what you have there is just a string of subjects.
Tags: apache-spark dataframe pyspark user-defined-functions explode