You actually have two options here: either declare a new schema and nest your pyspark.sql.types.StructField entries (see the sketch further down), or use pyspark.sql.functions.struct, as follows:
import pyspark.sql.functions as f

# Sample data; assumes an active SparkSession bound to `spark`.
df = spark.sparkContext.parallelize([
    [0, 1.0, 0.71, 0.143],
    [1, 0.0, 0.97, 0.943],
    [0, 0.123, 0.27, 0.443],
    [1, 0.67, 0.3457, 0.243],
    [1, 0.39, 0.7777, 0.143]
]).toDF(['col1', 'col2', 'col3', 'col4'])

# Bundle col2 and col3 into a single struct column named 'tada'.
df_new = df.withColumn(
    'tada',
    f.struct(f.col('col2').alias('subcol_1'), f.col('col3').alias('subcol_2'))
)
df_new.show()
+----+-----+------+-----+--------------+
|col1| col2| col3| col4| tada|
+----+-----+------+-----+--------------+
| 0| 1.0| 0.71|0.143| [1.0, 0.71]|
| 1| 0.0| 0.97|0.943| [0.0, 0.97]|
| 0|0.123| 0.27|0.443| [0.123, 0.27]|
| 1| 0.67|0.3457|0.243|[0.67, 0.3457]|
| 1| 0.39|0.7777|0.143|[0.39, 0.7777]|
+----+-----+------+-----+--------------+
Now, given that tada is a StructType column, you can access its fields with the [...] notation, like so:
df_new.select(f.col('tada')['subcol_1']).show()
+-------------+
|tada.subcol_1|
+-------------+
| 1.0|
| 0.0|
| 0.123|
| 0.67|
| 0.39|
+-------------+
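Equivalently, the same nested field can be selected with a dot-path string (the resulting column is then named just subcol_1):

df_new.select('tada.subcol_1').show()  # same values as above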
Printing the schema also sums this up:
df_new.printSchema()
root
 |-- col1: long (nullable = true)
 |-- col2: double (nullable = true)
 |-- col3: double (nullable = true)
 |-- col4: double (nullable = true)
 |-- tada: struct (nullable = false)
 |    |-- subcol_1: double (nullable = true)
 |    |-- subcol_2: double (nullable = true)
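For completeness, here is a minimal sketch of the first option, declaring the nested structure up front with pyspark.sql.types (the rows and field names are illustrative):

from pyspark.sql.types import StructType, StructField, LongType, DoubleType

# Nested schema: 'tada' is itself a StructType with two double fields.
schema = StructType([
    StructField('col1', LongType(), True),
    StructField('tada', StructType([
        StructField('subcol_1', DoubleType(), True),
        StructField('subcol_2', DoubleType(), True),
    ]), True),
])

# Rows must already carry the nested tuple for the struct field.
df_nested = spark.createDataFrame([(0, (1.0, 0.71)), (1, (0.0, 0.97))], schema)
df_nested.printSchema()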
NB1: Instead of f.col(...), which fetches an existing column, you can use any other function that returns a pyspark.sql.functions.Column, e.g. f.lit().
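For instance, a struct mixing a constant with an existing column (a minimal sketch, reusing df from above):

# f.lit(0.0) is a constant Column; any Column-returning expression works here.
df_mixed = df.withColumn(
    'tada',
    f.struct(f.lit(0.0).alias('subcol_1'), f.col('col3').alias('subcol_2'))
)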
NB2: When using f.col(...), the existing column types are carried over, as you can see in the schema above.
Hope this helps!