[Posted]: 2020-10-20 22:45:37
[Problem description]:
I have a Spark DataFrame consisting of 12 rows and a varying number of columns, 22 in this case.
I want to convert it into a DataFrame with the following schema:
root
|-- data: array (nullable = false)
| |-- element: struct (containsNull = false)
| | |-- ast: double (nullable = true)
| | |-- blk: double (nullable = true)
| | |-- dreb: double (nullable = true)
| | |-- fg3_pct: double (nullable = true)
| | |-- fg3a: double (nullable = true)
| | |-- fg3m: double (nullable = true)
| | |-- fg_pct: double (nullable = true)
| | |-- fga: double (nullable = true)
| | |-- fgm: double (nullable = true)
| | |-- ft_pct: double (nullable = true)
| | |-- fta: double (nullable = true)
| | |-- ftm: double (nullable = true)
| | |-- games_played: long (nullable = true)
| | |-- seconds: double (nullable = true)
| | |-- oreb: double (nullable = true)
| | |-- pf: double (nullable = true)
| | |-- player_id: long (nullable = true)
| | |-- pts: double (nullable = true)
| | |-- reb: double (nullable = true)
| | |-- season: long (nullable = true)
| | |-- stl: double (nullable = true)
| | |-- turnover: double (nullable = true)
where each element of the data array corresponds to a different row of the original DataFrame.
The end goal is to export it as a .json file of the form:
{"data": [{row1}, {row2}, ..., {row12}]}
The code I am currently using is:
val best_12_struct = best_12.withColumn("data",
  array((0 to 11).map(i =>
    struct(col("ast"), col("blk"), col("dreb"), col("fg3_pct"), col("fg3a"),
           col("fg3m"), col("fg_pct"), col("fga"), col("fgm"),
           col("ft_pct"), col("fta"), col("ftm"), col("games_played"),
           col("seconds"), col("oreb"), col("pf"), col("player_id"),
           col("pts"), col("reb"), col("season"), col("stl"), col("turnover"))) : _*))
val best_12_data = best_12_struct.select("data")
However, array with (0 to 11) copies the same element 12 times into data. As a result, the .json I obtain contains 12 {"data": ...} objects, each holding the same row repeated 12 times, instead of a single {"data": ...} with 12 elements, one per row of the original DataFrame.
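One way to obtain a single array with one element per original row, rather than twelve copies of the same row, is to aggregate the whole DataFrame with collect_list over a struct of all columns. A minimal sketch, assuming the original DataFrame is best_12; note that collect_list does not guarantee row order:

import org.apache.spark.sql.functions.{col, collect_list, struct}

// Pack every column of a row into one struct, then collect the twelve
// row-structs into a single array column named "data"; the result is a
// DataFrame with exactly one row, which can then be written out as above.
val rowStruct = struct(best_12.columns.map(col): _*)
val best_12_data = best_12.agg(collect_list(rowStruct).as("data"))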
[Discussion]:
- Can you add sample data and the expected output in JSON?
Tags: scala apache-spark apache-spark-sql