[Posted]: 2023-03-19 20:04:02
[Problem description]:
Using the following marketing JSON file:
{
  "request_id": "xx",
  "timeseries_stats": [
    {
      "timeseries_stat": {
        "id": "xx",
        "timeseries": [
          {
            "start_time": "xx",
            "end_time": "xx",
            "stats": {
              "impressions": xx,
              "swipes": xx,
              "view_completion": xx,
              "spend": xx
            }
          },
          {
            "start_time": "xx",
            "end_time": "xx",
            "stats": {
              "impressions": xx,
              "swipes": xx,
              "view_completion": xx,
              "spend": xx
            }
          }
        ]
      }
    }
  ]
}
I can easily parse this with pandas and get a DataFrame in the desired format:
start_time end_time impressions swipes view_completion spend
xx xx xx xx xx xx
xx xx xx xx xx xx
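For reference, the pandas side can be sketched with `pd.json_normalize`, which flattens the nested `stats` dict into its own columns. The sample values below are hypothetical stand-ins for the `xx` placeholders:

```python
import pandas as pd

# hypothetical sample standing in for the "xx" placeholders
JSON_resp = {
    "request_id": "r1",
    "timeseries_stats": [
        {"timeseries_stat": {
            "id": "a1",
            "timeseries": [
                {"start_time": "t0", "end_time": "t1",
                 "stats": {"impressions": 1, "swipes": 2,
                           "view_completion": 3, "spend": 4}}]}}]}

# json_normalize expands the nested "stats" dict into "stats.<metric>" columns
df = pd.json_normalize(
    JSON_resp["timeseries_stats"][0]["timeseries_stat"]["timeseries"])

# strip the "stats." prefix to match the desired layout
df.columns = [c.replace("stats.", "") for c in df.columns]
```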
But I need to do this with Spark on AWS Glue.
After creating the initial Spark DataFrame (df) with
rdd = sc.parallelize(JSON_resp['timeseries_stats'][0]['timeseries_stat']['timeseries'])
df = rdd.toDF()
I tried to expand the stats key as follows:
df_expanded = df.select("start_time","end_time","stats.*")
Error:
AnalysisException: 'Can only star expand struct data types.
Attribute: `ArrayBuffer(stats)`;'
and
from pyspark.sql.functions import explode
df_expanded = df.select("start_time","end_time").withColumn("stats", explode(df.stats))
Error:
AnalysisException: 'The number of aliases supplied in the AS clause does not match the
number of columns output by the UDTF expected 2 aliases but got stats ;
I'm quite new to Spark, so any help with either of these approaches would be much appreciated!
Here is a very similar question:
parse array of dictionaries from JSON with Spark
except that I also need to flatten this extra stats key.
[Comments]:
Tags: json apache-spark pyspark apache-spark-sql aws-glue