Posted: 2020-04-05 13:23:25
Question:
I am trying to build a DataFrame from an arbitrary JSON string. The JSON is usually deep and nested several levels. The JSON string looks like this:
val json_string = """{
  "Total Value": 3,
  "Topic": "Example",
  "values": [
    {
      "value1": "#example1",
      "points": [
        ["123", "156"]
      ],
      "properties": {
        "date": "12-04-19",
        "model": "Model example 1"
      }
    },
    {
      "value2": "#example2",
      "points": [
        ["124", "157"]
      ],
      "properties": {
        "date": "12-05-19",
        "model": "Model example 2"
      }
    }
  ]
}"""
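For quick experimentation, a string like the one above can be read into a DataFrame directly, without first writing it to a file. This is a minimal sketch; it assumes an already-running `SparkSession` named `spark`:

```scala
import spark.implicits._  // for .toDS on a Seq

// Wrap the literal in a one-element Dataset[String] and let Spark infer the schema.
val rawDf = spark.read.json(Seq(json_string).toDS)
rawDf.printSchema()
```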
The output I expect is:
+-----------+---------+----------+------------------+------------------+------------------------+-------------------------+
|Total Value| Topic   | values 1 | values.points[0] | values.points[1] | values.properties.date | values.properties.model |
+-----------+---------+----------+------------------+------------------+------------------------+-------------------------+
| 3         | Example | example1 | 123              | 156              | 12-04-19               | Model example 1         |
| 3         | Example | example2 | 124              | 157              | 12-05-19               | Model example 2         |
+-----------+---------+----------+------------------+------------------+------------------------+-------------------------+
I am doing the flattening, but today I have to pick a key from the JSON ("values") to obtain the schema before flattening, and I don't want it to work that way. The flattening should be independent of any key and proceed accordingly, as shown in the output above. Also, even after supplying the key, since `points` is an array I still get two rows with the same record; instead, `points[0]` should become one column and `points[1]` a different column. My Scala Spark code is:
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{DataType, StructType}

val key = "values" // Ideally this key should not have to be given in my case.
val jsonFullDFSchemaString = spark.read.json(json_location).select(col(key)).schema.json // changing values to reportData
val jsonFullDFSchemaStructType = DataType.fromJson(jsonFullDFSchemaString).asInstanceOf[StructType]
val df = spark.read.schema(jsonFullDFSchemaStructType).json(json_location)
Then I flatten it with:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{ArrayType, StructType}

def flattenDataframe(df: DataFrame): DataFrame = {
  // All top-level fields of the current schema.
  val fields = df.schema.fields
  val fieldNames = fields.map(_.name)
  for (i <- fields.indices) {
    val field = fields(i)
    field.dataType match {
      case _: ArrayType =>
        // Explode the array into one row per element, then recurse.
        val fieldName = field.name
        val fieldNamesExcludingArray = fieldNames.filter(_ != fieldName)
        val fieldNamesAndExplode =
          fieldNamesExcludingArray ++ Array(s"explode_outer($fieldName) as $fieldName")
        val explodedDf = df.selectExpr(fieldNamesAndExplode: _*)
        return flattenDataframe(explodedDf)
      case structType: StructType =>
        // Promote each child of the struct to a top-level column, then recurse.
        val fieldName = field.name
        val childFieldnames = structType.fieldNames.map(child => s"$fieldName.$child")
        val newFieldNames = fieldNames.filter(_ != fieldName) ++ childFieldnames
        val renamedCols = newFieldNames.map(x =>
          col(x).as(
            x.replace(".", "_").replace("$", "_").replace("__", "_")
              .replace(" ", "").replace("-", "")))
        val flattenedDf = df.select(renamedCols: _*)
        return flattenDataframe(flattenedDf)
      case _ =>
    }
  }
  df
}
Finally I get the flattened DataFrame from the JSON:
val tableSchemaDF = flattenDataframe(df)
tableSchemaDF.show()
So ideally, any JSON file should be flattened accordingly, as shown above, without supplying any root key and without producing two rows. I hope I have provided enough detail. Any help would be greatly appreciated. Thanks.
Please note: the JSON data comes from an API, so I cannot be sure the root key "values" will exist. That is why I do not want to supply a key for the flattening.
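Since the root key cannot be assumed, one option is to skip the `select(col(key))` step entirely and let `flattenDataframe` recurse from the full inferred schema. A hedged sketch reusing the function above (`json_location` is the same path as before; `multiLine` is needed when the JSON file is pretty-printed across lines):

```scala
// Read the whole document with its inferred schema -- no root key needed.
val fullDf = spark.read.option("multiLine", true).json(json_location)

// flattenDataframe walks every struct and array it finds, so no key has to be named.
val flatDf = flattenDataframe(fullDf)
flatDf.show(false)
```

Note that this still explodes arrays into one row per element; turning `points` into `points[0]`/`points[1]` columns would require a positional projection instead of `explode_outer`.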
Comments:
- Did you validate your JSON? I don't think it is well formed.
- Thanks @baithmbarek for correcting my JSON string.
- I think this will help @Mahesh
Tags: json scala apache-spark apache-spark-sql flatten