【Question Title】: Filter column with two different schemas in Spark Scala
【Posted】: 2019-09-20 16:18:26
【Question】:

I have a dataframe with three columns: ID, CO_ID, and DATA, where the DATA column contains two different schemas:

|ID  |CO_ID |Data
|130 |NA    | [{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 230, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}]
|536 |NA    | [{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 230, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}]   
|518 |NA    | null
|938 |611   | {"NUMBER":"AW9F","ADDRESS":"PLOT NO. 230, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}                                                                                                                           
|742 |NA    | {"NUMBER":"AW9F","ADDRESS":"PLOT NO. 230, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}

Now I want to create a dataframe with the columns ID, CO_ID, NUMBER, ADDRESS, and NAME. Where there is no value, NUMBER, ADDRESS, and NAME should be filled with null.

First I need to separate the rows of the dataframe above by their different schemas. How can I do that?

【Comments】:

  • What is the type of the Data column? It looks like some rows are arrays and some are not.
  • Yes, that is exactly the problem: I have data of different shapes. How can I handle this?
  • Spark does not allow mixed types in one column. What is the result of df.printSchema()?
  • I have a CSV file containing data like this, and now I have to build a table with the columns above. Is there any way to do it?
  • The schema prints as: root |-- ID: string (nullable = true) |-- CO_ID: string (nullable = true) |-- DATA: string (nullable = true)

标签: apache-spark apache-spark-sql


【Solution 1】:

Here is one approach:

val df = Seq(
(130, "NA","""[{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 231, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}]"""),
(536, "NA","""[{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 232, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}]"""),
(518,"NA", null),
(938, "611", """{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 233, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}"""),
(742, "NA", """{"NUMBER":"AW9F","ADDRESS":"PLOT NO. 234, JAIPUR RJ","PHONE":999999999,"NAME":"SACHIN"}"""))
.toDF("ID","CO_ID","Data")


import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.functions.{from_json, array, when, length, lit}

val schema = (new StructType)
   .add("NUMBER", "string", true)
   .add("ADDRESS", "string", true)
   .add("PHONE", "long", true) // PHONE is an unquoted number in the JSON
   .add("NAME", "string", true)

val df_ar = df.withColumn("json",
                       when($"Data".startsWith("[{") && $"Data".endsWith("}]"),
                         $"Data".substr(lit(2), length($"Data") - 2)) // if Data starts with '[{' and ends with '}]', strip the enclosing [ ]
                         .otherwise($"Data"))
              .withColumn("json", from_json($"json", schema)) // convert the JSON string to a struct using the given schema
              .withColumn("number", $"json.NUMBER")
              .withColumn("address", $"json.ADDRESS")
              .withColumn("name", $"json.NAME")

df_ar.select("ID", "CO_ID", "number", "address", "name").show(false)

This solution first removes the enclosing [] from the array-style JSON strings, then applies the given schema to convert each JSON string into a StructType column.
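The bracket-stripping step can be illustrated outside Spark as a plain Scala function (a minimal sketch; the helper name normalizeJson is made up for illustration and is not part of the solution's code):

```scala
// Sketch of the normalization step above in plain Scala.
// normalizeJson is a hypothetical name, not Spark API.
def normalizeJson(data: String): String =
  if (data != null && data.startsWith("[{") && data.endsWith("}]"))
    data.substring(1, data.length - 1) // drop the enclosing [ and ]
  else
    data // plain-object rows and nulls pass through unchanged

println(normalizeJson("""[{"NUMBER":"AW9F"}]""")) // {"NUMBER":"AW9F"}
println(normalizeJson("""{"NUMBER":"AW9F"}"""))   // {"NUMBER":"AW9F"}
```

Note that this trick only yields valid JSON when the array holds a single object; a multi-element array would become `{...},{...}` after stripping, which from_json cannot parse.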

Output:

+---+-----+------+-----------------------+------+
|ID |CO_ID|number|address                |name  |
+---+-----+------+-----------------------+------+
|130|NA   |AW9F  |PLOT NO. 231, JAIPUR RJ|SACHIN|
|536|NA   |AW9F  |PLOT NO. 232, JAIPUR RJ|SACHIN|
|518|NA   |null  |null                   |null  |
|938|611  |AW9F  |PLOT NO. 233, JAIPUR RJ|SACHIN|
|742|NA   |AW9F  |PLOT NO. 234, JAIPUR RJ|SACHIN|
+---+-----+------+-----------------------+------+

【Discussion】:

  • Thanks Alex, but can I do it without a schema?
  • Hi @Mohammad, I believe that without a schema you would have to process the strings directly, which would be quite complicated.
  • If I don't want to use a schema, how would I process the strings directly, and how complicated would it get? Could you elaborate?
  • For example, you could process each row by extracting the values with regular expressions combined with a UDF. That would certainly be much slower and hurt performance. So what is the problem with using the specific schema shown in the question?
  • Actually the DATA column can contain many keys, sometimes 5 keys, sometimes 4, sometimes fewer or more.
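For reference, the regex-based extraction mentioned in the comments above could look roughly like this plain-Scala sketch (extractKey is a hypothetical helper; a real version would be wrapped in a Spark UDF and would need to handle escaped quotes, nesting, and numeric values):

```scala
// Hypothetical schema-less extraction via regex, as suggested in the discussion.
// Handles only simple, unescaped string values of the form "KEY":"value".
def extractKey(json: String, key: String): Option[String] = {
  if (json == null) return None
  // Build a pattern matching "KEY" : "value"; capture the value.
  val pattern = ("\"" + key + "\"\\s*:\\s*\"([^\"]*)\"").r
  pattern.findFirstMatchIn(json).map(_.group(1))
}

println(extractKey("""{"NUMBER":"AW9F","NAME":"SACHIN"}""", "NAME")) // Some(SACHIN)
println(extractKey("""{"NUMBER":"AW9F"}""", "NAME"))                 // None
```

Because the set of keys varies per row, this approach simply returns None for missing keys, but it remains slower and far more fragile than from_json with a schema covering the superset of keys.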