【Question Title】: Extract all keys from Json object from Hadoop Table using Python Spark
【Posted】: 2020-01-30 16:29:14
【Question】:

I have a Hadoop table named table_with_json_string

For example:

+-----------------------------------+---------------------------------+
|      creation_date                |        json_string_colum        |
+-----------------------------------+---------------------------------+
| 2020-01-29                        |  "{keys : {1 : 'a', 2 : 'b' }}" |
+-----------------------------------+---------------------------------+

Desired output:

+-----------------------------------+----------------------------------+----------+
|      creation_date                |         json_string_colum        |   keys   |
+-----------------------------------+----------------------------------+----------+
| 2020-01-29                        |  "{keys : {1 : 'a', 2 : 'b' }}"  |    1     |
| 2020-01-29                        |  "{keys : {1 : 'a', 2 : 'b' }}"  |    2     |
+-----------------------------------+----------------------------------+----------+

What I have tried:

from pyspark.sql import functions as sf
from pyspark.sql import types as st

from pyspark.sql.functions import from_json, col,explode
from pyspark.sql.types import StructType, StructField, StringType,MapType

schema = StructType([StructField("keys",
                    MapType(StringType(),StringType()),True)])
df = spark.table('table_with_json_string').select(col("creation_date"),col("json_string_colum"))
df = df.withColumn("map_json_column", from_json("json_string_colum",schema))
df.show(1,False)
+--------------------+-------------------------------------+----------------------------------+
|       creation_date|        json_string_colum            |    map_json_column               |
+--------------------+-------------------------------------+----------------------------------+
|   2020-01-29       |     "{keys : {1 : 'a', 2 : 'b' }}"  |    [Map(1 ->'a',2 ->'b')]        |
+--------------------+-------------------------------------+----------------------------------+

1 - How can I extract the keys from this MapType object? I understand that I need to use the explode function to reach the table format I want, but I still don't know how to extract the keys of the JSON object as an array.

I am open to other approaches if they make it easier to achieve my goal.
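For reference, a minimal sketch of one such alternative, building on the df and schema from the attempt above (so this is an assumption about what another route could look like, not a confirmed answer): explode applied directly to a MapType column produces a key and a value column, from which the key can be selected.

from pyspark.sql import functions as sf

# Sketch only: explode the MapType column itself instead of extracting the keys first.
df_alt = (df
          .select("creation_date",
                  "json_string_colum",
                  # explode on a map yields two columns named key and value
                  sf.explode("map_json_column.keys"))
          .withColumnRenamed("key", "keys")
          .drop("value"))
df_alt.show(2, False)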

【Question Comments】:

    Tags: python hadoop pyspark apache-spark-sql pyspark-sql


    【Solution 1】:

    Building on what you have done so far, here is how you can get the keys:

    from pyspark.sql import functions as f
    df = (df
     .withColumn("map_json_column", f.from_json("json_string_colum",schema))
     .withColumn("keys", f.map_keys("map_json_column.keys"))
     .drop("map_json_column")
     .withColumn("keys", f.explode("keys"))
     )
    

    Result:

    +-------------+--------------------+----+
    |creation_date|   json_string_colum|keys|
    +-------------+--------------------+----+
    |   2020-01-29|{"keys" : {"1" : ...|   1|
    |   2020-01-29|{"keys" : {"1" : ...|   2|
    +-------------+--------------------+----+
    

    Here are the detailed steps that lead to the answer above:

    >>> from pyspark.sql import functions as f
    >>> df.show()
    +-------------+--------------------+
    |creation_date|   json_string_colum|
    +-------------+--------------------+
    |   2020-01-29|{"keys" : {"1" : ...|
    +-------------+--------------------+
    
    >>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).show()
    +-------------+--------------------+------------------+
    |creation_date|   json_string_colum|   map_json_column|
    +-------------+--------------------+------------------+
    |   2020-01-29|{"keys" : {"1" : ...|[[1 -> a, 2 -> b]]|
    +-------------+--------------------+------------------+
    
    >>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).show()
    +-------------+--------------------+------------------+------+
    |creation_date|   json_string_colum|   map_json_column|  keys|
    +-------------+--------------------+------------------+------+
    |   2020-01-29|{"keys" : {"1" : ...|[[1 -> a, 2 -> b]]|[1, 2]|
    +-------------+--------------------+------------------+------+
    
    >>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).drop("map_json_column").show()
    +-------------+--------------------+------+
    |creation_date|   json_string_colum|  keys|
    +-------------+--------------------+------+
    |   2020-01-29|{"keys" : {"1" : ...|[1, 2]|
    +-------------+--------------------+------+
    
    >>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).drop("map_json_column").withColumn("keys", f.explode("keys")).show()
    +-------------+--------------------+----+
    |creation_date|   json_string_colum|keys|
    +-------------+--------------------+----+
    |   2020-01-29|{"keys" : {"1" : ...|   1|
    |   2020-01-29|{"keys" : {"1" : ...|   2|
    +-------------+--------------------+----+
    

    To be clear, the map_keys function I used above is available in PySpark 2.3+.
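
    If you are on a version older than 2.3, one possible workaround is to parse the JSON string in a Python UDF and return the keys as an array before exploding. This is only a minimal sketch, assuming the stored strings are valid JSON (double-quoted field names, as in the show() output above); the helper name extract_keys is just illustrative.

    import json
    from pyspark.sql import functions as f
    from pyspark.sql import types as t

    # Parse the JSON string and return the keys of the top-level "keys" object as an array
    extract_keys = f.udf(lambda s: list(json.loads(s)["keys"].keys()),
                         t.ArrayType(t.StringType()))

    df_keys = (df
               .withColumn("keys", extract_keys(f.col("json_string_colum")))
               .withColumn("keys", f.explode("keys")))
    df_keys.show()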

    【Comments】:
