[Question Title]: Read one column as json strings and another as regular using pyspark dataframe
[Posted]: 2019-06-19 22:18:22
[Question Description]:

I have a dataframe like this:

col1    | col2        |
-----------------------
test:1  | {"test1:subtest1":[{"Id":"17","cName":"c1"}], "test1:subtest2":[{"Id":"01","cName":"c2"}]}
test:2  | {"test1:subtest2":[{"Id":"18","cName":"c13","pScore":0.00203}]}

I want output like this:

col1   | col2           | Id | cName | pScore  |
------------------------------------------------
test:1 | test1:subtest1 | 17 | c1    | null    | 
test:1 | test1:subtest2 | 01 | c2    | null    | 
test:2 | test1:subtest2 | 18 | c13   | 0.00203 | 

This is a follow-up to this question - Casting a column to JSON/dict and flattening JSON values in a column in pyspark

I am new to pyspark and would appreciate any help with this. I tried the solution given in that post, but it keeps giving me the error

TypeError: type object argument after ** must be a mapping, not list

I also tried the following:

test = sqlContext.read.json(df.rdd.map(lambda r: r.col2))

But this gives me output like:

test1:subtest1             | test1:subtest2                               |
----------------------------------------------------------------------------
[{"Id":"17","cName":"c1"}] | [{"Id":"01","cName":"c2"}]                   |
null                       | [{"Id":"18","cName":"c13","pScore":0.00203}] |

I am not sure how to join col1 back to the result above ^ to get the desired output.

Any help is much appreciated, thanks in advance!

[Question Discussion]:

    Tags: python pyspark pyspark-sql


    [Solution 1]:

    You can use the from_json() function. The key is to define the json_schema; you can create it manually, or, if you are using pyspark 2.4+, generate it with the function schema_of_json() (the code below was tested under pyspark 2.4.0):

    from pyspark.sql import functions as F
    
    # define all keys with a list:
    my_keys = ['test1:subtest1', 'test1:subtest2']
    
    # find a sample json_code for a single key with all sub-fields and then construct its json_schema
    key_schema = df.select(F.schema_of_json('{"test1:subtest1":[{"Id":"17","cName":"c1","pScore":0.00203}]}').alias('schema')).first().schema
    
    >>> key_schema
    u'struct<test1:subtest1:array<struct<Id:string,cName:string,pScore:double>>>'
    
    # use the above sample key_schema to create the json_schema for all keys
    schema = u'struct<' + ','.join([r'`{}`:array<struct<Id:string,cName:string,pScore:double>>'.format(k) for k in my_keys]) + r'>'
    
    >>> schema 
    u'struct<`test1:subtest1`:array<struct<Id:string,cName:string,pScore:double>>,`test1:subtest2`:array<struct<Id:string,cName:string,pScore:double>>>'
    

    Note: field names that contain special characters (such as :) have to be wrapped in backticks.
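
    If you are on a Spark version older than 2.4 where schema_of_json() is not available, a rough alternative is to build the same schema programmatically with StructType instead of a DDL string (a sketch assuming every key maps to an array of structs with Id, cName and pScore):

    from pyspark.sql.types import StructType, StructField, ArrayType, StringType, DoubleType

    # every key maps to an array of structs with the same three sub-fields
    element_type = ArrayType(StructType([
        StructField('Id', StringType()),
        StructField('cName', StringType()),
        StructField('pScore', DoubleType())
    ]))

    # one top-level field per key; no backticks are needed here because the
    # names are passed as plain Python strings rather than parsed from DDL
    struct_schema = StructType([StructField(k, element_type) for k in my_keys])

    # struct_schema can then be passed to from_json() in place of the DDL string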

    Once we have the schema, we can extract the json data from col2:

    df1 = df.withColumn('data', F.from_json('col2', schema)).select('col1', 'data.*')
    
    >>> df1.printSchema()
    root
     |-- col1: string (nullable = true)
     |-- test1:subtest1: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- Id: string (nullable = true)
     |    |    |-- cName: string (nullable = true)
     |    |    |-- pScore: double (nullable = true)
     |-- test1:subtest2: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- Id: string (nullable = true)
     |    |    |-- cName: string (nullable = true)
     |    |    |-- pScore: double (nullable = true)
    
    >>> df1.show(2,0)
    +------+--------------+--------------------+
    |col1  |test1:subtest1|test1:subtest2      |
    +------+--------------+--------------------+
    |test:1|[[17, c1,]]   |[[01, c2,]]         |
    |test:2|null          |[[18, c13, 0.00203]]|
    +------+--------------+--------------------+
    

    Then you can use select and union to normalize the dataframe:

    df_new = df1.select('col1', F.lit('test1:subtest1').alias('col2'), F.explode(F.col('test1:subtest1')).alias('arr')) \
                .union(
                    df1.select('col1', F.lit('test1:subtest2'), F.explode(F.col('test1:subtest2')))
               ).select('col1', 'col2', 'arr.*')  
    
    >>> df_new.show()
    +------+--------------+---+-----+-------+
    |  col1|          col2| Id|cName| pScore|
    +------+--------------+---+-----+-------+
    |test:1|test1:subtest1| 17|   c1|   null|
    |test:1|test1:subtest2| 01|   c2|   null|
    |test:2|test1:subtest2| 18|  c13|0.00203|
    +------+--------------+---+-----+-------+
    

    Using reduce()

    When there are many unique keys in the json strings, use the reduce function to create df_new:

    from functools import reduce     
    
    df_new = reduce(lambda x,y: x.union(y)
              , [ df1.select('col1', F.lit(k).alias('col2'), F.explode(F.col(k)).alias('arr')) for k in my_keys ]
             ).select('col1', 'col2', 'arr.*')
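
    If the keys are not known ahead of time, one possible way to derive my_keys from the data itself is to parse col2 as a map and collect the distinct key names (a sketch assuming all values share the array-of-struct shape used above):

    # parse col2 as a map so the key names become data, then collect the distinct keys
    map_schema = 'map<string,array<struct<Id:string,cName:string,pScore:double>>>'
    my_keys = sorted(
        r['k'] for r in df.select(
            F.explode(F.map_keys(F.from_json('col2', map_schema))).alias('k')
        ).distinct().collect()
    )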
    

    [Discussion]:

    • Beautiful! Thank you so much!