[Title]: Pyspark Expand One Row into Multiple Rows By Column Header
[Posted]: 2018-11-02 22:04:53
[Question]:

Suppose I have a dataframe with the following columns:

# id  | name  | 01-Jan-10 | 01-Feb-10 | ... | 01-Jan-11 | 01-Feb-11
# -----------------------------------------------------------------
# 1   | a001  |     0     |    32     | ... |     14    |    108
# 1   | a002  |    80     |     0     | ... |      0    |     92

I want to expand it into a table like this:

# id  | name  | Jan | Feb | ... | Year
# -----------------------------------
# 1   | a001  |   0 |  32 | ... | 2010
# 1   | a001  |  14 | 108 | ... | 2011
# 1   | a002  |  80 |   0 | ... | 2010
# 1   | a002  |   0 |  92 | ... | 2011

I want to split the dates into separate rows by year and capture each month's value.

How can I do this in pyspark (python + spark)? I have been trying to collect the df data so I can iterate over it and extract each field to write out each row, but I wonder if there is a smarter Spark function that can help with this. (New to Spark.)

[Question Comments]:

    Tags: python apache-spark pyspark


    [Solution 1]:

    First, melt the DataFrame (see: How to melt Spark DataFrame?):

    df = spark.createDataFrame(
        [(1, "a001", 0, 32, 14, 108), (2, "a02", 80, 0, 0, 92)],
        ("id", "name", "01-Jan-10", "01-Feb-10", "01-Jan-11", "01-Feb-11")
    )
    
    df_long = melt(df, df.columns[:2], df.columns[2:])
    
    # +---+----+---------+-----+
    # | id|name| variable|value|
    # +---+----+---------+-----+
    # |  1|a001|01-Jan-10|    0|
    # |  1|a001|01-Feb-10|   32|
    # |  1|a001|01-Jan-11|   14|
    # |  1|a001|01-Feb-11|  108|
    # |  2| a02|01-Jan-10|   80|
    # |  2| a02|01-Feb-10|    0|
    # |  2| a02|01-Jan-11|    0|
    # |  2| a02|01-Feb-11|   92|
    # +---+----+---------+-----+
    

    Next, parse the dates and extract the year and month:

    from pyspark.sql.functions import to_date, date_format, year
    
    date = to_date("variable", "dd-MMM-yy")
    
    parsed = df_long.select(
        "id", "name", "value", 
        year(date).alias("year"), date_format(date, "MMM").alias("month")
    )
    
    # +---+----+-----+----+-----+
    # | id|name|value|year|month|
    # +---+----+-----+----+-----+
    # |  1|a001|    0|2010|  Jan|
    # |  1|a001|   32|2010|  Feb|
    # |  1|a001|   14|2011|  Jan|
    # |  1|a001|  108|2011|  Feb|
    # |  2| a02|   80|2010|  Jan|
    # |  2| a02|    0|2010|  Feb|
    # |  2| a02|    0|2011|  Jan|
    # |  2| a02|   92|2011|  Feb|
    # +---+----+-----+----+-----+
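    As an aside, Spark's `dd-MMM-yy` pattern (Java-style pattern letters) corresponds to `%d-%b-%y` in Python's `strptime`; the semantics of the column headers can be sanity-checked in plain Python (assuming a default English locale for the abbreviated month name):

```python
from datetime import datetime

# "dd-MMM-yy" in Spark ~ "%d-%b-%y" in strptime:
# two-digit day, abbreviated month name, two-digit year.
parsed_header = datetime.strptime("01-Feb-10", "%d-%b-%y")
print(parsed_header.year, parsed_header.strftime("%b"))  # 2010 Feb
```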
    

    Finally, pivot (see: How to pivot Spark DataFrame?):

    # Providing a list of levels is not required but will make the process faster
    # months = [
    #     "Jan", "Feb", "Mar", "Apr", "May", "Jun", 
    #     "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
    # ]
    
    months = ["Jan", "Feb"]
    
    parsed.groupBy("id", "name", "year").pivot("month", months).sum("value")
    
    # +---+----+----+---+---+       
    # | id|name|year|Feb|Jan|
    # +---+----+----+---+---+
    # |  2| a02|2011| 92|  0|
    # |  1|a001|2010| 32|  0|
    # |  1|a001|2011|108| 14|
    # |  2| a02|2010|  0| 80|
    # +---+----+----+---+---+
    

    [Discussion]:

    • Perfect, thank you! I added .orderBy("id", "year") at the end and got it looking exactly the way I needed.