[Posted] 2021-11-19 16:19:36
[Question]:
Question: Given the PySpark dataframe below, is it possible to check row by row, using a window function, whether "some_value" actually increased compared to the previous row (see the example below)?
A solution without lag is preferred, because I will have several columns like "some_value" and I don't know in advance how many there are or what their exact names will be.
Example: here I would like to obtain a column like "FLAG_INCREASE".
+---+----------+---+----------+-------------+
| id|     datum|lfd|some_value|FLAG_INCREASE|
+---+----------+---+----------+-------------+
|  1|2015-01-01|  4|      20.0|            0|
|  1|2015-01-06|  3|      10.0|            0|
|  1|2015-01-07|  2|      25.0|            1|
|  1|2015-01-12|  1|      30.0|            1|
|  2|2015-01-01|  4|       5.0|            0|
|  2|2015-01-06|  3|      30.0|            1|
|  2|2015-01-12|  1|      20.0|            0|
+---+----------+---+----------+-------------+
Code:
import pyspark.sql.functions as F
from pyspark.sql.window import Window
from pyspark.sql import Row
row = Row("id", "datum", "lfd", "some_value", "some_value2")
df = spark.sparkContext.parallelize([
    row(1, "2015-01-01", 4, 20.0, 20.0),
    row(1, "2015-01-06", 3, 10.0, 20.0),
    row(1, "2015-01-07", 2, 25.0, 20.0),
    row(1, "2015-01-12", 1, 30.0, 20.0),
    row(2, "2015-01-01", 4, 5.0, 20.0),
    row(2, "2015-01-06", 3, 30.0, 20.0),
    row(2, "2015-01-12", 1, 20.0, 20.0)
]).toDF().withColumn("datum", F.col("datum").cast("date"))
+---+----------+---+----------+
| id| datum|lfd|some_value|
+---+----------+---+----------+
| 1|2015-01-01| 4| 20.0|
| 1|2015-01-06| 3| 10.0|
| 1|2015-01-07| 2| 25.0|
| 1|2015-01-12| 1| 30.0|
| 2|2015-01-01| 4| 5.0|
| 2|2015-01-06| 3| 30.0|
| 2|2015-01-12| 1| 20.0|
+---+----------+---+----------+
[Discussion]:
Tags: python apache-spark pyspark data-processing