【Posted】: 2021-09-05 15:18:03
【Question】:
I'm new to Spark and still learning.
I have this Spark DataFrame. I want to order by date and get the latest record within each group of "ID1", "ID2" and "record_type".
My input looks like this:
from pyspark.sql import functions as F
from pyspark.sql.functions import col, last, when
from pyspark.sql.window import Window

# assumes an existing SparkSession named `spark`
data = [
    ("ACC.PXP", "7246", "2018-10-18T16:20:00", "Hospital", None, "IN"),
    ("ACC.PXP", "7246", "2018-10-18T16:20:00", None, "Foundation", "IN"),
    ("ACC.PXP", "7246", "2018-11-10T00:00:00", "Hospital", "Foundation", "IN"),
    ("ACC.PXP", "7246", "2018-11-11T00:00:00", None, "Washington", "OUT"),
    ("ACC.PXP", "7246", "2018-11-12T00:00:00", "Hospital", None, "OUT"),
    ("ACC.PXP", "7246", "2018-11-15T04:00:00", "Home", None, "IN"),
    ("ACC.PXP", "7246", "2018-11-15T04:00:00", "Home", None, "IN"),
    ("ACC.PXP", "7246", "2020-12-04T15:00:00", "Care", "Betel", "OUT"),
    ("ACC.PXP", "7246", "2020-13-04T15:00:00", "Care", None, "OUT"),
]
df = spark.createDataFrame(
    data=data, schema=["ID1", "ID2", "date", "type", "name", "record_type"]
)
df.orderBy(F.col("date")).show(truncate=False)
+-------+----+-------------------+--------+----------+-----------+
|ID1 |ID2 |date |type |name |record_type|
+-------+----+-------------------+--------+----------+-----------+
|ACC.PXP|7246|2018-10-18T16:20:00|null |Foundation|IN |
|ACC.PXP|7246|2018-10-18T16:20:00|Hospital|null |IN |
|ACC.PXP|7246|2018-11-10T00:00:00|Hospital|Foundation|IN |
|ACC.PXP|7246|2018-11-11T00:00:00|null |Washington|OUT |
|ACC.PXP|7246|2018-11-12T00:00:00|Hospital|null |OUT |
|ACC.PXP|7246|2018-11-15T04:00:00|Home |null |IN |
|ACC.PXP|7246|2018-11-15T04:00:00|Home |null |IN |
|ACC.PXP|7246|2020-12-04T15:00:00|Care |Betel |OUT |
|ACC.PXP|7246|2020-13-04T15:00:00|Care |null |OUT |
+-------+----+-------------------+--------+----------+-----------+
... and my expected output would look like this:
data2 = [
    ("ACC.PXP", "7246", "2018-11-10T00:00:00", "Hospital", "Foundation", "IN"),
    ("ACC.PXP", "7246", "2018-11-12T00:00:00", "Hospital", "Washington", "OUT"),
    ("ACC.PXP", "7246", "2018-11-15T04:00:00", "Home", None, "IN"),
    ("ACC.PXP", "7246", "2020-13-04T15:00:00", "Care", "Betel", "OUT"),
]
sdf = spark.createDataFrame(
    data=data2, schema=["ID1", "ID2", "date", "type", "name", "record_type"]
)
sdf.orderBy(F.col("date")).show(truncate=False)
+-------+----+-------------------+--------+----------+-----------+
|ID1 |ID2 |date |type |name |record_type|
+-------+----+-------------------+--------+----------+-----------+
|ACC.PXP|7246|2018-11-10T00:00:00|Hospital|Foundation|IN |
|ACC.PXP|7246|2018-11-12T00:00:00|Hospital|Washington|OUT |
|ACC.PXP|7246|2018-11-15T04:00:00|Home |null |IN |
|ACC.PXP|7246|2020-13-04T15:00:00|Care |Betel |OUT |
+-------+----+-------------------+--------+----------+-----------+
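For reference, the usual "latest record per group" pattern I know is a row_number over a window ordered by date descending; a minimal sketch (not the exact code I tried, and the names w_latest/rn are just illustrative):

w_latest = Window.partitionBy("ID1", "ID2", "record_type").orderBy(F.desc("date"))
latest_only = (
    df.withColumn("rn", F.row_number().over(w_latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

On this data, though, that keeps only one IN and one OUT per ID1/ID2 overall, which is not what the expected output above shows: I want the latest record of every consecutive run of INs or OUTs.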
I tried the following, and it seems to work on this sample dataset. However, when I tested it on the real data, the logic seems to keep only one "IN" and one "OUT" record. Any input is much appreciated.
# NOTE: w1 is referenced below but its definition is not included in this snippet
w2 = Window.partitionBy("ID1", "ID2", "type", "date").orderBy(F.desc("date"))
w3 = Window.partitionBy("ID1", "ID2", "type").orderBy(F.asc("date"))
w4 = Window.partitionBy("ID1", "ID2", "type").orderBy(F.desc("date"))

# First pass: fill null type/name, then keep one row per ID1/ID2/type/date
df1 = (
    df.withColumn(
        "type",
        when(col("type").isNotNull(), col("type")).otherwise(
            last("type", True).over(w1)
        ),
    )
    .withColumn(
        "name",
        when(col("name").isNotNull(), col("name")).otherwise(
            last("name", True).over(w1)
        ),
    )
    .withColumn("row_number", F.row_number().over(w2))
    .filter(F.col("row_number") == 1)
    .drop("row_number")
)

# Second pass: fill again over w3, then keep the latest row per ID1/ID2/type
df2 = (
    df1.withColumn(
        "type",
        when(col("type").isNotNull(), col("type")).otherwise(
            last("type", True).over(w3)
        ),
    )
    .withColumn(
        "name",
        when(col("name").isNotNull(), col("name")).otherwise(
            F.last("name", True).over(w3)
        ),
    )
    .withColumn("GroupingSeq", F.row_number().over(w4))
    .filter(F.col("GroupingSeq") == 1)
    .drop("GroupingSeq")
)
df2.orderBy(F.asc("date")).show()
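Based on the comments below, what I think I actually need is a gaps-and-islands style grouping: flag the rows where record_type changes from the previous row (per ID1/ID2, ordered by date), build a running group id from those flags, fill the null type/name values within each group, and keep the latest row of every group. The sketch below (column names chg/grp/rn are only illustrative) might be closer to that, though I have not verified it against the real data:

w_order = Window.partitionBy("ID1", "ID2").orderBy("date")

# 1 where record_type differs from the previous row (the first row counts as a change)
flagged = df.withColumn(
    "chg",
    F.when(
        F.lag("record_type").over(w_order).isNull()
        | (F.lag("record_type").over(w_order) != F.col("record_type")),
        1,
    ).otherwise(0),
)

# Running sum of the change flags gives one group id per consecutive IN/OUT run
grouped = flagged.withColumn("grp", F.sum("chg").over(w_order))

# Fill null type/name from any non-null value within the same run,
# then keep only the latest row of each run
w_fill = (
    Window.partitionBy("ID1", "ID2", "grp")
    .orderBy("date")
    .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
)
w_pick = Window.partitionBy("ID1", "ID2", "grp").orderBy(F.desc("date"))

result = (
    grouped.withColumn("type", F.last("type", ignorenulls=True).over(w_fill))
    .withColumn("name", F.last("name", ignorenulls=True).over(w_fill))
    .withColumn("rn", F.row_number().over(w_pick))
    .filter(F.col("rn") == 1)
    .drop("rn", "chg", "grp")
)
result.orderBy("date").show(truncate=False)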
【Comments】:
- Have you considered using a groupBy approach?
- I did, @Onyambu. The problem here is that I have multiple IN and OUT records ordered by date, and I want to capture the latest record for each consecutive group of INs and OUTs.
Tags: python-3.x apache-spark pyspark apache-spark-sql bigdata