You can combine groupBy with the aggregate function first, setting the flag ignorenulls to True:
import pyspark.sql.functions as F
from pyspark.sql import SparkSession

# 'spark' must be an active session; create one if needed
spark = SparkSession.builder.getOrCreate()
data = [
    {"Car": 1, "Time": 1, "Val1": None, "Val2": 1.5, "Val3": None},
    {"Car": 1, "Time": 1, "Val1": 3.5, "Val2": None, "Val3": None},
    {"Car": 1, "Time": 1, "Val1": None, "Val2": None, "Val3": 3.4},
    {"Car": 1, "Time": 2, "Val1": 2.5, "Val2": None, "Val3": None},
    {"Car": 1, "Time": 2, "Val1": None, "Val2": 6.0, "Val3": None},
    {"Car": 1, "Time": 2, "Val1": None, "Val2": None, "Val3": 7.3},
    {"Car": 2, "Time": 3, "Val1": None, "Val2": None, "Val3": 9.2},
]
df = spark.createDataFrame(data)
df.groupBy("Car", "Time").agg(
    F.first("Val1", ignorenulls=True).alias("Val1"),
    F.first("Val2", ignorenulls=True).alias("Val2"),
    F.first("Val3", ignorenulls=True).alias("Val3"),
).show()

I added an extra row just to check the behavior for a group with only a single entry, and it looks fine to me.

The output is:

+---+----+----+----+----+
|Car|Time|Val1|Val2|Val3|
+---+----+----+----+----+
|  1|   1| 3.5| 1.5| 3.4|
|  1|   2| 2.5| 6.0| 7.3|
|  2|   3|null|null| 9.2|
+---+----+----+----+----+
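For intuition, F.first(col, ignorenulls=True) just picks the first non-null value within each group (or null if the group has none). A plain-Python sketch of that semantics on the same data, runnable without Spark (the helper name first_ignorenulls is mine, not a Spark API):

```python
from collections import defaultdict

rows = [
    {"Car": 1, "Time": 1, "Val1": None, "Val2": 1.5, "Val3": None},
    {"Car": 1, "Time": 1, "Val1": 3.5, "Val2": None, "Val3": None},
    {"Car": 1, "Time": 1, "Val1": None, "Val2": None, "Val3": 3.4},
    {"Car": 1, "Time": 2, "Val1": 2.5, "Val2": None, "Val3": None},
    {"Car": 1, "Time": 2, "Val1": None, "Val2": 6.0, "Val3": None},
    {"Car": 1, "Time": 2, "Val1": None, "Val2": None, "Val3": 7.3},
    {"Car": 2, "Time": 3, "Val1": None, "Val2": None, "Val3": 9.2},
]

def first_ignorenulls(values):
    # mimic F.first(col, ignorenulls=True): first non-null value, else None
    return next((v for v in values if v is not None), None)

# group rows by the (Car, Time) key, like df.groupBy("Car", "Time")
groups = defaultdict(list)
for r in rows:
    groups[(r["Car"], r["Time"])].append(r)

# collapse each group to the first non-null value per column
result = {
    key: {c: first_ignorenulls([r[c] for r in grp]) for c in ("Val1", "Val2", "Val3")}
    for key, grp in groups.items()
}
```

This reproduces the table above: group (1, 1) collapses to (3.5, 1.5, 3.4), and the single-row group (2, 3) keeps its nulls where no value exists.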