[Posted]: 2021-05-21 19:09:55
[Question]:
Here is a sample of my Spark df:
+------------+-------------------+-------------------+
| OT| Fecha_Fst| Fecha_Lst|
+------------+-------------------+-------------------+
|712268242652|2021-01-30 14:43:00|2021-02-03 13:03:00|
|712268243525|2021-01-30 14:27:00|2021-02-03 14:50:00|
|712268243831|2021-02-02 21:23:00|2021-02-08 17:39:00|
|712268244225|2021-02-01 07:26:00|2021-02-09 11:22:00|
|712268244951|2021-02-01 07:25:00|2021-02-05 16:07:00|
|712268245076|2021-02-01 07:26:00|2021-02-06 13:22:00|
|712268245651|2021-01-28 16:49:00|2021-02-04 13:31:00|
|712268246782|2021-02-01 07:26:00|2021-02-05 12:24:00|
|712268247644|2021-02-02 18:20:00|2021-02-05 16:12:00|
|712268247681|2021-02-09 05:03:00|2021-02-15 14:16:00|
|712268247751|2021-02-02 15:42:00|2021-02-05 13:27:00|
|712268247854|2021-01-30 14:34:00|2021-01-30 14:34:00|
|712268248775|2021-02-02 15:42:00|2021-02-05 12:42:00|
|712268249173|2021-02-02 15:42:00|2021-02-05 15:51:00|
|712268249873|2021-02-02 09:05:00|2021-02-05 19:36:00|
|712268249884|2021-02-02 08:53:00|2021-02-05 19:36:00|
|712268249895|2021-02-02 08:14:00|2021-02-05 19:36:00|
|712268249906|2021-02-02 09:06:00|2021-02-05 19:36:00|
|712268249910|2021-02-02 08:53:00|2021-02-05 19:36:00|
|712268250186|2021-02-02 15:42:00|2021-02-05 18:59:00|
+------------+-------------------+-------------------+
I found this snippet online:
import calendar
import datetime

a = "2021-02-10T23:59:00.000+0000"
b = "2021-03-20T23:59:00.000+0000"
week = {}  # module-level dict, so repeated calls keep accumulating counts

def weekday_count(start, end):
    start_date = datetime.datetime.strptime(start, "%Y-%m-%dT%H:%M:%S.%f%z")
    end_date = datetime.datetime.strptime(end, "%Y-%m-%dT%H:%M:%S.%f%z")
    for i in range((end_date - start_date).days):
        day = calendar.day_name[(start_date + datetime.timedelta(days=i + 1)).weekday()]
        week[day] = week[day] + 1 if day in week else 1
    return week["Sunday"] + week["Saturday"]

print(weekday_count(a, b))
11
It works fine locally and gives me what I want, but I can't use it on my Spark df. I've tried many variations and always end up with errors like this one:
df = df.withColumn("Number", weekday_count(f.col("Fecha_Fst"),f.col("Fecha_Lst")))
TypeError: strptime() argument 1 must be str, not Column
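The error comes from withColumn handing the function Column expressions rather than per-row strings, so strptime never sees a str. A minimal sketch of wrapping the same weekend-counting loop in a UDF, assuming Fecha_Fst/Fecha_Lst are string columns in the yyyy-MM-dd HH:mm:ss format shown above (if they are already timestamp columns, the strptime calls would be unnecessary), with the result column named Days as in the desired output below:
import calendar
import datetime

from pyspark.sql import functions as f
from pyspark.sql.types import IntegerType

# Hypothetical UDF wrapper; Spark calls it once per row with plain Python values.
@f.udf(IntegerType())
def weekend_days(start, end):
    fmt = "%Y-%m-%d %H:%M:%S"  # assumed format, matching the df sample above
    start_date = datetime.datetime.strptime(start, fmt)
    end_date = datetime.datetime.strptime(end, fmt)
    count = 0
    for i in range((end_date - start_date).days):
        day = calendar.day_name[(start_date + datetime.timedelta(days=i + 1)).weekday()]
        if day in ("Saturday", "Sunday"):
            count += 1
    return count

df = df.withColumn("Days", weekend_days(f.col("Fecha_Fst"), f.col("Fecha_Lst")))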
如果我使用 lambda:
def weekday_count(start, end):
    start_date = lambda start: datetime.datetime.strptime(start, "%Y-%m-%dT%H:%M:%S.%f%z")
    end_date = lambda end: datetime.datetime.strptime(end, "%Y-%m-%dT%H:%M:%S.%f%z")
    for i in range((end_date - start_date).days):
        day = calendar.day_name[(start_date + datetime.timedelta(days=i + 1)).weekday()]
        week[day] = week[day] + 1 if day in week else 1
    return week["Sunday"] + week["Saturday"]

df = df.withColumn("Number", weekday_count(f.col("Fecha_Fst"), f.col("Fecha_Lst")))
TypeError: unsupported operand type(s) for -: 'function' and 'function'
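I suspect this second error has nothing to do with Spark: start_date and end_date are just lambda objects that are never called, so subtracting one from the other fails on its own:
# standalone illustration of the same TypeError
start_date = lambda s: s
end_date = lambda e: e
end_date - start_date  # TypeError: unsupported operand type(s) for -: 'function' and 'function'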
And so on... I've tried lots of variations today, but never got the output I'm after:
+------------+-------------------+-------------------+-----------+
| OT| Fecha_Fst| Fecha_Lst| Days|
+------------+-------------------+-------------------+-----------+
|712268242652|2021-01-30 14:43:00|2021-02-03 13:03:00| 2|
|712268243831|2021-02-02 21:23:00|2021-02-08 17:39:00| 2|
|712268244225|2021-02-01 07:26:00|2021-02-09 11:22:00| 2|
|712268244951|2021-02-01 07:25:00|2021-02-05 16:07:00| 0|
|712268247681|2021-02-09 05:03:00|2021-02-15 14:16:00| 2|
|712268247854|2021-01-30 14:34:00|2021-01-30 14:34:00| 1|
|712268248775|2021-02-02 15:42:00|2021-02-05 12:42:00| 0|
|712268249173|2021-02-02 15:42:00|2021-02-05 15:51:00| 0|
|712268249873|2021-02-02 09:05:00|2021-02-05 19:36:00| 0|
+------------+-------------------+-------------------+-----------+
I'm confused about how to build new columns with the pyspark library. I used to do this with pandas, but I'm currently working in an Azure Databricks environment and pandas is very slow there.
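For reference, a minimal native-Spark sketch of the same idea without any Python UDF, assuming Spark 2.4+ (for sequence and the filter higher-order function) and that Fecha_Fst/Fecha_Lst are timestamp or date-parseable columns; it lists the dates strictly after the start date up to the end date and counts those falling on Saturday or Sunday:
from pyspark.sql import functions as f

# Sketch only: dayofweek() returns 1 for Sunday and 7 for Saturday.
# Rows where both timestamps fall on the same date get 0 here.
df = df.withColumn(
    "Days",
    f.expr("""
        CASE WHEN to_date(Fecha_Lst) > to_date(Fecha_Fst)
             THEN size(filter(
                      sequence(date_add(to_date(Fecha_Fst), 1), to_date(Fecha_Lst)),
                      d -> dayofweek(d) IN (1, 7)))
             ELSE 0 END
    """),
)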
[Comments]:
Tags: python apache-spark pyspark apache-spark-sql