[Posted]: 2020-12-13 19:54:59
[Question]:
Suppose I have a PySpark DataFrame (df1) containing some information about users:
+--------+--------+--------+--------+
|user_id |event_id|code    |City    |
+--------+--------+--------+--------+
| user1  | event1 | ABC    | LA     |
| user1  | event2 | ABC    | NYC    |
| user2  | event3 | DEF    | LA     |
| user2  | event4 | GHK    | LA     |
| user3  | event5 | DEF    | NYC    |
| user3  | event6 | DEF    | NYC    |
| user3  | event7 | ABC    | LA     |
+--------+--------+--------+--------+
In this DataFrame we have duplicate user_ids, but the event_ids are unique across the dataset. Also, each user's code and City can be the same or different from event to event. Based on the table above, I also have another PySpark DataFrame (df2) like this:
+----------+----------+------------+
|event_id1 |event_id2 | user_match |
+----------+----------+------------+
| event1   | event2   | True       |
| event1   | event4   | False      |
| event2   | event3   | False      |
| event2   | event7   | False      |
| event5   | event6   | True       |
| event6   | event1   | False      |
+----------+----------+------------+
As you can see, I don't have all the combinations. The goal is to extract features based on their code and City in this way (in order to detect matching users):
+----------+----------+------------+--------+--------+
|event_id1 |event_id2 | user_match |code    |City    |
+----------+----------+------------+--------+--------+
| event1   | event2   | True       | True   | False  |
| event1   | event4   | False      | False  | True   |
| event2   | event3   | False      | False  | False  |
| event2   | event7   | False      | True   | False  |
| event5   | event6   | True       | True   | True   |
| event6   | event1   | False      | False  | False  |
+----------+----------+------------+--------+--------+
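For reproducibility, here is a minimal sketch of how the two input frames could be built, taken directly from the tables above (it assumes an active SparkSession named spark):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# df1: one row per event, with the user's code and City
df1 = spark.createDataFrame(
    [("user1", "event1", "ABC", "LA"),
     ("user1", "event2", "ABC", "NYC"),
     ("user2", "event3", "DEF", "LA"),
     ("user2", "event4", "GHK", "LA"),
     ("user3", "event5", "DEF", "NYC"),
     ("user3", "event6", "DEF", "NYC"),
     ("user3", "event7", "ABC", "LA")],
    ["user_id", "event_id", "code", "City"])

# df2: the labeled event pairs
df2 = spark.createDataFrame(
    [("event1", "event2", True),
     ("event1", "event4", False),
     ("event2", "event3", False),
     ("event2", "event7", False),
     ("event5", "event6", True),
     ("event6", "event1", False)],
    ["event_id1", "event_id2", "user_match"])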
I implemented this in PySpark using Pandas, but I would like to know how to write it using only the PySpark API:
%spark2.pyspark
import numpy as np

# pdf1 / pdf2 are assumed to be the Pandas versions of df1 / df2
# (e.g. pdf1 = df1.toPandas())

# select all or part of the training pairs
num_train_samples = pdf2.shape[0]
feats_train_array = pdf2[0:num_train_samples].to_numpy()
# temporary column holding the current feature
feats = np.zeros((num_train_samples, 1))
# attributes to compare between the two events of each pair
feats_titles = ["code", "City"]
# extract features: 1 if both events share the attribute value, else 0
for fvar in feats_titles:
    for i in range(num_train_samples):
        # look up the df1 rows belonging to the two events of the pair
        info_pair0 = pdf1.loc[pdf1['event_id'] == pdf2.iloc[i, 0]]
        info_pair1 = pdf1.loc[pdf1['event_id'] == pdf2.iloc[i, 1]]
        # compare the attribute values of the two events
        feats_pair0 = info_pair0[fvar].reset_index(drop=True).iloc[0]
        feats_pair1 = info_pair1[fvar].reset_index(drop=True).iloc[0]
        feats[i] = 1 if feats_pair0 == feats_pair1 else 0
    feats_train_array = np.append(feats_train_array, feats, axis=1)
I think this would be simpler code with the PySpark API, but I can't figure it out.
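My rough idea is something along these lines: join df1 onto df2 twice, once per event column, and then compare the paired attributes. This is only a sketch of that direction (assuming the df1 and df2 defined above), and I'm not sure it's the idiomatic way:

from pyspark.sql import functions as F

# attributes of the first event of each pair
df1_a = df1.select(F.col("event_id").alias("event_id1"),
                   F.col("code").alias("code1"),
                   F.col("City").alias("City1"))
# attributes of the second event of each pair
df1_b = df1.select(F.col("event_id").alias("event_id2"),
                   F.col("code").alias("code2"),
                   F.col("City").alias("City2"))

# join both onto the pairs and compare the attributes column-wise
result = (df2.join(df1_a, "event_id1")
             .join(df1_b, "event_id2")
             .withColumn("code", F.col("code1") == F.col("code2"))
             .withColumn("City", F.col("City1") == F.col("City2"))
             .select("event_id1", "event_id2", "user_match", "code", "City"))
result.show()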
[Discussion]:
Tags: pyspark feature-extraction