【Posted】: 2017-07-20 14:07:00
【Question】:
The goal is to build a large dataframe that I can run operations on, such as averaging each row across columns (a small sketch of that operation follows the sample below).
The problem is that each iteration gets slower as the dataframe grows, so I can never finish the computation.
Note: my df has only two columns, where col1 is unnecessary, hence why I join on it. col1 is a string and col2 is a float. The row count is 3k. Here is a sample:
folder_paths float
folder/Path 1.12630137
folder/Path2 1.067517426
folder/Path3 1.06443264
folder/Path4 1.049119625
folder/Path5 1.039635769
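For reference, the per-row averaging I mean looks like this on a toy version of the merged frame (column names here are hypothetical):
import pandas as pd

# Toy stand-in for the merged result: one key column plus one float
# column per run.
df = pd.DataFrame({
    'folder_paths': ['folder/Path', 'folder/Path2'],
    'run_1': [1.126, 1.068],
    'run_2': [1.130, 1.065],
})

# Row-wise mean over the numeric columns only.
df['row_mean'] = df[['run_1', 'run_2']].mean(axis=1)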
Question: Any ideas on how to make the code more efficient, and on where the bottleneck is? Also, I am not sure whether merge is the right approach.
Current idea: One solution I am considering is to preallocate the memory and specify the column dtypes: col1 is a string and col2 is a float (a minimal sketch of this idea follows the current code below).
import pandas as pd

df = pd.DataFrame()  # create an empty data frame
for i in range(1000):
    if i == 0:  # was `if i is 0:`; identity checks on ints are unreliable, use ==
        df = generate_new_df(arg1, arg2)
    else:
        df = pd.merge(df, generate_new_df(arg1, arg2), on='col1', how='outer')
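A minimal sketch of the preallocation idea, assuming every file lists the same col1 values in the same order (sizes and column names below are assumptions); each iteration then fills one column in place instead of growing the frame:
import numpy as np
import pandas as pd

n_rows, n_runs = 3000, 1000  # assumed sizes from the question
# Allocate the whole float block once, with an explicit dtype.
out = pd.DataFrame(
    np.empty((n_rows, n_runs), dtype='float64'),
    columns=[f'run_{i}' for i in range(n_runs)],
)
# Inside the loop, write column i in place:
# out[f'run_{i}'] = new_floats  # new_floats: a length-3000 float array
If the row order differs between files, the frames still need to be aligned on col1 first, so this only removes the copying cost, not the alignment problem.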
I also tried pd.concat, but the results are very similar: the time increases after every iteration:
df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)
Results with pd.concat:
run 1   time 0.34s
run 2   time 0.34s
run 3   time 0.32s
run 4   time 0.33s
run 5   time 0.42s
run 6   time 0.41s
run 7   time 0.45s
run 8   time 0.46s
run 9   time 0.54s
run 10  time 0.58s
run 11  time 0.73s
run 12  time 0.72s
run 13  time 0.79s
run 14  time 0.87s
run 15  time 0.95s
run 16  time 1.06s
run 17  time 1.19s
run 18  time 1.24s
run 19  time 1.37s
run 20  time 1.57s
run 21  time 1.68s
run 22  time 1.93s
run 23  time 1.86s
run 24  time 1.96s
run 25  time 2.11s
run 26  time 2.32s
run 27  time 2.42s
run 28  time 2.57s
Using a dfList and a single pd.concat over the list produced similar results. The code and results are below.
dfList = []
for i in range(1000):
    dfList.append(generate_new_df(arg1, arg2))
df = pd.concat(dfList, axis=1)
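One caveat with this version (an assumption on my part about what generate_new_df returns): if each frame still carries col1 as an ordinary column, pd.concat(..., axis=1) pastes the frames side by side by row position rather than aligning rows on col1, and leaves a thousand duplicate col1 columns behind. Setting the key as the index first makes the single concat behave like the outer merge:
import pandas as pd

# Hypothetical stand-in for generate_new_df: a string key column plus
# one uniquely named float column per file.
def generate_new_df(i):
    return pd.DataFrame({
        'col1': [f'folder/Path{j}' for j in range(3000)],
        f'float_{i}': [1.0 + j / 1000.0 for j in range(3000)],
    })

# Index every frame on the join key, then concatenate once; with a
# shared index, pd.concat(axis=1) aligns rows on col1 (outer join),
# matching the result of the repeated pd.merge loop.
dfList = [generate_new_df(i).set_index('col1') for i in range(1000)]
df = pd.concat(dfList, axis=1).reset_index()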
Results:
run 1 took 0.35 sec.
run 2 took 0.26 sec.
run 3 took 0.3 sec.
run 4 took 0.33 sec.
run 5 took 0.45 sec.
run 6 took 0.49 sec.
run 7 took 0.54 sec.
run 8 took 0.51 sec.
run 9 took 0.51 sec.
run 10 took 1.06 sec.
run 11 took 1.74 sec.
run 12 took 1.47 sec.
run 13 took 1.25 sec.
run 14 took 1.04 sec.
run 15 took 1.26 sec.
run 16 took 1.35 sec.
run 17 took 1.7 sec.
run 18 took 1.73 sec.
run 19 took 6.03 sec.
run 20 took 1.63 sec.
run 21 took 1.93 sec.
run 22 took 1.84 sec.
run 23 took 2.25 sec.
run 24 took 2.65 sec.
run 25 took 6.84 sec.
run 26 took 2.88 sec.
run 27 took 2.58 sec.
run 28 took 2.81 sec.
run 29 took 2.84 sec.
run 30 took 2.99 sec.
run 31 took 3.12 sec.
run 32 took 3.48 sec.
run 33 took 3.35 sec.
run 34 took 3.6 sec.
run 35 took 4.0 sec.
run 36 took 4.41 sec.
run 37 took 4.88 sec.
run 38 took 4.92 sec.
run 39 took 4.78 sec.
run 40 took 5.02 sec.
run 41 took 5.32 sec.
run 42 took 5.31 sec.
run 43 took 5.78 sec.
run 44 took 5.77 sec.
run 45 took 6.15 sec.
run 46 took 6.4 sec.
run 47 took 6.84 sec.
run 48 took 7.08 sec.
run 49 took 7.48 sec.
run 50 took 7.91 sec.
【Comments】:
-
Why merge 1000 times?
-
Well, each new dataframe comes from a separate csv file, so I generate a df holding just the information I need; there are roughly 1k to 10k such csv files.
-
What format are these files, and why do you need to merge them rather than concatenate?
-
As I wrote in the question, I am not sure merge is the best approach, so whichever is faster and gets the job done is fine. The format is: column1 (string) and column2 (float), with roughly 3k rows per file.
-
Take a look at the answer to a similar question here. It simulates the merge by concatenating in batches, which is much more efficient. (Disclaimer: it's my answer)
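For completeness, a rough sketch of that batched idea (batch size and helper names are my assumptions, not the linked answer's exact code): do the pairwise merges only within small batches, then merge the batch results, so no single merge ever has to re-copy the full accumulated frame.
import pandas as pd
from functools import reduce

# Hypothetical per-file frame, shaped like the question's data: a
# string key plus one uniquely named float column.
def generate_new_df(i):
    return pd.DataFrame({
        'col1': [f'folder/Path{j}' for j in range(3000)],
        f'float_{i}': [1.0 + j / 1000.0 for j in range(3000)],
    })

def merge_frames(frames):
    # Pairwise outer merges, but only ever within one small batch.
    return reduce(lambda l, r: pd.merge(l, r, on='col1', how='outer'), frames)

all_frames = [generate_new_df(i) for i in range(1000)]
batch_size = 32  # assumed; worth tuning
batches = [merge_frames(all_frames[i:i + batch_size])
           for i in range(0, len(all_frames), batch_size)]
df = merge_frames(batches)  # far fewer large merges than the naive loop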