[Question Title]: How to efficiently join/merge/concatenate a large data frame in pandas?
[Posted]: 2017-07-20 14:07:00
[Question Description]:

The goal is to create one large data frame on which I can perform operations, such as averaging each row across the columns, etc.

The problem is that as the data frame grows, each iteration also takes longer, so I never manage to finish the computation.

Note: my df has only two columns, of which col1 is not needed on its own, which is why I join on it. col1 is a string and col2 is a float. The row count is about 3k. Here is an example:

folder_paths    float
folder/Path     1.12630137
folder/Path2    1.067517426
folder/Path3    1.06443264
folder/Path4    1.049119625
folder/Path5    1.039635769
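
For reference, the row-wise averaging mentioned above would presumably end up as something like the following once everything is combined into one wide frame (a sketch only; merged_df is a placeholder name, and folder_paths is the example column shown above):

# average each row across all of the per-file float columns
row_means = merged_df.set_index('folder_paths').mean(axis=1)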

Question: Any ideas on how to make the code more efficient and where the bottleneck is? Also, I'm not sure whether merge is the right choice here.

Current idea: One solution I am considering is to preallocate the memory and specify the column types: col1 as a string and col2 as a float.

df = pd.DataFrame() # create an empty data frame

for i in range(1000):
    if i == 0:
        df = generate_new_df(arg1, arg2)
    else:
        df = pd.merge(df, generate_new_df(arg1, arg2), on='col1', how='outer')
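
For reference, a bare-bones sketch of the preallocation idea above (not the code I am running; n_rows and n_files are placeholder sizes, and it assumes every file lists the same rows in the same order so that no merge on col1 is needed):

import numpy as np
import pandas as pd

n_rows, n_files = 3000, 1000              # assumed sizes, per the note above
# preallocate one float column per file and fill it in place,
# instead of growing the frame through repeated merges
out = pd.DataFrame(np.empty((n_rows, n_files)),
                   columns=['file_%d' % i for i in range(n_files)])
for i in range(n_files):
    out.iloc[:, i] = generate_new_df(arg1, arg2)['col2'].values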

I also tried pd.concat, but the results were very similar: the time increases after every iteration.

df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)

Results with pd.concat:

run 1    time 0.34s
run 2    time 0.34s
run 3    time 0.32s
run 4    time 0.33s
run 5    time 0.42s
run 6    time 0.41s
run 7    time 0.45s
run 8    time 0.46s
run 9    time 0.54s
run 10   time 0.58s
run 11   time 0.73s
run 12   time 0.72s
run 13   time 0.79s
run 14   time 0.87s
run 15   time 0.95s
run 16   time 1.06s
run 17   time 1.19s
run 18   time 1.24s
run 19   time 1.37s
run 20   time 1.57s
run 21   time 1.68s
run 22   time 1.93s
run 23   time 1.86s
run 24   time 1.96s
run 25   time 2.11s
run 26   time 2.32s
run 27   time 2.42s
run 28   time 2.57s

Using pd.concat on a list of data frames (dfList) produced very similar results. The code and results are below.

dfList=[]
for i in range(1000):
    dfList.append(generate_new_df(arg1, arg2))

df = pd.concat(dfList, axis=1)

Results:

run 1 took 0.35 sec.
run 2 took 0.26 sec.
run 3 took 0.3 sec.
run 4 took 0.33 sec.
run 5 took 0.45 sec.
run 6 took 0.49 sec.
run 7 took 0.54 sec.
run 8 took 0.51 sec.
run 9 took 0.51 sec.
run 10 took 1.06 sec.
run 11 took 1.74 sec.
run 12 took 1.47 sec.
run 13 took 1.25 sec.
run 14 took 1.04 sec.
run 15 took 1.26 sec.
run 16 took 1.35 sec.
run 17 took 1.7 sec.
run 18 took 1.73 sec.
run 19 took 6.03 sec.
run 20 took 1.63 sec.
run 21 took 1.93 sec.
run 22 took 1.84 sec.
run 23 took 2.25 sec.
run 24 took 2.65 sec.
run 25 took 6.84 sec.
run 26 took 2.88 sec.
run 27 took 2.58 sec.
run 28 took 2.81 sec.
run 29 took 2.84 sec.
run 30 took 2.99 sec.
run 31 took 3.12 sec.
run 32 took 3.48 sec.
run 33 took 3.35 sec.
run 34 took 3.6 sec.
run 35 took 4.0 sec.
run 36 took 4.41 sec.
run 37 took 4.88 sec.
run 38 took 4.92 sec.
run 39 took 4.78 sec.
run 40 took 5.02 sec.
run 41 took 5.32 sec.
run 42 took 5.31 sec.
run 43 took 5.78 sec.
run 44 took 5.77 sec.
run 45 took 6.15 sec.
run 46 took 6.4 sec.
run 47 took 6.84 sec.
run 48 took 7.08 sec.
run 49 took 7.48 sec.
run 50 took 7.91 sec.

[Question Comments]:

  • Why are you merging 1000 times?
  • Well, each new data frame comes from a separate csv file, so I generate a df with just the information I need; there are roughly 1k to 10k such csv files.
  • What format are these files in? And why do you need to merge them rather than concatenate them?
  • As I wrote in the question, I'm not sure merge is the best approach, so whichever is faster and gets the job done is fine. The format is: column1 (string) and column2 (float), about 3k rows each.
  • Have a look at the answer to a similar question here. It simulates a merge by concatenating in batches, which is much more efficient. (Disclaimer: it's my answer)

Tags: python pandas


[Solution 1]:

It is still a little unclear exactly what your problem is, but I'm going to assume that the main bottleneck is that you are trying to load lots of data frames into a list all at once and running into memory/paging issues. With that in mind, here is an approach that may help, but you will have to test it yourself since I don't have access to your generate_new_df function or your data.

The approach is to use a variant of the merge_with_concat function from this answer: first merge the data frames together in smaller batches, and then merge those batches together at the end.

For example, if you have 1000 data frames, you could merge 100 of them at a time to get 10 big data frames, and then merge those final 10 together as a last step. This should ensure that you never hold too big a list of data frames at any one point.

You can use the two functions below (I'm assuming your generate_new_df function takes a file name as one of its arguments) and do something like this:

import pandas as pd

def chunk_dfs(file_names, chunk_size):
    """Yield lists of data frames, chunk_size at a time."""
    dfs = []
    for f in file_names:
        dfs.append(generate_new_df(f))
        if len(dfs) == chunk_size:
            yield dfs
            dfs = []
    if dfs:
        yield dfs  # whatever is left over after the last full chunk


def merge_with_concat(dfs, col):
    # set the merge column as the index so concat can align the rows,
    # then do a single outer join across all the frames at once
    dfs = (df.set_index(col, drop=True) for df in dfs)
    merged = pd.concat(dfs, axis=1, join='outer', copy=False)
    return merged.reset_index(drop=False)

col_name = "name_of_column_to_merge_on"
file_names = ['list/of', 'file/names', ...]
chunk_size = 100

merged = merge_with_concat((merge_with_concat(dfs, col_name) for dfs in chunk_dfs(file_names, chunk_size)), col_name)
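
The reason this should scale better than the original loop is that merging into one ever-growing data frame re-copies everything accumulated so far on every iteration, so the total work grows roughly quadratically with the number of files. Concatenating a whole chunk in a single pd.concat call copies each block of data only a few times, and keeping the chunks to a fixed size also bounds how many frames sit in memory at once.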

[Comments]:
