【Posted】: 2020-02-23 16:51:07
【Problem Description】:
print(df)

   A   B
   0  10
   1  30
   2  50
   3  20
   4  10
   5  30

Desired output — split the frame into sub-frames whenever the cumulative sum of column B reaches a threshold of 50:

   A   B
   0  10
   1  30

   A   B
   2  50

   A   B
   3  20
   4  10
   5  30

【Discussion】:
Tags: pandas dataframe split threshold
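For reference, the example frame above can be reconstructed like this (a minimal sketch; the column values are read off the printed frames, so treat the exact construction as an assumption):

```python
import pandas as pd

# A mirrors the index and B holds the values to accumulate,
# matching the frames printed above
df = pd.DataFrame({"A": range(6), "B": [10, 30, 50, 20, 10, 30]})
```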
You can use pd.cut on the cumulative sum of column B:
th = 50

# find the cumulative sum of B
cumsum = df.B.cumsum()
# create the bins with spacing of th (threshold)
bins = list(range(0, cumsum.max() + 1, th))
# group by (split by) the bins
groups = pd.cut(cumsum, bins)

for key, group in df.groupby(groups):
    print(group)
    print()
Output
   A   B
0  0  10
1  1  30

   A   B
2  2  50

   A   B
3  3  20
4  4  10
5  5  30
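As an aside not in the original answer: for this data the same labels can be produced without building explicit bins, by integer-dividing the shifted cumulative sum by the threshold (a sketch that assumes evenly spaced bins of width th, as above):

```python
import pandas as pd

df = pd.DataFrame({"A": range(6), "B": [10, 30, 50, 20, 10, 30]})
th = 50

# (cumsum - 1) // th maps each row to the bin its cumulative sum falls in
labels = (df.B.cumsum() - 1) // th
print(labels.tolist())  # [0, 0, 1, 2, 2, 2]
```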
【Discussion】:
A plain for loop with numba can be this fast.
Here is an approach that uses numba to speed up our for loop:
we check when the limit is reached, then reset the running total and assign a new group:
import numpy as np
from numba import njit

@njit
def cumsum_reset(array, limit):
    total = 0
    counter = 0
    groups = np.empty(array.shape[0], dtype=np.uint64)
    for idx, i in enumerate(array):
        total += i
        # start a new group when the running total reaches the limit,
        # or when the previous element hit the limit exactly
        if total >= limit or array[idx - 1] == limit:
            counter += 1
            groups[idx] = counter
            total = 0
        else:
            groups[idx] = counter
    return groups
grps = cumsum_reset(df['B'].to_numpy(), 50)

for _, grp in df.groupby(grps):
    print(grp, '\n')
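As a sanity check (not part of the original answer), the same grouping logic can be replicated without the numba dependency; note that at idx == 0 the values[idx - 1] check wraps around to the last element, mirroring numpy's negative-indexing behaviour in the jitted version:

```python
def cumsum_reset_py(values, limit):
    # Plain-Python replica of the cumsum_reset logic above
    total = 0
    counter = 0
    groups = []
    for idx, v in enumerate(values):
        total += v
        # new group when the running total reaches the limit,
        # or when the previous element hit the limit exactly
        if total >= limit or values[idx - 1] == limit:
            counter += 1
            total = 0
        groups.append(counter)
    return groups

print(cumsum_reset_py([10, 30, 50, 20, 10, 30], 50))  # [0, 0, 1, 2, 2, 2]
```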
Output
   A   B
0  0  10
1  1  30

   A   B
2  2  50

   A   B
3  3  20
4  4  10
5  5  30
Timings:
# create dataframe of 600k rows
dfbig = pd.concat([df]*100000, ignore_index=True)
dfbig.shape
(600000, 2)
# Erfan
%%timeit
cumsum_reset(dfbig['B'].to_numpy(), 50)
4.25 ms ± 46.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# Daniel Mesejo
def daniel_mesejo(th, column):
    cumsum = column.cumsum()
    bins = list(range(0, cumsum.max() + 1, th))
    groups = pd.cut(cumsum, bins)
    return groups
%%timeit
daniel_mesejo(50, dfbig['B'])
10.3 s ± 2.17 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
Conclusion: the numba for loop is roughly 2400x faster (10.3 s vs 4.25 ms).
【Discussion】:
You can get a big speedup in the Numba function by using a numpy array for groups instead of a list.
I get `type of variable cannot be determined`. @max9111
Allocate an array with groups = np.empty(array.shape[0], dtype=np.uint64) instead of groups = [], and write results into it with groups[idx] = counter instead of groups.append(counter).
I tried groups = np.array([]) and then groups = np.append(groups, counter); that gave me an error. @max9111
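The two allocation patterns discussed in these comments, as a minimal plain-NumPy illustration (the pre-allocation pattern is what lets numba infer a concrete type inside the @njit function):

```python
import numpy as np

n = 6

# Growing with np.append copies the whole array on every call and starts
# from an empty float64 array - the pattern that trips numba's type inference.
grown = np.array([])
for counter in range(n):
    grown = np.append(grown, counter)

# Pre-allocating once with an explicit dtype writes in place instead.
prealloc = np.empty(n, dtype=np.uint64)
for idx in range(n):
    prealloc[idx] = idx
```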