Approach #1
With the new information gained from the OP's comments, stating that only y changes in real time, we can do much better by heavily pre-processing everything around x. We will create a hashing array that stores stepped masks. For the part involving y, we simply index into that hashing array with the indices obtained from searchsorted, which gives an approximation of the final mask array. The last step of assigning the remaining boolean values is offloaded to numba, given the ragged nature of that work. This should also be beneficial if we decide to scale up the length of y.
Let's look at the implementation.
Pre-processing with x:
sidx = x.argsort()
ssidx = x.argsort().argsort()
# Choose a scale factor.
# 1. A small one stores more mapping info, hence is faster but occupies more memory.
# 2. A big one stores less mapping info, hence is slower but more memory-efficient.
scale_factor = 100
mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx
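To make the hashing array concrete, here is a toy illustration (hypothetical small inputs, not the benchmark data): row k of mapar is a precomputed mask marking the k*scale_factor smallest elements of x.

```python
import numpy as np

# Toy illustration of the hashing array (hypothetical small inputs):
# row k of mapar marks the k*scale_factor smallest elements of x.
x = np.array([0.5, 0.1, 0.9, 0.3])
ssidx = x.argsort().argsort()   # rank of each element: [2, 0, 3, 1]
scale_factor = 2
mapar = np.arange(0, len(x), scale_factor)[:, None] > ssidx
# Row 0 (threshold rank 0) marks nothing; row 1 (threshold rank 2)
# marks the two smallest values, 0.1 and 0.3:
print(mapar)   # [[False False False False]
               #  [False  True False  True]]
```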
Remaining steps with y:
import numba as nb
@nb.njit(parallel=True, fastmath=True)
def array_masking3(out, starts, idx, sidx):
    N = len(out)
    for i in nb.prange(N):
        for j in nb.prange(starts[i], idx[i]):
            out[i, sidx[j]] = True
    return out
idx = np.searchsorted(x,y,sorter=sidx)
s0 = idx//scale_factor
starts = s0*scale_factor
out = mapar[s0]
out = array_masking3(out, starts, idx, sidx)
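As a sanity check, here is an end-to-end run of the above on small data (a sketch, not the benchmarked setup), with the numba kernel replaced by an equivalent plain-Python loop so the snippet runs without numba installed. The result must agree exactly with the naive broadcasted comparison.

```python
import numpy as np

# Plain-Python stand-in for the numba kernel array_masking3,
# filling in the ranks between starts[i] and idx[i] for each row.
def array_masking3_py(out, starts, idx, sidx):
    for i in range(len(out)):
        for j in range(starts[i], idx[i]):
            out[i, sidx[j]] = True
    return out

rng = np.random.default_rng(0)
# Append 1.0 so that no y exceeds every x; otherwise s0 could index
# past the last row of mapar (negligible with 1M uniform samples).
x = np.append(rng.random(999), 1.0)
y = rng.random(20)

sidx = x.argsort()
ssidx = x.argsort().argsort()
scale_factor = 100
mapar = np.arange(0, len(x), scale_factor)[:, None] > ssidx

idx = np.searchsorted(x, y, sorter=sidx)
s0 = idx // scale_factor
starts = s0 * scale_factor
out = mapar[s0]                 # approximate mask (a copy, via fancy indexing)
out = array_masking3_py(out, starts, idx, sidx)

# Must agree exactly with the naive broadcasted comparison.
assert (out == (x[np.newaxis, :] < y[:, np.newaxis])).all()
```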
Benchmarks
In [2]: x = np.random.rand(1000000)
...: y = np.random.rand(200)
In [3]: ## Pre-processing step with "x"
...: sidx = x.argsort()
...: ssidx = x.argsort().argsort()
...: scale_factor = 100
...: mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx
In [4]: %%timeit
...: idx = np.searchsorted(x,y,sorter=sidx)
...: s0 = idx//scale_factor
...: starts = s0*scale_factor
...: out = mapar[s0]
...: out = array_masking3(out, starts, idx, sidx)
41 ms ± 141 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# A 10x smaller hashing array has similar timings
In [7]: scale_factor = 1000
...: mapar = np.arange(0,len(x),scale_factor)[:,None] > ssidx
In [8]: %%timeit
...: idx = np.searchsorted(x,y,sorter=sidx)
...: s0 = idx//scale_factor
...: starts = s0*scale_factor
...: out = mapar[s0]
...: out = array_masking3(out, starts, idx, sidx)
40.6 ms ± 196 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# @silgon's soln
In [5]: %timeit x[np.newaxis,:] < y[:,np.newaxis]
138 ms ± 896 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Approach #2
This borrows a good deal from the OP's solution.
import numba as nb
@nb.njit(parallel=True)
def array_masking2(mask1D, mask_out, idx, pt):
    n = len(idx)
    for j in nb.prange(len(pt)):
        if mask1D[j]:
            for i in nb.prange(pt[j], n):
                mask_out[j, idx[i]] = False
        else:
            for i in nb.prange(pt[j]):
                mask_out[j, idx[i]] = True
    return mask_out
def app2(idx, pt):
    m, n = len(pt), len(idx)
    mask1 = pt > len(x)//2
    mask2 = np.broadcast_to(mask1[:, None], (m, n)).copy()
    return array_masking2(mask1, mask2, idx, pt)
So, the idea is that once a row has more than half of its indices to be set True, we pre-assign that row as all True and switch over to setting False instead. This results in fewer memory accesses and hence a noticeable performance boost.
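Here is a self-contained sketch of that idea on small data. The definitions of idx and pt are assumptions spelled out from the OP's setup (idx sorts x; pt counts the x values below each y), and the numba kernel is stood in by an equivalent plain-Python loop so it runs without numba.

```python
import numpy as np

# Plain-Python stand-in for the numba kernel array_masking2:
# rows flagged in mask1D start all-True and get their False positions
# cleared; the rest start all-False and get their True positions set.
def array_masking2_py(mask1D, mask_out, idx, pt):
    n = len(idx)
    for j in range(len(pt)):
        if mask1D[j]:
            for i in range(pt[j], n):
                mask_out[j, idx[i]] = False
        else:
            for i in range(pt[j]):
                mask_out[j, idx[i]] = True
    return mask_out

rng = np.random.default_rng(0)
x = rng.random(1000)
y = rng.random(20)

# Assumed inputs, per the OP's setup:
idx = x.argsort()                          # sorting order of x
pt = np.searchsorted(x, y, sorter=idx)     # count of x values below each y

m, n = len(pt), len(idx)
mask1 = pt > len(x) // 2
mask2 = np.broadcast_to(mask1[:, None], (m, n)).copy()
out = array_masking2_py(mask1, mask2, idx, pt)

# Must agree exactly with the naive broadcasted comparison.
assert (out == (x[np.newaxis, :] < y[:, np.newaxis])).all()
```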
Benchmarks
OP's solution:
@nb.njit(parallel=True, fastmath=True)
def array_masking(mask, idx, pt):
    for j in nb.prange(pt.shape[0]):
        for i in nb.prange(pt[j]):
            mask[j, idx[i]] = True
    return mask

def app1(idx, pt):
    m, n = len(pt), len(idx)
    mask = np.zeros((m, n), dtype='bool')
    return array_masking(mask, idx, pt)
Timings -
In [5]: np.random.seed(0)
...: x = np.random.rand(1000000)
...: y = np.random.rand(200)
In [6]: %timeit app1(idx, pt)
264 ms ± 8.91 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [7]: %timeit app2(idx, pt)
165 ms ± 3.43 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)