For something like this I'd use np.einsum, which makes it easy to write down what you want to happen in terms of the index manipulations you want:
fast = np.einsum('ij,jkl->ikl', A, B)
This gives me the same result (dropping 50000 -> 500 so the loop finishes quickly):
A = np.random.random((500, 2000))
B = np.random.random((2000, 10, 10))
finalarray = np.zeros((500, 10, 10))
for k in range(500):
    temp = A[k, :].reshape(2000, 1, 1)
    finalarray[k, :, :] = np.sum(B * temp, axis=0)
fast = np.einsum('ij,jkl->ikl', A, B)
gives me
In [81]: (finalarray == fast).all()
Out[81]: True
with reasonable performance even in the 50000 case:
In [88]: %time fast = np.einsum('ij,jkl->ikl', A, B)
Wall time: 4.93 s
In [89]: fast.shape
Out[89]: (50000, 10, 10)
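To make the 'ij,jkl->ikl' spec concrete: for each i it sums A[i, j] * B[j, :, :] over the shared index j. A minimal sketch (with made-up small shapes) spelling that out via broadcasting:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 7))
B = rng.random((7, 3, 3))

# 'ij,jkl->ikl': contract over the shared index j
fast = np.einsum('ij,jkl->ikl', A, B)

# The same contraction written out with explicit broadcasting:
# A[:, :, None, None] has shape (5, 7, 1, 1), B[None] has shape (1, 7, 3, 3);
# multiplying and summing over axis 1 (the j axis) gives shape (5, 3, 3).
slow = (A[:, :, None, None] * B[None]).sum(axis=1)

assert np.allclose(fast, slow)
```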
Alternatively, in this case, you can use tensordot:
faster = np.tensordot(A, B, axes=1)
which will be several times faster (at the cost of being less general):
In [29]: A = np.random.random((50000, 2000))
In [30]: B = np.random.random((2000, 10, 10))
In [31]: %time fast = np.einsum('ij,jkl->ikl', A, B)
Wall time: 5.08 s
In [32]: %time faster = np.tensordot(A, B, axes=1)
Wall time: 504 ms
In [33]: np.allclose(fast, faster)
Out[33]: True
I had to use allclose here because the values wind up slightly different:
In [34]: abs(fast - faster).max()
Out[34]: 2.7853275241795927e-12
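One way to see why tensordot with axes=1 can be so much faster: it is equivalent to flattening B's trailing axes and doing a single plain matrix multiply, which hands the whole contraction to the optimized BLAS routine. A small sketch (hypothetical small shapes) of that equivalence:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((50, 20))
B = rng.random((20, 3, 3))

# axes=1 contracts A's last axis with B's first axis
td = np.tensordot(A, B, axes=1)

# Equivalent: reshape B to 2-D, matrix-multiply, reshape the result back.
# Different summation orders in the two paths can give tiny floating-point
# differences, hence allclose rather than exact equality.
mm = (A @ B.reshape(20, 9)).reshape(50, 3, 3)

assert np.allclose(td, mm)
```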