[Posted]: 2020-12-09 22:57:17
[Problem description]:
Newbie here, starting out with Numba/CUDA. I wrote this small test script to compare the speed of @jit and @cuda.jit, just to get a feel for it. It computes 10M steps of the logistic equation for 256 separate instances. The CUDA part takes roughly 1.2 s to finish, while the CPU 'jitted' part finishes in close to 5 s (only one thread is used on the CPU). So the GPU (a dedicated GTX1080TI doing nothing else) gives about a 4x speedup. I expected the CUDA part, which executes all 256 instances in parallel, to be much faster. What am I doing wrong?
Here is the working example:
#!/usr/bin/python3
# logistic equation on gpu/cpu comparison
import os, sys

# Set environment variables (needed for numba 0.42 to find nvvm)
os.environ['NUMBAPRO_NVVM'] = '/usr/lib/x86_64-linux-gnu/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/lib/nvidia-cuda-toolkit/libdevice/'

from time import time
from scipy import *
from numba import cuda, jit
from numba import int64, int32, float64

@cuda.jit
def logistic_cuda(array_in, array_out):
    pos = cuda.grid(1)
    x = array_in[pos]
    for k in range(10*1000*1000):
        x = 3.9 * x * (1 - x)
    array_out[pos] = x

@jit
def logistic_cpu(array_in, array_out):
    for pos, x in enumerate(array_in):
        for k in range(10*1000*1000):
            x = 3.9 * x * (1 - x)
        array_out[pos] = x

if __name__ == '__main__':
    N = 256
    in_ary = random.uniform(low=0.2, high=0.9, size=N).astype('float32')
    out_ary = zeros(N, dtype='float32')

    t0 = time()
    # explicit copying, not really needed
    d_in_ary = cuda.to_device(in_ary)
    d_out_ary = cuda.to_device(out_ary)
    t1 = time()
    logistic_cuda[1, N](d_in_ary, d_out_ary)
    cuda.synchronize()
    t2 = time()
    out_ary = d_out_ary.copy_to_host()
    t3 = time()
    print(out_ary)
    print('Total time cuda: %g seconds.' % (t3-t0))

    out_ary2 = zeros(N)
    t4 = time()
    logistic_cpu(in_ary, out_ary2)
    t5 = time()
    print('Total time cpu: %g seconds.' % (t5-t4))
    print('\nDifference:')
    print(out_ary2 - out_ary)

# Total time cuda: 1.19364 seconds.
# Total time cpu: 5.01788 seconds.
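As a sanity check of the update rule that both kernels implement, the same recurrence can be run in plain Python (no numba, and only a handful of steps rather than 10M, so it finishes instantly). The helper name `logistic_steps` is just for illustration, not part of the script above:

```python
def logistic_steps(x, steps, r=3.9):
    """Iterate the logistic map x <- r*x*(1-x) for a given number of steps."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# One step from x0 = 0.5: 3.9 * 0.5 * 0.5 = 0.975
print(logistic_steps(0.5, 1))   # → 0.975

# After more steps the trajectory stays inside (0, 1), as the map
# guarantees for starting values in (0, 1) and 0 < r <= 4
print(0.0 < logistic_steps(0.5, 10) < 1.0)   # → True
```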
Thanks!
[Discussion]:
-
4x sounds about right to me. It won't be 256x. There is quite a bit of overhead.
-
The copying to and from the GPU in this case takes about 17 ms, so is the rest all in compilation?
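One part of the overhead being discussed is numba's JIT compilation, which happens on the first call to a decorated function. The usual way to exclude it from a benchmark is to call the function once as a warm-up and then time a second call. A minimal sketch of that pattern, using a plain Python stand-in (`measure` and `work` are hypothetical names, not from the script above; with numba installed, the same pattern applies to the @jit and @cuda.jit functions):

```python
from time import perf_counter

def measure(fn, *args):
    """Time fn twice: the first call includes any one-time setup
    (e.g. JIT compilation), the second measures steady-state execution."""
    t0 = perf_counter(); fn(*args); t1 = perf_counter()
    t2 = perf_counter(); fn(*args); t3 = perf_counter()
    return t1 - t0, t3 - t2

def work(n):
    # stand-in workload
    return sum(i * i for i in range(n))

first, second = measure(work, 100000)
print(first >= 0.0 and second >= 0.0)   # → True
```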
Tags: python performance numba