[Question Title]: Keras (TensorFlow) finds GPU, but only runs on CPU w/ CUDA 10.1
[Posted]: 2020-01-20 21:00:00
[Question]:

Many questions about this have already been posted, but none of them really answers mine, or the problem described differs slightly from what I am running into.

I am on Ubuntu 18.04 and installed Keras with CUDA 10.1 and tensorflow-gpu, following the default instructions.

When running, TensorFlow detects that I have a GPU, but when I check CPU vs. GPU usage it still seems to run only on the CPU. I came across this thread and ran the script from it. It confirmed my suspicion that for some reason it cannot use my GPU:

2019-09-19 21:05:57.730197: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-09-19 21:05:57.730247: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1663] Cannot dlopen some GPU libraries. Skipping registering GPU devices...
2019-09-19 21:05:57.730281: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-19 21:05:57.730303: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 
2019-09-19 21:05:57.730317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N 
2019-09-19 21:05:57.922335: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
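The key line above is `Cannot dlopen some GPU libraries`: the log shows `libcudnn.so.7` loading fine, so some other CUDA runtime library is the one that fails. A minimal, TensorFlow-free way to check which shared libraries the dynamic linker can actually load (the library names below are assumptions taken from typical TF 1.x startup logs; adjust them to the names in your own log; Linux only):

```python
import ctypes

def can_dlopen(libname):
    """Return True if the dynamic linker can load the given shared library."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# Library names a CUDA-10.0 TensorFlow build typically tries to dlopen
# at startup (taken from TF 1.14-era logs, not from this exact setup):
for lib in ["libcudart.so.10.0", "libcublas.so.10.0", "libcudnn.so.7"]:
    print(lib, "->", "OK" if can_dlopen(lib) else "MISSING")
```

Any library reported MISSING here is one TensorFlow will also fail to dlopen, which is what triggers the `Skipping registering GPU devices` fallback to the CPU.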

Listing the devices gives:

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 57580461479478464
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 6376288845656491190
physical_device_desc: "device: XLA_GPU device"
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 17409275481256463364
physical_device_desc: "device: XLA_CPU device"
]

But halfway through the log, TensorFlow outputs the following:

2019-09-19 20:44:32.676537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: GeForce GTX 860M major: 5 minor: 0 memoryClockRate(GHz): 1.0195
pciBusID: 0000:01:00.0

./deviceQuery output:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 860M"
  CUDA Driver Version / Runtime Version          10.1 / 10.1
  CUDA Capability Major/Minor version number:    5.0
  Total amount of global memory:                 2004 MBytes (2101870592 bytes)
  ( 5) Multiprocessors, (128) CUDA Cores/MP:     640 CUDA Cores
  GPU Max Clock rate:                            1020 MHz (1.02 GHz)
  Memory Clock rate:                             2505 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS

Does anyone know why TensorFlow cannot find my GPU, or how to make it available?

Thanks in advance!

[Comments]:

  • I think it is because of CUDA 10.1. See here
  • Are you 100% sure it isn't running on the GPU? Depending on the model, GPU utilization may well never exceed 10%.
  • @TheGuywithTheHat I'm not 100% sure, but when I watch nvidia-smi, python3 does show up in the list at the bottom, yet with at most 24 MB of memory and almost no utilization, while my CPU usage at the top reaches 700%, so I guess not?
  • @Hamed It was indeed because of the wrong CUDA version! Feel free to add an answer and I will upvote it and mark it as the solution ;)

Tags: python tensorflow keras


[Solution 1]:

This is because of CUDA 10.1. You need to downgrade to CUDA 10.0.
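For context on why the downgrade helps: each TensorFlow release is built against one specific CUDA runtime, and the 2019-era `tensorflow-gpu` pip wheels (1.13 through 2.0) all expected CUDA 10.0; CUDA 10.1 support only arrived with TF 2.1. A small sketch of that mapping (version pairs taken from TensorFlow's published tested-build-configurations table):

```python
# CUDA runtime each tensorflow-gpu pip wheel was built against,
# per TensorFlow's "tested build configurations" table.
REQUIRED_CUDA = {
    "1.13": "10.0",
    "1.14": "10.0",
    "1.15": "10.0",
    "2.0": "10.0",
    "2.1": "10.1",
}

def cuda_for_tf(tf_version):
    """Look up the CUDA runtime a given TF release expects."""
    major_minor = ".".join(tf_version.split(".")[:2])
    return REQUIRED_CUDA.get(major_minor, "unknown")

print(cuda_for_tf("1.14.0"))  # prints 10.0 -- not the installed 10.1
```

So with CUDA 10.1 installed system-wide, a TF 1.14 wheel looks for `libcudart.so.10.0`, fails to dlopen it, and silently falls back to the CPU.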

Here is a similar solution
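One way to act on this fix is to install a CUDA 10.0 runtime alongside (or instead of) 10.1. A setup sketch, under the assumption that NVIDIA's Ubuntu 18.04 apt repository is already configured, or that a conda environment is in use; verify the package names against your own repository before running:

```shell
# Option 1: side-by-side CUDA 10.0 from NVIDIA's apt repo
# (installs under /usr/local/cuda-10.0; point the loader at its lib64 dir)
sudo apt-get install cuda-toolkit-10-0
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH

# Option 2: let conda manage the CUDA runtime inside the environment
conda install cudatoolkit=10.0 cudnn
```

After either route, `libcudart.so.10.0` should be loadable and TensorFlow should register `/device:GPU:0` instead of only `XLA_GPU`.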

[Discussion]:
