【Question Title】:Error installing tensorflow, serving in ubuntu
【Posted】:2017-01-30 08:56:42
【Question Description】:

I am installing TensorFlow Serving, and as part of that I have to build TensorFlow on Ubuntu. I ran the ./configure command in the tf root directory. This is the output:

Please specify the location of python. [Default is /usr/bin/python]: 
Please specify optimization flags to use during compilation [Default is -march=native]:        
Do you wish to use jemalloc as the malloc implementation? [Y/n] y
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] y
Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] y
Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] y
XLA JIT support will be enabled for TensorFlow
Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]

Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] y
OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 
Please specify the location where CUDA  toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 
Please specify the location where cuDNN  library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: 
Invalid C++ compiler path.  cannot be found
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: /usr/bin/g++
Please specify which C compiler should be used as the host C compiler. [Default is ]: /usr/bin/gcc
Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]: 
.................................................................
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
.........
ERROR: package contains errors: tensorflow/stream_executor.
ERROR: error loading package 'tensorflow/stream_executor': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda': Traceback (most recent call last):
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 813
        _create_cuda_repository(repository_ctx)
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 727, in _create_cuda_repository
        _get_cuda_config(repository_ctx)
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 584, in _get_cuda_config
        _cudnn_version(repository_ctx, cudnn_install_base..., ...)
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 295, in _cudnn_version
        _find_cuda_define(repository_ctx, cudnn_install_base..., ...)
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 270, in _find_cuda_define
        auto_configure_fail("Cannot find cudnn.h at %s" % st...))
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 93, in auto_configure_fail
        fail("
%sAuto-Configuration Error:%s ...))

Auto-Configuration Error: Cannot find cudnn.h at /usr/lib/x86_64-linux-gnu/include/cudnn.h
.

There is no folder named /usr/lib/x86_64-linux-gnu/include. I do have the libcudnn.so file in /usr/lib/x86_64-linux-gnu, and cudnn.h in /usr/include. I don't know how the configure script builds that path, but it cannot find cuDNN, even though I have successfully installed Caffe, whose CMakeLists.txt finds the CUDA and cuDNN install paths without trouble. How can I fix this?
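The layout described above (header in /usr/include, library in /usr/lib/x86_64-linux-gnu) is the Debian/Ubuntu package layout, while configure's default probe expects everything under a single CUDA-style root and checks for `<root>/include/cudnn.h`. A common workaround is to mirror the two files into the toolkit tree. A minimal sketch of the idea, using a scratch directory to stand in for /usr/local/cuda (on a real system you would target /usr/local/cuda with sudo; the `CUDNN_MAJOR` stub is only a placeholder for the real header):

```shell
#!/bin/sh
# Sketch: mirror Debian-layout cuDNN files into one CUDA-style root
# so a probe like configure's can find them. A scratch dir stands in
# for /usr/local/cuda here.
set -e
ROOT=$(mktemp -d)                      # stand-in for /usr/local/cuda
mkdir -p "$ROOT/include" "$ROOT/lib64"

# Stand-ins for the real files (on a real machine these would be
# /usr/include/cudnn.h and /usr/lib/x86_64-linux-gnu/libcudnn.so).
SRC=$(mktemp -d)
echo '#define CUDNN_MAJOR 5' > "$SRC/cudnn.h"
touch "$SRC/libcudnn.so"

cp "$SRC/cudnn.h" "$ROOT/include/"
ln -s "$SRC/libcudnn.so" "$ROOT/lib64/libcudnn.so"

# configure's probe boils down to an existence check like this:
test -f "$ROOT/include/cudnn.h" && echo "found cudnn.h under $ROOT"
```

With the files mirrored this way, accepting the default /usr/local/cuda answer at the cuDNN prompt should let the probe succeed.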

【Comments】:

  • This sounds like GitHub issue github.com/tensorflow/tensorflow/issues/6850. Can you try again at TensorFlow head and see whether the problem is resolved? If not, please follow up on that GitHub issue.
  • Do you have an NVIDIA GPU in your system? If so, what do you get when you run nvidia-smi and nvcc -V?

Tags: ubuntu tensorflow tensorflow-serving


【Solution 1】:

Assuming you have actually installed cuDNN,
use -
which nvcc

to find where your CUDA is installed.

In my case it returned - /usr/local/cuda-6.5/bin/nvcc

So cudnn.h is located in /usr/local/cuda-6.5/include (if cuDNN is installed there).

While configuring TensorFlow you will be asked -
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Here you have to specify the cuDNN location explicitly.
In my case it was /usr/local/cuda-6.5/include/
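The `which nvcc` step above can be turned into a tiny derivation: the toolkit root is two directory levels above the nvcc binary, and (when cuDNN is installed into the toolkit) cudnn.h sits under `<root>/include`. A sketch using the example path from this answer; substitute your own `which nvcc` output:

```shell
#!/bin/sh
# Derive the CUDA root and the cuDNN include dir from the nvcc path.
# /usr/local/cuda-6.5/bin/nvcc is the example path from the answer;
# on your own machine use: NVCC=$(which nvcc)
NVCC=/usr/local/cuda-6.5/bin/nvcc
CUDA_ROOT=$(dirname "$(dirname "$NVCC")")   # strip the trailing /bin/nvcc
echo "$CUDA_ROOT"            # prints /usr/local/cuda-6.5
echo "$CUDA_ROOT/include"    # prints /usr/local/cuda-6.5/include
```

Whichever directory `CUDA_ROOT` resolves to is the one worth giving at the cuDNN prompt if cudnn.h actually lives under it.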

【Discussion】:
