【Posted】: 2019-01-25 00:00:10
【Question】:
I am trying to first run a model locally on the CPU with TPUEstimator, setting use_tpu=False when the estimator is initialized, to verify that it works. I get this error when calling train.
InternalError: failed to synchronously memcpy host-to-device: host 0x7fcc7e4d4000 to device 0x1deffc002 size 4096: Failed precondition: Unable to enqueue when not opened, queue: [0000:00:04.0 PE0 C0 MC0 TN0 Queue HBM_WRITE]. State is: CLOSED
[[Node: optimizer/gradients/neural_network/fully_connected_2/BiasAdd_grad/BiasAddGrad_G14 = _Recv[client_terminated=false, recv_device="/job:worker/replica:0/task:0/device:TPU:0", send_device="/job:worker/replica:0/task:0/device:CPU:0", send_device_incarnation=-7832507818616568453, tensor_name="edge_42_op...iasAddGrad", tensor_type=DT_FLOAT, _device="/job:worker/replica:0/task:0/device:TPU:0"]()]]
It looks like it is still trying to use the TPU, since the error says recv_device="/job:worker/replica:0/task:0/device:TPU:0". Why is it trying to use the TPU when use_tpu is set to False?
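For reference, a minimal configuration sketch of the setup being described (TF 1.x `tf.contrib.tpu` API). Note that `use_tpu=False` only controls how the model function is built; if the `RunConfig`'s `master` (or `cluster`) still points at a TPU worker, the session can still be placed on the TPU host, which matches the `/job:worker/.../device:TPU:0` in the error. The `model_fn` and `model_dir` names here are placeholders, not from the original post:

```python
import tensorflow as tf

# Hypothetical model_fn for illustration only.
def model_fn(features, labels, mode, params):
    ...

# For a purely local CPU run, leave master/cluster unset so the
# session does not connect to a TPU worker at all.
run_config = tf.contrib.tpu.RunConfig(
    master=None,          # assumption: pointing this at a TPU grpc address
    cluster=None,         # would still place ops on the TPU host
    model_dir="/tmp/model",
)

estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=model_fn,
    config=run_config,
    use_tpu=False,        # builds the model without TPU rewriting
    train_batch_size=32,
)
```

This is a sketch under the assumption that the question uses the `tf.contrib.tpu` estimator from TensorFlow 1.x, as was standard at the posting date.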
【Discussion】:
Tags: google-cloud-platform google-cloud-tpu