[Title]: Pytorch CUDA error: invalid configuration argument
[Posted]: 2020-05-21 13:21:16
[Question]:

I recently added a new component to my loss function. The new code runs on the CPU, but when I run it on the GPU I get the following error, apparently related to the backward pass:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-12-56dcbddd5230> in <module>
     20 recall = Recall(N_RECALL_CAND, K)
     21 #run the model
---> 22 train_loss, val_loss = fit(triplet_train_loader, triplet_test_loader, model, loss_fn, optimizer, scheduler, N_EPOCHS, cuda, LOG_INT)
     23 #measure recall

~/thesis/trainer.py in fit(train_loader, val_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval, metrics, start_epoch)
     24         scheduler.step()
     25         # Train stage
---> 26         train_loss, metrics, writer_train_index = train_epoch(train_loader, model, loss_fn, optimizer, cuda, log_interval, metrics, writer, writer_train_index)
     27 
     28         message = 'Epoch: {}/{}. Train set: Average loss: {:.4f}'.format(epoch + 1, n_epochs, train_loss)

~/thesis/trainer.py in train_epoch(train_loader, model, loss_fn, optimizer, cuda, log_interval, metrics, writer, writer_train_index)
     80         losses.append(loss.item())
     81         total_loss += loss.item()
---> 82         loss.backward()
     83         optimizer.step()
     84 

/opt/anaconda3/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    116                 products. Defaults to ``False``.
    117         """
--> 118         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    119 
    120     def register_hook(self, hook):

/opt/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     91     Variable._execution_engine.run_backward(
     92         tensors, grad_tensors, retain_graph, create_graph,
---> 93         allow_unreachable=True)  # allow_unreachable flag
     94 
     95 

RuntimeError: CUDA error: invalid configuration argument

Here is a copy of the code in the loss function where it breaks:

def forward(self, anchor, positive, negative, model, size_average=True):
    #regular triplet loss. This works on GPU and CPU
    distance_positive = (anchor - positive).pow(2).sum(1)  # .pow(.5)
    distance_negative = (anchor - negative).pow(2).sum(1)  # .pow(.5)
    losses = F.relu(distance_positive - distance_negative + self.margin)

    #the additional component that causes the error. This will run on CPU but fails on GPU
    anchor_dists = torch.cdist(model.embedding_net.anchor_net.anchors, model.embedding_net.anchor_net.anchors)
    t = (self.beta * F.relu(self.rho - anchor_dists))
    regularization = t.sum() - torch.diag(t).sum()

    # parenthesized so the regularization term is included in both branches
    return regularization + (losses.mean() if size_average else losses.sum())
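As a side note, the regularization term above is just the sum of the off-diagonal entries of `t`. A small self-contained sketch (with made-up `beta`, `rho`, and anchor values standing in for the model's attributes) showing an equivalent masked formulation:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
beta, rho = 0.1, 1.0          # hypothetical hyperparameters
anchors = torch.randn(6, 4)   # stand-in for model.embedding_net.anchor_net.anchors

anchor_dists = torch.cdist(anchors, anchors)
t = beta * F.relu(rho - anchor_dists)

# As in the question: total sum minus the diagonal
reg_question = t.sum() - torch.diag(t).sum()

# Equivalent: zero out the diagonal first, then sum
reg_masked = (t * (1.0 - torch.eye(t.size(0)))).sum()
```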

The error occurs with a batch size of 1, on the first backward pass. The answer here suggests it has to do with running out of memory, but my model is not particularly large:

TripletNet(
  (embedding_net): EmbeddingNet(
    (anchor_net): AnchorNet(anchors torch.Size([128, 192]), biases torch.Size([128]))
    (embedding): Sequential(
      (0): AnchorNet(anchors torch.Size([128, 192]), biases torch.Size([128]))
      (1): Tanh()
    )
  )
)

The available memory on my GPU is 8 GB, far more than the model and the 128x128 cdist result require.

I have no idea how to start debugging this. If it is an out-of-memory situation caused by the intermediate state being tracked, how can I work around it? Any help is appreciated!

Edit: monitoring GPU memory usage shows I am well below the memory limit at the time of the crash.
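For reference, one minimal way to confirm this from inside the training loop is to query PyTorch's own allocator counters (a sketch; it falls back to a message on CPU-only machines):

```python
import torch

def gpu_mem_report():
    # Report CUDA memory held by PyTorch's caching allocator,
    # or note that no GPU is present.
    if not torch.cuda.is_available():
        return "no CUDA device available"
    alloc = torch.cuda.memory_allocated() / 1024 ** 2
    reserved = torch.cuda.memory_reserved() / 1024 ** 2
    return f"allocated: {alloc:.1f} MiB, reserved: {reserved:.1f} MiB"

print(gpu_mem_report())
```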

[Comments]:

  • Did you ever figure out the cause?
  • @dashesy see my answer below

Tags: python pytorch


[Solution 1]:

According to this thread on the PyTorch forums, upgrading to PyTorch 1.5.0 should resolve this issue.
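If upgrading is not immediately possible, one workaround (my suggestion, not taken from the linked thread) is to bypass `torch.cdist` and its backward kernel entirely by computing the pairwise Euclidean distances manually from the quadratic expansion:

```python
import torch

def pairwise_dist(x, y):
    # Pairwise Euclidean distances via the expansion
    # ||x_i - y_j||^2 = ||x_i||^2 + ||y_j||^2 - 2 <x_i, y_j>
    x_sq = (x ** 2).sum(dim=1, keepdim=True)       # (n, 1)
    y_sq = (y ** 2).sum(dim=1, keepdim=True).t()   # (1, m)
    sq = (x_sq + y_sq - 2.0 * x @ y.t()).clamp_min(0.0)
    # small epsilon keeps sqrt's gradient finite at zero distance
    return (sq + 1e-12).sqrt()

torch.manual_seed(0)
a = torch.randn(4, 3, requires_grad=True)
d = pairwise_dist(a, a)
d.sum().backward()  # backward pass runs without the cdist kernel
```

The forward result matches `torch.cdist` up to the epsilon, and the whole computation goes through plain matmul/reduction kernels.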

[Discussion]:
