[Posted]: 2022-01-30 00:32:37
[Question]:
I'm trying to train my BigBird model (BigBirdForSequenceClassification). It gets to the training step and then fails with the following error:
Traceback (most recent call last):
  File "C:\Users\######", line 189, in <module>
    train_loss, _ = train()
  File "C:\Users\######", line 152, in train
    loss = cross_entropy(preds, labels)
  File "C:\Users\#####\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\######\venv\lib\site-packages\torch\nn\modules\loss.py", line 211, in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
  File "C:\Users\######\venv\lib\site-packages\torch\nn\functional.py", line 2532, in nll_loss
    return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
TypeError: nll_loss_nd(): argument 'input' (position 1) must be Tensor, not tuple
As far as I understand, the problem occurs because the model call inside train() returns a tuple. My question is: how should I handle this? How do I change things so that the loss receives a tensor instead of a tuple? I've seen similar questions here, but none of the solutions helped, not even
model = BigBirdForSequenceClassification(config).from_pretrained(checkpoint, return_dict=False)
(When I don't add return_dict=False I get a similar error message, except it says "TypeError: nll_loss_nd(): argument 'input' (position 1) must be Tensor, not SequenceClassifierOutput".)
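To make the two error variants concrete, here is a stdlib-only sketch in which a plain tuple and a SimpleNamespace stand in for the model's two output styles (the real return_dict=True object is transformers' SequenceClassifierOutput; SimpleNamespace is only an illustrative stand-in). In both cases the logits have to be unwrapped before they reach the loss:

```python
from types import SimpleNamespace

logits = [[0.2, 0.8], [0.9, 0.1]]   # placeholder for the real logits tensor

# return_dict=False: the forward pass returns a plain tuple with logits first
tuple_output = (logits,)
# return_dict=True: the forward pass returns an object exposing a .logits attribute
dict_output = SimpleNamespace(logits=logits)

# Neither wrapper is the tensor itself; the loss needs the unwrapped logits:
print(tuple_output[0] is logits)     # index access for the tuple style
print(dict_output.logits is logits)  # attribute access for the dict style
```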
Please see my training code below:
def train():
    model.train()
    total_loss = 0
    total_preds = []
    for step, batch in enumerate(train_dataloader):
        if step % 10 == 0 and not step == 0:
            print('Batch {:>5,} of {:>5,}.'.format(step, len(train_dataloader)))
        batch = [r.to(device) for r in batch]
        sent_id, mask, labels = batch
        preds = model(sent_id, mask)
        loss = cross_entropy(preds, labels)
        total_loss = total_loss + loss.item()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        optimizer.zero_grad()
        preds = preds.detach().cpu().numpy()
        total_preds.append(preds)
    avg_loss = total_loss / len(train_dataloader)
    total_preds = np.concatenate(total_preds, axis=0)
    return avg_loss, total_preds
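For reference, the failure in the loss call above can be reproduced in isolation with a minimal torch-only sketch (dummy logits and labels standing in for the real model output; this is an illustration of the type mismatch, not my actual training code):

```python
import torch
from torch.nn import CrossEntropyLoss

cross_entropy = CrossEntropyLoss()

# Dummy stand-ins for the real model output and labels (illustrative only):
logits = torch.randn(4, 2)           # batch of 4 examples, 2 classes
labels = torch.tensor([0, 1, 1, 0])
preds = (logits,)                    # with return_dict=False the forward pass returns a tuple

try:
    cross_entropy(preds, labels)     # same failure mode as in train()
except TypeError as err:
    print("loss on tuple fails:", err)

loss = cross_entropy(preds[0], labels)  # unwrapping the tuple gives the logits tensor
print("loss on tensor:", float(loss))
```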
Then:
for epoch in range(epochs):
    print('\n Epoch {:} / {:}'.format(epoch + 1, epochs))
    train_loss, _ = train()
    train_losses.append(train_loss)

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
Any help with this case is greatly appreciated, and thanks in advance!
[Discussion]:
Tags: python pytorch huggingface-transformers bert-language-model