[Posted at]: 2021-11-09 20:30:28
[Problem description]:
I trained the same PyTorch model on an Ubuntu system with a Tesla K80 GPU and got about 32% accuracy, but when I run it on the CPU the accuracy is 43%. The CUDA toolkit and cuDNN libraries are also installed. NVIDIA driver: 470.63.01
nvcc version: 10.1
What could cause such a large difference?
For more details: I use this code https://github.com/copenlu/xformer-multi-source-domain-adaptation and modified it for my question-answering problem. The model class is:
class MultiViewTransformerNetworkAveragingIndividuals(nn.Module):
    """Multi-view transformer network for domain adaptation."""

    def __init__(self, bert_model, bert_config, n_domains: int = 2, n_classes: int = 2):
        super(MultiViewTransformerNetworkAveragingIndividuals, self).__init__()
        self.domain_experts = nn.ModuleList([
            AutoModelForQuestionAnswering.from_pretrained(bert_model, config=bert_config)
            for _ in range(n_domains)
        ])
        self.shared_bert = AutoModelForQuestionAnswering.from_pretrained(bert_model, config=bert_config)
        self.n_domains = n_domains
        self.n_classes = n_classes
        # Default weight is averaging
        self.weights = [1. / (self.n_domains + 1)] * (self.n_domains + 1)
        self.average = False

    def forward(
            self,
            input_ids: torch.LongTensor,
            attention_mask: torch.LongTensor,
            head_mask=None,
            inputs_embeds=None,
            start_positions=None,
            end_positions=None,
            output_attentions=None,
            output_hidden_states=None,
            return_dict=None,
            domains: torch.LongTensor = None,
            return_logits: bool = False
    ):
        outputs = self.shared_bert(
            input_ids, attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds,
            output_attentions=output_attentions, output_hidden_states=output_hidden_states,
            return_dict=return_dict)
        logits_shared_start = outputs[0]
        logits_shared_end = outputs[1]
        softmax = nn.Softmax(dim=-1)
        if not self.average:
            if domains is not None:
                logits = self.domain_experts[domains[0]](
                    input_ids, attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds,
                    output_attentions=output_attentions, output_hidden_states=output_hidden_states,
                    return_dict=return_dict)
                logits_start = logits[0]
                logits_end = logits[1]
                # b x n_dom(+1) x nclasses
                start_preds = softmax(logits_start)
                end_preds = softmax(logits_end)
            else:
                logits_start = logits_shared_start
                logits_end = logits_shared_end
                # b x n_dom(+1) x nclasses
                start_preds = softmax(logits_start)
                end_preds = softmax(logits_end)
        else:
            logits_private = [
                self.domain_experts[d](
                    input_ids, attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds,
                    output_attentions=output_attentions, output_hidden_states=output_hidden_states,
                    return_dict=return_dict)
                for d in range(self.n_domains)
            ]
            logits_private_start = [log_private[0] for log_private in logits_private]
            logits_private_end = [log_private[1] for log_private in logits_private]
            logits_start = logits_private_start + [logits_shared_start]
            logits_end = logits_private_end + [logits_shared_end]
            if return_logits:
                return (logits_start, logits_end)
            attn = torch.FloatTensor(self.weights).view(1, -1, 1)
            # b x n_dom(+1) x nclasses
            start_preds = torch.stack([softmax(logs) for logs in logits_start], dim=1)
            end_preds = torch.stack([softmax(logs) for logs in logits_end], dim=1)
            # Apply attention
            start_preds = torch.sum(start_preds * attn, dim=1)
            end_preds = torch.sum(end_preds * attn, dim=1)

        outputs = (start_preds, end_preds,)
        loss = None
        if start_positions is not None and end_positions is not None:
            if len(start_positions.size()) > 1:
                start_positions = start_positions.squeeze(-1)
            if len(end_positions.size()) > 1:
                end_positions = end_positions.squeeze(-1)
            # sometimes the start/end positions are outside our model inputs, we ignore these terms
            ignored_index = start_preds.size(1)
            start_positions.clamp_(0, ignored_index)
            end_positions.clamp_(0, ignored_index)
            # LogSoftmax + NLLLoss
            loss_fn = nn.NLLLoss()
            xent = nn.CrossEntropyLoss()
            s_loss = loss_fn(torch.log(start_preds), start_positions)
            e_loss = loss_fn(torch.log(end_preds), end_positions)
            loss_s = (s_loss + e_loss / 2)
            loss = loss_s
            s_loss_t = xent(logits_shared_start, start_positions)
            e_loss_t = xent(logits_shared_end, end_positions)
            loss_t = (s_loss_t + e_loss_t) / 2
            loss += loss_t
            # Strong supervision on in domain
            # if domains is not None:
        return QuestionAnsweringModelOutput(
            loss=loss,
            start_logits=start_preds,
            end_logits=end_preds,
        )
When I run this code step by step, the model's outputs (start_logits, end_logits, and loss) differ between the CPU run and the GPU run.
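One way to localize where the divergence begins is to build the same model twice from the same seed and compare the forward outputs on the two devices. A minimal sketch (the `model_fn` and device arguments here are hypothetical placeholders, not part of the original code):

```python
import torch

def max_output_diff(model_fn, batch, dev_a="cpu", dev_b="cuda"):
    # Construct the model twice from the same seed so the weights match,
    # then run the identical batch through each copy and compare outputs.
    torch.manual_seed(0)
    model_a = model_fn().to(dev_a).eval()
    torch.manual_seed(0)
    model_b = model_fn().to(dev_b).eval()
    with torch.no_grad():
        out_a = model_a(batch.to(dev_a)).cpu()
        out_b = model_b(batch.to(dev_b)).cpu()
    return (out_a - out_b).abs().max().item()
```

Applying this first to `shared_bert` and then to each entry of `domain_experts` would show which sub-module first diverges; differences around 1e-6 to 1e-5 are ordinary float32 noise, while large gaps point to a real bug such as a tensor left on the wrong device.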
Note that the seeds are initialized at the beginning of the program as:
# Set all the seeds
seed = args.seed
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
and the results do not change across multiple runs.
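For completeness, per-device determinism can be tightened further on recent PyTorch versions; note that this makes each device reproducible with itself, but it does not make CPU and GPU agree with each other. A sketch, assuming PyTorch >= 1.8 and CUDA >= 10.2:

```python
import os
import torch

# Required by cuBLAS on CUDA >= 10.2 for deterministic matrix multiplies;
# must be set before the first CUDA operation.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

seed = 42
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)

# Raises an error whenever an op that has no deterministic
# implementation is used (available since PyTorch 1.8).
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
```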
[Discussion]:
- It's impossible to answer this without context details, such as the model architecture, dataset, training pipeline, etc.
- I have edited the question to include more details.
- It's not impossible to answer, since we know the OP is using cuDNN, which already makes reproducibility an issue.
- I don't have a reproducibility problem: I always get 43% accuracy when I run the program on the CPU and always 32% on the GPU. The question is about the difference between CPU and GPU.
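The last comment touches the key point: with all seeds fixed, the CPU and GPU runs are each internally reproducible, yet they still need not agree with each other, because floating-point arithmetic is not associative and the two backends reduce sums in different orders. A minimal CPU-only illustration:

```python
# Floating-point addition is order-dependent: summing the same three
# numbers left-to-right vs right-to-left gives different float results.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
print(left == right)       # False
```

Per-operation differences of this kind are tiny, but they compound across many layers and can flip argmax decisions on answer spans; a double-digit accuracy gap, however, usually suggests an additional bug worth hunting down with a step-by-step comparison.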