【Posted】: 2021-01-28 23:11:19
【Problem Description】:
I understand that when calling loss.backward(), if there are multiple networks and multiple loss functions optimizing each network separately, retain_graph=True has to be specified. However, I get an error with (and without) this argument specified. Below is an MWE that reproduces the issue (on PyTorch 1.6).
import torch
from torch import nn
from torch import optim

torch.autograd.set_detect_anomaly(True)

class GRU1(nn.Module):
    def __init__(self):
        super(GRU1, self).__init__()
        self.brnn = nn.GRU(input_size=2, bidirectional=True, num_layers=1, hidden_size=100)

    def forward(self, x):
        return self.brnn(x)

class GRU2(nn.Module):
    def __init__(self):
        super(GRU2, self).__init__()
        self.brnn = nn.GRU(input_size=200, bidirectional=True, num_layers=1, hidden_size=1)

    def forward(self, x):
        return self.brnn(x)

gru1 = GRU1()
gru2 = GRU2()
gru1_opt = optim.Adam(gru1.parameters())
gru2_opt = optim.Adam(gru2.parameters())
criterion = nn.MSELoss()

for i in range(100):
    gru1_opt.zero_grad()
    gru2_opt.zero_grad()
    vector = torch.randn((15, 100, 2))
    gru1_output, _ = gru1(vector)  # (15, 100, 200)
    loss_gru1 = criterion(gru1_output, torch.randn((15, 100, 200)))
    loss_gru1.backward(retain_graph=True)
    gru1_opt.step()
    gru1_output, _ = gru1(vector)  # (15, 100, 200)
    gru2_output, _ = gru2(gru1_output)  # (15, 100, 2)
    loss_gru2 = criterion(gru2_output, torch.randn((15, 100, 2)))
    loss_gru2.backward(retain_graph=True)
    gru2_opt.step()
    print(f"GRU1 loss: {loss_gru1.item()}, GRU2 loss: {loss_gru2.item()}")
The error with retain_graph set to True is:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [100, 300]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
and the error without the argument is:
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
which is expected.
Please point out what needs to change in the code above so that training can proceed. Any help is appreciated.
【Discussion】:
- Have you tried putting both optimizer.step() calls at the end? Basically, first perform both backward passes with retain_graph=True, and at the end you can step both optimizers. Also, even for a minimal verifiable example, it is better to declare the optimizers and models outside the loop (for correctness and to avoid confusion).
- @akshayk07 That would work, but it defeats the purpose. What I want is to update the network's parameters and then get a "better estimate" from it after the first call to step(). And yes, thanks for pointing that out; I have updated the MWE accordingly.
- @akshayk07 My apologies, I had missed a line. Does it make sense now? What I am doing is updating network 1 in the first pass and then using its updated output to improve network 2 in the second pass.
- I think you can detach() the output of gru1 the second time you use it, since your second optimizer only updates gru2, i.e., only gru2's gradients are needed (see the sketch after this discussion).
- @akshayk07 That works! Could you write it up as an answer so I can mark it as accepted?
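Following the detach() suggestion in the discussion, here is a minimal sketch of a training loop that avoids both errors. It is an assumption based on the comments rather than the poster's final code, and it reuses the models, optimizers, and criterion defined in the question. Detaching gru1's output before feeding it to gru2 cuts the autograd graph at that point, so loss_gru2.backward() only produces gradients for gru2 and never revisits gru1's graph.

for i in range(100):
    gru1_opt.zero_grad()
    gru2_opt.zero_grad()
    vector = torch.randn((15, 100, 2))

    # First pass: train gru1 on its own loss. Its graph is backed through
    # exactly once, so retain_graph is no longer needed.
    gru1_output, _ = gru1(vector)  # (15, 100, 200)
    loss_gru1 = criterion(gru1_output, torch.randn((15, 100, 200)))
    loss_gru1.backward()
    gru1_opt.step()

    # Second pass: recompute with the updated gru1, but detach its output
    # so that only gru2's parameters receive gradients from loss_gru2.
    gru1_output, _ = gru1(vector)  # (15, 100, 200)
    gru2_output, _ = gru2(gru1_output.detach())  # (15, 100, 2)
    loss_gru2 = criterion(gru2_output, torch.randn((15, 100, 2)))
    loss_gru2.backward()
    gru2_opt.step()

    print(f"GRU1 loss: {loss_gru1.item()}, GRU2 loss: {loss_gru2.item()}")

Since the second forward pass through gru1 exists only to produce input for gru2, an equally valid choice would be to run it under torch.no_grad(), which skips building the unused graph entirely instead of detaching its output afterwards.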
Tags: python-3.x machine-learning deep-learning pytorch torch