Posted: 2021-05-16 12:57:01
Question:
I'm implementing a policy-gradient method in PyTorch. I want to move the network update into the loop, but it stops working when I do. I'm still new to PyTorch, so my apologies if the explanation is obvious.
Here is the original, working code:
self.policy.optimizer.zero_grad()
G = T.tensor(G, dtype=T.float).to(self.policy.device)
loss = 0
for g, logprob in zip(G, self.action_memory):
    loss += -g * logprob
loss.backward()
self.policy.optimizer.step()
And after the change:
G = T.tensor(G, dtype=T.float).to(self.policy.device)
loss = 0
for g, logprob in zip(G, self.action_memory):
    loss = -g * logprob
    self.policy.optimizer.zero_grad()
    loss.backward()
    self.policy.optimizer.step()
I get this error:
File "g:\VScode_projects\pytorch_shenanigans\policy_gradient.py", line 86, in learn
loss.backward()
File "G:\Anaconda3\envs\pytorch_env\lib\site-packages\torch\tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "G:\Anaconda3\envs\pytorch_env\lib\site-packages\torch\autograd\__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 4]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
I've read that this RuntimeError usually has to do with having to clone something, because a tensor is being reused in the very computation that produced it, but I can't figure out what is wrong in my case.
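The same traceback can be reproduced with a small standalone sketch (this is an illustrative example, not the asker's actual network; names like `net` and `opt` are made up). The key ingredient is that the log-probabilities are computed once, outside the loop, so every iteration's `backward()` walks the same saved graph, while `opt.step()` updates the weights in place between those calls:

```python
import torch

torch.manual_seed(0)
# Two layers, so backward through the second layer needs its (saved) weight.
net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Linear(8, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.1)

x = torch.randn(3, 4)
# Log-probabilities computed once, outside the update loop: one shared graph.
logprobs = torch.log_softmax(net(x), dim=1)[:, 0]

caught = None
for g, logprob in zip([1.0, 2.0, 3.0], logprobs):
    opt.zero_grad()
    loss = -g * logprob
    try:
        # retain_graph=True so the shared graph survives the first backward().
        loss.backward(retain_graph=True)
    except RuntimeError as e:
        caught = e
        break
    # In-place weight update: bumps the version counter of tensors the saved
    # graph still references, so the NEXT backward() over that graph fails.
    opt.step()

print(caught)
```

The first `backward()` succeeds; `opt.step()` then mutates the second layer's weight in place, and the second `backward()` raises the "modified by an inplace operation" RuntimeError, because the saved version of that weight no longer matches.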
Comments: