【Posted】: 2019-04-19 06:27:42
【Problem Description】:
I want to accumulate gradients before doing the backward pass, so I am wondering what the correct way to do it is. According to this article, it is:
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i + 1) % accumulation_steps == 0:           # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
Whereas I expected it to be:
model.zero_grad()                                   # Reset gradients tensors
loss = 0
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss += loss_function(predictions, labels)      # Compute loss function
    if (i + 1) % accumulation_steps == 0:           # Wait for several backward steps
        loss = loss / accumulation_steps            # Normalize our loss (if averaged)
        loss.backward()                             # Backward pass
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        loss = 0
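Both snippets assume some surrounding setup that is not shown. A minimal placeholder that makes either loop runnable could look like the sketch below; the toy model, optimizer, loss, and dataset sizes are my own assumptions, not from the article:

import torch
import torch.nn as nn

# Toy stand-ins (assumptions for illustration only)
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_function = nn.MSELoss()
accumulation_steps = 4
training_set = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(16)]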
In my version I accumulate the loss itself and then divide by the number of accumulation steps to average it.
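As a sanity check, I would expect both loops to end up with identical gradients, since autograd sums new gradients into the existing .grad buffers across backward() calls. A minimal sketch verifying this on a toy linear model (the model, data, and shapes are assumptions for illustration):

import torch
import torch.nn as nn

torch.manual_seed(0)
accumulation_steps = 2
data = [(torch.randn(4, 3), torch.randn(4, 1)) for _ in range(accumulation_steps)]
loss_function = nn.MSELoss()

def grads(accumulate_loss):
    torch.manual_seed(0)                            # identical init for both runs
    model = nn.Linear(3, 1)
    if accumulate_loss:                             # my variant: one backward on the summed loss
        total = sum(loss_function(model(x), y) for x, y in data)
        (total / accumulation_steps).backward()
    else:                                           # article variant: backward per batch
        for x, y in data:
            (loss_function(model(x), y) / accumulation_steps).backward()
    return model.weight.grad.clone()

print(torch.allclose(grads(True), grads(False)))    # expected: True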
Second question: if I am right, would you expect my approach to be faster, given that it only does a backward pass once per accumulation cycle?
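One way to check rather than guess is to time the two variants directly. A rough harness, again with toy stand-ins (the model, sizes, and repeat count are assumptions):

import time
import torch
import torch.nn as nn

model = nn.Linear(512, 512)
loss_function = nn.MSELoss()
accumulation_steps = 8
data = [(torch.randn(64, 512), torch.randn(64, 512)) for _ in range(accumulation_steps)]

def backward_per_batch():
    model.zero_grad()
    for x, y in data:
        (loss_function(model(x), y) / accumulation_steps).backward()

def backward_once():
    model.zero_grad()
    # note: summing losses keeps every batch's autograd graph alive until backward()
    total = sum(loss_function(model(x), y) for x, y in data)
    (total / accumulation_steps).backward()

for fn in (backward_per_batch, backward_once):
    start = time.perf_counter()
    for _ in range(10):
        fn()
    print(fn.__name__, time.perf_counter() - start)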
【Question Discussion】:
Tags: python pytorch gradient-descent