【Posted at】: 2019-04-15 19:07:09
【Problem description】:
In Michael Nielsen's online book on artificial neural networks, http://neuralnetworksanddeeplearning.com, he provides the following code:
def update_mini_batch(self, mini_batch, eta):
    """Update the network's weights and biases by applying
    gradient descent using backpropagation to a single mini batch.
    The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
    is the learning rate."""
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [w-(eta/len(mini_batch))*nw
                    for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]
I can't make sense of the part involving nabla_b and nabla_w.
If delta_nabla_b and delta_nabla_w are the gradients of the cost function, why do we add them here to the existing values of nabla_b and nabla_w?
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
Shouldn't we instead just define
nabla_b, nabla_w = self.backprop(x, y)
and update the weight and bias matrices directly?
Is the reason we build up nabla_b and nabla_w that we want to average the gradient over the mini-batch, so they are matrices holding the sums of the per-example gradients?
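If that reading is right, then accumulating the per-example gradients and scaling the sum by eta/len(mini_batch) should produce exactly the same step as moving along eta times the mean gradient. A minimal NumPy sketch (my own illustration, not code from the book; the gradient values are made up) checking that equivalence numerically:

import numpy as np

eta = 3.0
# Hypothetical per-example gradients for a single 2-parameter layer.
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
m = len(grads)

# Accumulate then scale, as update_mini_batch does.
nabla = np.zeros(2)
for g in grads:
    nabla = nabla + g
step_from_sum = (eta / m) * nabla

# Step along the mean gradient directly.
step_from_mean = eta * (sum(grads) / m)

assert np.allclose(step_from_sum, step_from_mean)

So the loop would build the sum, and the eta/len(mini_batch) factor in the update would turn that sum into an average.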
【Question discussion】:
Tags: machine-learning neural-network deep-learning backpropagation gradient-descent