【Posted】: 2020-10-05 00:07:53
【Problem Description】:
I am trying to understand linear regression with gradient descent, but I don't understand part of the loss_gradients function below.
import numpy as np

def forward_linear_regression(X, y, weights):
    # dot product of the inputs and the weights
    N = np.dot(X, weights['W'])
    # add the bias
    P = N + weights['B']
    # compute the loss with MSE
    loss = np.mean(np.power(y - P, 2))
    forward_info = {}
    forward_info['X'] = X
    forward_info['N'] = N
    forward_info['P'] = P
    forward_info['y'] = y
    return loss, forward_info
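To make the shapes concrete, here is a minimal usage sketch (my own, not from the book; the sizes are made up for illustration), continuing from the code above:

np.random.seed(0)
X = np.random.randn(5, 3)                  # batch of 5 examples, 3 features each
y = np.random.randn(5, 1)                  # one target per example
weights = {'W': np.random.randn(3, 1),     # one weight per feature
           'B': np.random.randn(1, 1)}     # single bias, broadcast over the batch

loss, forward_info = forward_linear_regression(X, y, weights)
print(loss)                       # a scalar: the MSE over the batch
print(forward_info['P'].shape)    # (5, 1): one prediction per example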
Here is where my understanding gets stuck; I have marked my questions as comments:
def loss_gradients(forward_info, weights):
    # to update the weights, we need: dLdW = dLdP * dPdN * dNdW
    dLdP = -2 * (forward_info['y'] - forward_info['P'])
    dPdN = np.ones_like(forward_info['N'])
    dNdW = np.transpose(forward_info['X'], (1, 0))
    dLdW = np.dot(dNdW, dLdP * dPdN)
    # why do we mix a matrix product (np.dot) with element-wise products here?
    # why not simply dLdP * dPdN * dNdW?

    # to update the bias, we need: dLdB = dLdP * dPdB
    dPdB = np.ones_like(weights['B'])
    dLdB = np.sum(dLdP * dPdB, axis=0)
    # why do we sum those values along axis 0?
    # why not just dLdP * dPdB?
    return {'W': dLdW, 'B': dLdB}
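To make my question concrete, here is a small numeric check I wrote (my own sketch, not from the book), continuing from the code above, that compares the np.dot form against an explicit per-example sum:

X = np.random.randn(5, 3)        # same shapes as in the forward pass
dLdP = np.random.randn(5, 1)     # a stand-in upstream gradient, one row per example

# the matrix form from loss_gradients: (3, 5) @ (5, 1) -> (3, 1)
dLdW = np.dot(np.transpose(X, (1, 0)), dLdP)

# the same quantity written as an explicit sum over the batch
dLdW_loop = sum(X[i].reshape(3, 1) * dLdP[i] for i in range(5))

print(np.allclose(dLdW, dLdW_loop))   # True: np.dot sums per-example contributions

So np.dot appears to perform an implicit sum over the batch, and the axis-0 sum for dLdB looks like the analogous aggregation for the bias, but I would like to understand why that summation is the right operation.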
【Discussion】:
- Where did you get the loss_gradients function from? It doesn't look like you wrote it yourself. Is it from a textbook?
- Yes, it comes from S. Weidman's book Deep Learning from Scratch.
- Please include that kind of information in your post - it is also a matter of proper attribution.
Tags: python machine-learning scikit-learn linear-regression gradient-descent