【Posted】:2018-06-16 02:22:45
【Question】:
Below is the forward propagation and the first steps of a partially implemented backward propagation for a neural network:
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# two training examples stored as columns: X_train is (features x examples)
X_train = np.asarray([[1, 1], [0, 0]]).T
Y_train = np.asarray([[1], [0]]).T

hidden_size = 2
output_size = 1
learning_rate = 0.1

# parameter initialization
w1 = np.random.randn(hidden_size, 2) * 0.1
b1 = np.zeros((hidden_size, 1))
w2 = np.random.randn(output_size, hidden_size) * 0.1
b2 = np.zeros((output_size, 1))

# forward propagation
Z1 = np.dot(w1, X_train) + b1
A1 = sigmoid(Z1)
Z2 = np.dot(w2, A1) + b2
A2 = sigmoid(Z2)

# sigmoid'(Z) expressed in terms of the activations: sigmoid(Z) * (1 - sigmoid(Z))
derivativeA2 = A2 * (1 - A2)
derivativeA1 = A1 * (1 - A1)

# first steps of back propagation
error = (A2 - Y_train)
dA2 = error / derivativeA2
dZ2 = np.multiply(dA2, derivativeA2)
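For context, here is a minimal sketch of how the remaining backpropagation steps could continue from dZ2 (the gradient names dw1, db1, dw2, db2 and the 1/m averaging are my additions, not part of the original snippet):

m = X_train.shape[1]  # number of training examples (columns of X_train)

# gradients for the output layer
dw2 = np.dot(dZ2, A1.T) / m
db2 = np.sum(dZ2, axis=1, keepdims=True) / m

# propagate the error back through the hidden layer
dZ1 = np.dot(w2.T, dZ2) * derivativeA1
dw1 = np.dot(dZ1, X_train.T) / m
db1 = np.sum(dZ1, axis=1, keepdims=True) / m

# gradient descent update
w1 -= learning_rate * dw1
b1 -= learning_rate * db1
w2 -= learning_rate * dw2
b2 -= learning_rate * db2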
What is the intuition behind this:
error = (A2 - Y_train)
dA2 = error / derivativeA2
dZ2 = np.multiply(dA2, derivativeA2)
I understand that error is the difference between the current prediction A2 and the actual labels Y_train.
But why divide this error by the derivative of A2, and then multiply the result of error / derivativeA2 by derivativeA2? What is the intuition behind that?
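For reference, my best guess at the chain-rule expansion these three lines are spelling out, assuming the loss is binary cross-entropy (the code never names a loss):

$$
\begin{aligned}
L &= -\big[Y \log A_2 + (1 - Y)\log(1 - A_2)\big] \\
\frac{\partial L}{\partial A_2} &= \frac{A_2 - Y}{A_2(1 - A_2)} = \texttt{error} / \texttt{derivativeA2} \\
\frac{\partial L}{\partial Z_2} &= \frac{\partial L}{\partial A_2} \cdot \frac{\partial A_2}{\partial Z_2} = \frac{A_2 - Y}{A_2(1 - A_2)} \cdot A_2(1 - A_2) = A_2 - Y
\end{aligned}
$$

Under that reading, dA2 would be the derivative of the loss with respect to the activation, dZ2 would apply the chain rule through the sigmoid, and the A_2(1 - A_2) factors would cancel, which is why dZ2 ends up equal to error again.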
Tags: numpy machine-learning neural-network logistic-regression backpropagation