【Question Title】: Correct backpropagation in simple perceptron
【Posted on】: 2019-09-28 00:10:10
【Question Description】:

Given the simple OR gate problem:

or_input = np.array([[0,0], [0,1], [1,0], [1,1]])
or_output = np.array([[0,1,1,1]]).T

If we train a simple single-layer perceptron (without backpropagation), we can do something like this:

import numpy as np
np.random.seed(0)

def sigmoid(x): # Squashes each value into the range (0, 1).
    return 1 / (1 + np.exp(-x))

def cost(predicted, truth):
    return (truth - predicted)**2

or_input = np.array([[0,0], [0,1], [1,0], [1,1]])
or_output = np.array([[0,1,1,1]]).T

# Define the shape of the weight vector.
num_data, input_dim = or_input.shape
# Define the shape of the output vector. 
output_dim = len(or_output.T)

num_epochs = 50 # No. of times to iterate.
learning_rate = 0.03 # How large a step to take per iteration.

# Let's standardize and call our inputs X and outputs Y
X = or_input
Y = or_output
W = np.random.random((input_dim, output_dim))

for _ in range(num_epochs):
    layer0 = X
    # Forward propagation.
    # Inside the perceptron, Step 2. 
    layer1 = sigmoid(np.dot(X, W))

    # How much did we miss in the predictions?
    cost_error = cost(layer1, Y)

    # update weights
    W +=  - learning_rate * np.dot(layer0.T, cost_error)

# Expected output.
print(Y.tolist())
# On the training data
print([[int(prediction > 0.5)] for prediction in layer1])

[Out]:

[[0], [1], [1], [1]]
[[0], [1], [1], [1]]

With backpropagation, to compute d(cost)/d(X), are the following steps correct?

  • Compute the layer1 error by multiplying the cost error by the derivative of the cost

  • Then compute the layer1 delta by multiplying the layer1 error by the derivative of the sigmoid

  • Then do a dot product between the inputs and the layer1 delta to get the differential, i.e. d(cost)/d(X)

d(cost)/d(X) is then multiplied by the negative of the learning rate to carry out gradient descent.
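For reference, here is a minimal chain-rule sketch, assuming the squared-error cost and sigmoid defined above (the gradient actually applied in the update, via layer0.T, is the one with respect to W):

\hat{y} = \sigma(XW), \qquad C = (y - \hat{y})^2

\frac{dC}{dW} = X^{T}\left[\frac{\partial C}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial (XW)}\right] = X^{T}\left[-2\,(y - \hat{y}) \cdot \hat{y}\,(1 - \hat{y})\right]

Here -2(y - \hat{y}) is the derivative of the cost with respect to the prediction, and \hat{y}(1 - \hat{y}) corresponds to sigmoid_derivative(layer1).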

num_epochs = 0 # No. of times to iterate.
learning_rate = 0.03 # How large a step to take per iteration.

# Let's standardize and call our inputs X and outputs Y
X = or_input
Y = or_output
W = np.random.random((input_dim, output_dim))

for _ in range(num_epochs):
    layer0 = X
    # Forward propagation.
    # Inside the perceptron, Step 2. 
    layer1 = sigmoid(np.dot(X, W))

    # How much did we miss in the predictions?
    cost_error = cost(layer1, Y)

    # Back propagation.
    # multiply how much we missed from the gradient/slope of the cost for our prediction.
    layer1_error = cost_error * cost_derivative(cost_error)

    # multiply how much we missed by the gradient/slope of the sigmoid at the values in layer1
    layer1_delta = layer1_error * sigmoid_derivative(layer1)

    # update weights
    W +=  - learning_rate * np.dot(layer0.T, layer1_delta)

In that case, should the implementations of cost_derivative and sigmoid_derivative look like the following?

import numpy as np
np.random.seed(0)

def sigmoid(x): # Squashes each value into the range (0, 1).
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(sx):
    # See https://math.stackexchange.com/a/1225116
    return sx * (1 - sx)

def cost(predicted, truth):
    return (truth - predicted)**2

def cost_derivative(y):
    # For a squared cost, cost = y**2,
    # the derivative is:
    # d(cost)/d(y) = 2*y
    return 2*y


or_input = np.array([[0,0], [0,1], [1,0], [1,1]])
or_output = np.array([[0,1,1,1]]).T

# Define the shape of the weight vector.
num_data, input_dim = or_input.shape
# Define the shape of the output vector. 
output_dim = len(or_output.T)

num_epochs = 5 # No. of times to iterate.
learning_rate = 0.03 # How large a step to take per iteration.

# Let's standardize and call our inputs X and outputs Y
X = or_input
Y = or_output
W = np.random.random((input_dim, output_dim))

for _ in range(num_epochs):
    layer0 = X
    # Forward propagation.
    # Inside the perceptron, Step 2. 
    layer1 = sigmoid(np.dot(X, W))

    # How much did we miss in the predictions?
    cost_error = cost(layer1, Y)

    # Back propagation.
    # multiply how much we missed from the gradient/slope of the cost for our prediction.
    layer1_error = cost_error * cost_derivative(cost_error)

    # multiply how much we missed by the gradient/slope of the sigmoid at the values in layer1
    layer1_delta = layer1_error * sigmoid_derivative(layer1)

    # update weights
    W +=  - learning_rate * np.dot(layer0.T, layer1_delta)

# Expected output.
print(Y.tolist())
# On the training data
print([[int(prediction > 0.5)] for prediction in layer1])

[Out]:

[[0], [1], [1], [1]]
[[0], [1], [1], [1]]

By the way, given the random seed, even without training W with gradient descent (or the perceptron update at all), the predictions are still correct:

import numpy as np
np.random.seed(0)

# Let's standardize and call our inputs X and outputs Y
X = or_input
Y = or_output
W = np.random.random((input_dim, output_dim))

# On the training data
predictions = sigmoid(np.dot(X, W))
[[int(prediction > 0.5)] for prediction in predictions]
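A small sketch of why that happens, reusing sigmoid, X and the shapes defined above (W_pos is an illustrative strictly positive weight choice, not from the original post): for the input [0, 0] the pre-activation is 0 and sigmoid(0) = 0.5, which fails the > 0.5 test, while any input containing a 1 gives a positive pre-activation under positive weights, so the thresholded outputs already match the OR truth table.

# Any strictly positive weights already reproduce the OR truth table,
# because sigmoid(0) = 0.5 is not > 0.5 while sigmoid(positive value) is.
W_pos = np.array([[0.01], [0.01]])  # illustrative choice, not from the original post
print([[int(prediction > 0.5)] for prediction in sigmoid(np.dot(X, W_pos))])
# [[0], [1], [1], [1]]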

【Question Comments】:

    Tags: python machine-learning backpropagation gradient-descent perceptron


    【Solution 1】:

    You are almost correct. In your implementation, you define the cost as the square of the error, which has the unfortunate consequence of always being positive. As a result, if you plot mean(cost_error) it rises slowly over the iterations while your weights keep decreasing.

    In your particular case, any weights > 0 will make it work, but if you run your implementation for enough epochs, the weights eventually turn negative and your network stops working.
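    A quick numerical check of that claim (a sketch reusing sigmoid, X, Y, input_dim and output_dim from the question; W_check and the epoch count are illustrative): with the squared cost, the update subtracts a non-negative quantity every epoch, so the weights can only shrink and eventually turn negative.

    # Sketch: run the question's original squared-cost update long enough
    # and both weights go negative, so the OR gate is no longer predicted.
    W_check = np.random.random((input_dim, output_dim))
    for _ in range(5000):  # illustrative epoch count
        layer1 = sigmoid(np.dot(X, W_check))
        cost_error = (Y - layer1) ** 2        # the squared cost from the question
        W_check += -0.03 * np.dot(X.T, cost_error)
    print(W_check)  # both entries end up negative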

    You can simply remove the square from your cost function:

    def cost(predicted, truth):
        return (truth - predicted)
    

    Now, to update the weights, you need to evaluate the gradient at the "position" of your error. So what you need is:

    d_predicted = output_errors * sigmoid_derivative(predicted_output)
    

    Then we update the weights:

    W += np.dot(X.T, d_predicted) * learning_rate
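    Put together, and assuming the definitions above, this is the classic delta rule:

    \Delta W = \eta\, X^{T}\big[(y - \hat{y}) \cdot \hat{y}(1 - \hat{y})\big]

    which is, up to a constant factor of 2, the negative gradient of (y - \hat{y})^2 with respect to W, so each step moves the weights in the direction that reduces the error.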
    

    The complete code, with the error plotted:

    import numpy as np
    import matplotlib.pyplot as plt
    np.random.seed(0)
    
    def sigmoid(x): # Squashes each value into the range (0, 1).
        return 1 / (1 + np.exp(-x))
    
    def sigmoid_derivative(sx):
        # See https://math.stackexchange.com/a/1225116
        return sx * (1 - sx)
    
    def cost(predicted, truth):
        return (truth - predicted)
    
    or_input = np.array([[0,0], [0,1], [1,0], [1,1]])
    or_output = np.array([[0,1,1,1]]).T
    
    # Define the shape of the weight vector.
    num_data, input_dim = or_input.shape
    # Define the shape of the output vector. 
    output_dim = len(or_output.T)
    
    num_epochs = 50 # No. of times to iterate.
    learning_rate = 0.1 # How large a step to take per iteration.
    
    # Let's standardize and call our inputs X and outputs Y
    X = or_input
    Y = or_output
    W = np.random.random((input_dim, output_dim))
    
    # W = [[-1],[1]] # you can try to set bad weights to see the training process
    error_list = []
    
    for _ in range(num_epochs):
        layer0 = X
        # Forward propagation.
        layer1 = sigmoid(np.dot(X, W))
    
        # How much did we miss in the predictions?
        cost_error = cost(layer1, Y)
        error_list.append(np.mean(cost_error)) # save the loss to plot later
    
        # Back propagation.
        # Evaluate the gradient: the error times the slope of the sigmoid at the prediction.
        d_predicted = cost_error * sigmoid_derivative(layer1)
    
        # update weights
        W = W + np.dot(X.T, d_predicted) * learning_rate
    
    
    # Expected output.
    print(Y.tolist())
    # On the training data
    print([[int(prediction > 0.5)] for prediction in layer1])
    
    # plot error curve : 
    plt.plot(range(num_epochs), error_list, '+b')
    plt.xlabel('Epoch')
    plt.ylabel('mean error')
    plt.show()
    

    I also added a commented-out line for setting the initial weights manually, so you can watch how the network learns.

    【Discussion】:
