【Question Title】: Why is the performance of my backpropagation algorithm stuck?
【Posted】: 2021-04-08 07:21:10
【Question Description】:

I am learning how to write neural networks and am currently working on a backpropagation algorithm with one input layer, one hidden layer and one output layer. The algorithm runs, and when I throw some test data like

x_train = np.array([[1., 2., -3., 10.], [0.3, -7.8, 1., 2.]])
y_train = np.array([[10, -3, 6, 1], [1, 1, 6, 1]])

into my algorithm, using the default value of 3 hidden units and the default learning rate of 10e-4,

Backprop.train(x_train, y_train, tol = 10e-1)
x_pred = Backprop.predict(x_train)

I get good results:

Tolerances: [10e-1, 10e-2, 10e-3, 10e-4, 10e-5]
Iterations: [2678, 5255, 7106, 14270, 38895]
Mean absolute error: [0.42540, 0.14577, 0.04264, 0.01735, 0.00773]
Sum of squared errors: [1.85383, 0.21345, 0.01882, 0.00311, 0.00071].

Each time, the sum of squared errors goes down by one decimal place, just as I expect. But when I use test data like this,

X_train = np.random.rand(20, 7)
Y_train = np.random.rand(20, 2)

Tolerances: [10e+1, 10e-0, 10e-1, 10e-2, 10e-3]
Iterations: [11, 19, 63, 80, 7931],
Mean absolute error: [0.30322, 0.25076, 0.25292, 0.24327, 0.24255],
Sum of squared errors: [4.69919, 3.43997, 3.50411, 3.38170, 3.16057],

nothing really changes. I have checked my hidden units, gradients and weight matrices, and they are all different; the gradients really are shrinking, just as I set up the backpropagation algorithm to require:

if ( np.sum(E_hidden**2) + np.sum(E_output**2) ) < tol: 
   learning = False

where E_hidden and E_output are my gradient matrices. My question is: how can it be that, although the gradients are shrinking, the metrics stay practically unchanged for some data, and what can I do about it?
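
For reference, the metrics I report above are computed from the predictions, roughly like this (a sketch; x_pred comes from the call to Backprop.predict shown earlier):

x_pred = Backprop.predict(x_train)
mae = np.mean(np.abs(x_pred - y_train))   # mean absolute error
sse = np.sum((x_pred - y_train) ** 2)     # sum of squared errors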

My backpropagation code looks like this:

import numpy as np


class Backprop:


    def sigmoid(r):
        # Logistic sigmoid activation, applied elementwise
        return (1 + np.exp(-r)) ** (-1)

    def train(x_train, y_train, hidden_units = 3, learning_rate = 10e-4, tol = 10e-3):
        # We need y_train to be 2D. There should be as many rows as there are x_train vectors
        N = x_train.shape[0]
        I = x_train.shape[1]
        J = hidden_units 
        K = y_train.shape[1] # Number of output units

            # Add the bias units to x_train
        bias = -np.ones(N).reshape(-1,1) # Make it 2D so we can stack it
            # Make the row vector a column vector for easier use when applying matrices. Afterwards, x_train.shape = (N, I+1)
        x_train = np.hstack((x_train, bias)).T # x_train.shape = (I+1, N) -> N column vectors of respective length I+1
        
            # Create our weight matrices
        W_input = np.random.rand(J, I+1) # W_input.shape = (J, I+1)
        W_hidden = np.random.rand(K, J+1) # W_hidden.shape = (K, J+1)
        m = 0
        learning = True
        while learning:

            ##### ----- Phase 1: Forward Propagation ----- #####

                # Create the total input to the hidden units. For every training vector we get J hidden states.
            u_hidden = W_input @ x_train # u_hidden.shape = (J, N) -> N column vectors of respective length J
                # Create the hidden units
            h = Backprop.sigmoid(u_hidden) # h.shape = (J, N)
                # Create the total input to the output units
            
            bias = -np.ones(N)
            h = np.vstack((h, bias)) # h.shape = (J+1, N)
            u_output = W_hidden @ h # u_output.shape = (K, N). For every training vector we get K output states. 
                # In the code itself the following is not necessary, because, as we remember from the above, the output activation function
                # is the identity function, but let's do it anyway for the sake of clarity
            y_pred = u_output.copy() # Now, y_pred has the same shape as y_train
            
            
            ##### ----- Phase 2: Backward Propagation ----- #####

                # We will calculate the delta terms now and begin with the delta term of the output unit
                
                # We will transpose several times now. Before, having column vectors was convenient, because matrix multiplication is
                # more intuitive then. But now we need to work with indices and need the right dimensions. Yes, loops are inefficient,
                # but they provide much more clarity, so that we can easily connect the theory above with our code.

                # We don't need the delta_output right now, because we will update W_hidden with a loop. But we need it for the delta term 
                # of the hidden unit.
            delta_output = y_pred.T - y_train 
                # Calculate our error gradient for the output units
            E_output = np.zeros((K, J+1))
            for k in range(K):
                for j in range(J+1):
                    for n in range(N):
                        E_output[k, j] += (y_pred.T[n, k] - y_train[n, k]) * h.T[n, j] 
                # Calculate our change in W_hidden
            W_delta_output = -learning_rate * E_output
                # Update the old weights
            W_hidden = W_hidden + W_delta_output

                # Let's calculate the delta term of the hidden unit
            delta_hidden = np.zeros((N, J+1))
            for n in range(N):
                for j in range(J+1):
                    for k in range(K):
                        delta_hidden[n, j] += h.T[n, j]*(1 - h.T[n, j]) * delta_output[n, k] * W_delta_output[k, j]

                # Calculate our error gradient for the hidden units, but exclude the hidden bias unit, because W_input and the hidden bias
                # unit don't share any relation at all
            E_hidden = np.zeros((J, I+1))
            for j in range(J):
                for i in range(I+1):
                    for n in range(N):
                        E_hidden[j, i] += delta_hidden[n, j]*x_train.T[n, i]
                # Calculate our change in W_input
            W_delta_hidden = -learning_rate * E_hidden
            W_input = W_input + W_delta_hidden
            
            if ( np.sum(E_hidden**2) + np.sum(E_output**2) ) < tol: 
               learning = False
            
            m += 1 # Iteration count
            
        Backprop.weights = [W_input, W_hidden]
        Backprop.iterations = m
        Backprop.errors = [E_hidden, E_output]


    ##### ----- #####


    def predict(x):
        N = x.shape[0]
            # x1 = Backprop.weights[1][:,:-1] @ Backprop.sigmoid(Backprop.weights[0][:,:-1] @ x.T)
            # Trying this, we see that we really need to add the bias here as well, since we also train using a bias unit

            # Add the bias units to x
        bias = -np.ones(N).reshape(-1,1) # Make it 2D so we can stack it
            # Make the row vector a column vector for easier use when applying matrices.
        x = np.hstack((x, bias)).T
        h = Backprop.weights[0] @ x
        u = Backprop.sigmoid(h) # We need to transform the data using the sigmoidal function
        h = np.vstack((u, bias.reshape(1, -1)))

        return (Backprop.weights[1] @ h).T
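
For reference, the class is used exactly as shown at the top of the question: train() stores the learned weights (plus the iteration count and final gradients) as class attributes, and predict() reads the weights back.

x_train = np.array([[1., 2., -3., 10.], [0.3, -7.8, 1., 2.]])
y_train = np.array([[10, -3, 6, 1], [1, 1, 6, 1]])

Backprop.train(x_train, y_train, tol = 10e-1)
x_pred = Backprop.predict(x_train)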

【Question Discussion】:

  • Why reinvent the wheel? There are plenty of libraries that can do this much more effectively than you can (simpler, shorter, faster). Python is a high-level, mostly interpreted language, so scalar loops are much slower than natively optimized/vectorized C/C++ code.
  • I know, but I enjoy doing it this way; it deepens my understanding. As for efficiency: as I wrote in the code comments, this is not about efficiency. At this point my goal is clarity. Once it runs properly, I will worry about efficiency.
  • Vectorized Numpy operations should give you smaller (and probably simpler) as well as more efficient code. You can start by reading the Numpy tutorial here. You should avoid Python loops like the plague and replace them with vectorized Numpy calls (see the vectorized sketch after this list).
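
For what it is worth, a vectorized version of the gradient computations inside the training loop might look like the sketch below. It assumes the variable names and shapes used in train() above (y_pred: (K, N), y_train: (N, K), h: (J+1, N), x_train: (I+1, N)) and computes exactly what the triple loops compute, just without the Python loops:

delta_output = y_pred.T - y_train                     # (N, K)
E_output = delta_output.T @ h.T                       # (K, J+1), same as the k/j/n loop

W_delta_output = -learning_rate * E_output
W_hidden = W_hidden + W_delta_output

# Elementwise sigmoid derivative times the backpropagated output error.
# Note: this mirrors the original loop, which multiplies by W_delta_output;
# textbook backpropagation would use W_hidden here instead.
delta_hidden = (h.T * (1 - h.T)) * (delta_output @ W_delta_output)   # (N, J+1)

E_hidden = delta_hidden[:, :J].T @ x_train.T          # (J, I+1), hidden bias unit excluded
W_input = W_input - learning_rate * E_hidden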

Tags: python performance neural-network backpropagation


【Solution 1】:

I found the answer. If, in Backprop.predict, I write

output = (Backprop.weights[1] @ h).T
return output

instead of the above, everything works fine.
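
For completeness, the revised predict might then look like this (a sketch of the original method, with only the return statement split into an assignment):

    def predict(x):
        N = x.shape[0]
        # Add the bias unit to x and transpose to column vectors, as in train()
        bias = -np.ones(N).reshape(-1, 1)
        x = np.hstack((x, bias)).T
        u = Backprop.sigmoid(Backprop.weights[0] @ x)   # hidden activations
        h = np.vstack((u, bias.reshape(1, -1)))         # re-attach the bias row
        output = (Backprop.weights[1] @ h).T
        return output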

【Discussion】:
