【Question Title】: python - multilayer perceptron, backpropagation, can't learn XOR
【Posted】: 2013-03-04 03:50:51
【Question Description】:

I am trying to implement a multilayer perceptron with backpropagation, but I still can't teach it XOR, and I also frequently get a math range error. I have looked up the learning rule and the error backpropagation method in books and on Google, but I still don't know where my mistake is.

import math
import random

def logsig(net):
    return 1/(1+math.exp(-net))

def perceptron(coef = 0.5, iterations = 10000):
    inputs = [[0,0],[0,1],[1,0],[1,1]]
    desiredOuts = [0,1,1,0]
    bias = -1
    [input.append(bias) for input in inputs] 
    weights_h1 = [random.random() for e in range(len(inputs[0]))]
    weights_h2 = [random.random() for e in range(len(inputs[0]))]
    weights_out = [random.random() for e in range(3)]
    for iteration in range(iterations):
        out = [] 
        for input, desiredOut in zip(inputs, desiredOuts):
              #1st hidden neuron
            net_h1 = sum(x * w for x, w in zip(input, weights_h1)) 
            aktivation_h1 = logsig(net_h1)
              #2nd hidden neuron
            net_h2 = sum(x * w for x, w in zip(input, weights_h2))
            aktivation_h2 = logsig(net_h2)
              #output neuron
            input_out = [aktivation_h1, aktivation_h2, bias]
            net_out = sum(x * w for x, w in zip(input_out, weights_out))
            aktivation_out = logsig(net_out)            
              #error propagation        
            error_out = (desiredOut - aktivation_out) * aktivation_out * (1 - aktivation_out)
            error_h1 = aktivation_h1 * (1-aktivation_h1) * weights_out[0] * error_out
            error_h2 = aktivation_h2 * (1-aktivation_h2) * weights_out[1] * error_out
              #learning            
            weights_out = [w + x * coef * error_out for w, x in zip(weights_out, input_out)]
            weights_h1 = [w + x * coef * error_out for w, x in zip(weights_h1, input)]
            weights_h2 = [w + x * coef * error_out for w, x in zip(weights_h2, input)]            
            out.append(aktivation_out) 
    formatedOutput = ["%.2f" % e for e in out]
    return formatedOutput

【Question Comments】:

    Tags: python neural-network backpropagation perceptron multi-layer


    【Solution 1】:

    The only thing I noticed is that you are updating weights_h1 and weights_h2 with error_out instead of error_h1 and error_h2. In other words:

    weights_h1 = [w + x * coef * error_h1 for w, x in zip(weights_h1, input)]
    weights_h2 = [w + x * coef * error_h2 for w, x in zip(weights_h2, input)] 
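
    A minimal self-contained sketch of the training loop with this one-line-per-layer fix applied. The names are kept from the question; the fixed random seed and pre-appended bias inputs are my own additions for repeatability, and convergence to XOR is not guaranteed for every seed:

```python
import math
import random

def logsig(net):
    return 1 / (1 + math.exp(-net))

def perceptron(coef=0.5, iterations=10000):
    random.seed(1)  # hypothetical seed, only so runs are repeatable
    # bias input (-1) folded into each pattern up front
    inputs = [[0, 0, -1], [0, 1, -1], [1, 0, -1], [1, 1, -1]]
    desiredOuts = [0, 1, 1, 0]
    weights_h1 = [random.random() for _ in range(3)]
    weights_h2 = [random.random() for _ in range(3)]
    weights_out = [random.random() for _ in range(3)]
    for _ in range(iterations):
        out = []
        for input, desiredOut in zip(inputs, desiredOuts):
            # forward pass
            aktivation_h1 = logsig(sum(x * w for x, w in zip(input, weights_h1)))
            aktivation_h2 = logsig(sum(x * w for x, w in zip(input, weights_h2)))
            input_out = [aktivation_h1, aktivation_h2, -1]
            aktivation_out = logsig(sum(x * w for x, w in zip(input_out, weights_out)))
            # backpropagated deltas
            error_out = (desiredOut - aktivation_out) * aktivation_out * (1 - aktivation_out)
            error_h1 = aktivation_h1 * (1 - aktivation_h1) * weights_out[0] * error_out
            error_h2 = aktivation_h2 * (1 - aktivation_h2) * weights_out[1] * error_out
            # each layer now uses its OWN delta in the update
            weights_out = [w + x * coef * error_out for w, x in zip(weights_out, input_out)]
            weights_h1 = [w + x * coef * error_h1 for w, x in zip(weights_h1, input)]
            weights_h2 = [w + x * coef * error_h2 for w, x in zip(weights_h2, input)]
            out.append(aktivation_out)
    return ["%.2f" % e for e in out]
```

    With the wrong delta, every layer is pushed in the output layer's direction regardless of its own contribution to the error, which is why the net never separates the XOR patterns.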
    

    【Discussion】:

      【Solution 2】:

      The math range error probably comes from the math.exp(-net) computation: when net is a large negative number, -net is large and positive, so exp overflows.
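
      One common way to sidestep this (my own sketch, not from the thread) is a numerically stable sigmoid that only ever calls math.exp on a non-positive argument:

```python
import math

def logsig(net):
    # Stable logistic sigmoid: math.exp is only evaluated on a
    # non-positive argument, so it can underflow to 0 but never
    # overflow and raise "math range error".
    if net >= 0:
        return 1.0 / (1.0 + math.exp(-net))
    e = math.exp(net)  # net < 0 here, so exp(net) <= 1
    return e / (1.0 + e)
```

      For example, logsig(-1000) quietly returns a value near 0 instead of raising OverflowError.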

      【Discussion】:
