[Title]: Backpropagation for neural network
[Posted]: 2018-05-17 04:34:11
[Question]:

I am trying to implement a simple neural network. I know there are already plenty of libraries available for this; that's not the point.

My network has only 3 layers: an input layer, a hidden layer, and an output layer.

The output layer has 8 neurons, each representing a different class.

I understand how to implement the feedforward algorithm, but I'm really struggling with backpropagation.

Here is what I have so far:

private void backPropagation(List<List<Input>> trainingData)
{
    List<Input> trainingSet = new ArrayList<Input>();
    for (int row = 0; row < trainingData.size(); row++) {           
        trainingSet = trainingData.get(row);
        //we start by getting the output of the network
        List<Double> outputs = feedFoward(trainingSet); 

        //Im using the Iris dataset, so here the desiredOutput is
        //the species where 
        // 1 : setosa
        // 2 : versicolor
        // 3 : virginica
        double desiredOutput = getDesiredOutputFromTrainingSet(trainingSet);    
        //We are getting the output neuron that fired the highest result
        //like if we have
        //Output layer:
        //Neuron 1 --> 0.001221513
        //Neuron 2 --> 0.990516510
        //Neuron 3 --> 0.452221000
        //so the network predicted that the trainingData correspond to (2) versicolor
        double highestOutput = Collections.max(outputs);
        //What our neuron should aim for
        double target = 0;

        List<Double> deltaOutputLayer = new ArrayList<Double>();
        List<List<Double>> newWeightsOutputLayer = new ArrayList<List<Double>>();
        for (int j = 0; j < outputs.size(); j++) {  
            double out = outputs.get(j);
            //Important to do j + 1 because the species classes start at 1 (1 : setosa, 2: versicolor, 3:virginica)
            if(out == highestOutput && (j + 1) == desiredOutput)
                target = 0.99; //1
            else
                target = 0.01; //0

            //chain rule
            double delta = (out - target) * LogisticFonction.sigmoidPrime(out);
            deltaOutputLayer.add(delta);


            //compute the new weight value from the delta and the learning rate (eta)
            List<Double> newWeights = new ArrayList<Double>();
            for (int weightIndex = 0; weightIndex < _outputLayer.get(j).get_weigths().size(); weightIndex++) {
                double gradient = delta * _outputsAfterActivationHiddenLayer.get(weightIndex);
                double newWeight = _outputLayer.get(j).get_weigths().get(weightIndex) - (_learningRate * gradient);
                newWeights.add(newWeight);
            }
            newWeightsOutputLayer.add(newWeights);  
        }

        //hidden layer
        double totalError = 0;
        for (int i = 0; i < _neuronsHiddenLayer.size(); i++) {
            for (int j = 0; j < deltaOutputLayer.size(); j++) {
                double wi = _outputLayer.get(j).get_weigths().get(i);
                double delta = deltaOutputLayer.get(j);
                double partialError = wi * delta;
                totalError += partialError;
            }

            double z = _outputsAfterActivationHiddenLayer.get(i);
            double errorNeuron = LogisticFonction.sigmoidPrime(z);

            List<Double> newWeightsHiddenLayer = new ArrayList<Double>();

            for (int k = 0; k < _neuronsHiddenLayer.get(i).get_weigths().size(); k++) {
                double in = _neuronsHiddenLayer.get(i).get_inputs().get(k);
                double gradient =  totalError * errorNeuron * in;
                double oldWeigth = _neuronsHiddenLayer.get(i).get_weigths().get(k);
                double newWeigth = oldWeigth - (_learningRate * gradient);
                _neuronsHiddenLayer.get(i).get_weigths().set(k, newWeigth);
                newWeightsHiddenLayer.add(newWeigth);
            }
        }


        //then update the weights of the output layer with the new values.
        for (int i = 0; i < newWeightsOutputLayer.size(); i++) {
            List<Double> newWeigths = newWeightsOutputLayer.get(i);
            _outputLayer.get(i).set_weigths(newWeigths);
        }
    }   
}

I have tried testing it with the Iris dataset: https://en.wikipedia.org/wiki/Iris_flower_data_set

But my results are very inconsistent, which leads me to believe there is a bug in my backpropagation algorithm.

If anyone can spot a major flaw, please let me know!

Thanks a lot.

[Comments]:

    Tags: java algorithm machine-learning neural-network backpropagation


    [Answer 1]:

    In this part of the code:

    if(out == highestOutput && (j + 1) == desiredOutput)
         target = 0.99; //1
    else
         target = 0.01; //0
    

    The neuron's target output is 0.99 only when the condition (out == highestOutput && (j + 1) == desiredOutput) holds. In other words, you only set a target of 0.99 when the feedforward pass happens to fire highest on the same neuron as the training example's label. That is incorrect.

    The condition here should just be (j + 1) == desiredOutput. Remove the out == highestOutput check. The target should be 0.99 for the desired output neuron regardless of whether the feedforward pass produced its highest value there. Here is the corrected code:

    if((j + 1) == desiredOutput)
         target = 0.99; //1
    else
         target = 0.01; //0
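    To make the corrected target assignment concrete, here is a minimal standalone sketch. The class name TargetDemo, the sigmoidPrime helper, and the hard-coded outputs are illustrative stand-ins for the poster's LogisticFonction and network state; only the target rule and the chain-rule delta come from the question and this answer.

```java
import java.util.ArrayList;
import java.util.List;

public class TargetDemo {
    // Derivative of the sigmoid expressed in terms of its own output:
    // if out = sigmoid(z), then d(out)/dz = out * (1 - out).
    static double sigmoidPrime(double out) {
        return out * (1.0 - out);
    }

    // Soft one-hot encoding: 0.99 for the desired class, 0.01 everywhere
    // else. desiredOutput uses the 1-based labels from the question
    // (1: setosa, 2: versicolor, 3: virginica), hence the (j + 1).
    static List<Double> targetsFor(int desiredOutput, int numClasses) {
        List<Double> targets = new ArrayList<>();
        for (int j = 0; j < numClasses; j++) {
            targets.add((j + 1) == desiredOutput ? 0.99 : 0.01);
        }
        return targets;
    }

    public static void main(String[] args) {
        // Example activations from the question's comment block.
        List<Double> outputs = List.of(0.001221513, 0.990516510, 0.452221000);
        int desiredOutput = 2; // versicolor

        List<Double> targets = targetsFor(desiredOutput, outputs.size());
        for (int j = 0; j < outputs.size(); j++) {
            double out = outputs.get(j);
            // Same chain-rule delta as in the question's output-layer loop.
            double delta = (out - targets.get(j)) * sigmoidPrime(out);
            System.out.printf("neuron %d: target=%.2f delta=%+.6f%n",
                              j + 1, targets.get(j), delta);
        }
    }
}
```

    Note that the target vector depends only on the label, never on which neuron fired highest: even when the network already predicts the right class, the non-winning neurons are still pushed toward 0.01 and the winning one toward 0.99.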
    

    [Comments]:
