[Question Title]: XOR Neural Net converges to 0.5
[Posted]: 2018-11-04 09:56:49
[Question]:

Despite validating my network against this example, which suggests my backpropagation and forward propagation work correctly, I can't find what's wrong with my neural network. After training on XOR, my network returns an output of roughly 0.5 regardless of the input. In other words, the network seems to minimize the error as best it can without seeing any correlation between the inputs and the outputs. Since a single iteration of backpropagation appears to work fine, my instinct is that the problem lies in the iterations that follow. However, nothing obvious would cause this, which leaves me confused.

I've looked at other threads with similar problems, but in most cases their errors were either highly specific to how they set up their network, or their parameters (such as the learning rate or number of epochs) were simply wrong. Is anyone familiar with a case like this?

public class Net
{
int[] sizes;
double LEARNING_RATE;

double[][][] weights;
double[][] bias;

Random rand = new Random();  //53489085

public Net(int[] sizes_, double LEARNING_RATE_)
{
    LEARNING_RATE = LEARNING_RATE_;
    sizes = sizes_;

    int numInputs = sizes[0];
    double range = 1.0 / Math.sqrt(numInputs);

    bias = new double[sizes.length - 1][];
    weights = new double[sizes.length - 1][][];

    for(int w_layer = 0; w_layer < weights.length; w_layer++)
    {
        bias[w_layer] = new double[sizes[w_layer+1]];
        weights[w_layer] = new double[sizes[w_layer+1]][sizes[w_layer]];
        for(int j = 0; j < weights[w_layer].length; j++)
        {
            bias[w_layer][j] = 2*range*rand.nextDouble() - range;
            for(int i = 0; i < weights[w_layer][0].length; i++)
            {
                weights[w_layer][j][i] = 2*range*rand.nextDouble() - range;
            }
        }
    }
}

public double[] evaluate(double[] image_vector)
{
    return forwardPass(image_vector)[sizes.length-1];
}

public double totalError(double[][] expec, double[][] actual)
{
    double sum = 0;
    for(int i = 0; i < expec.length; i++)
    {
        sum += error(expec[i], evaluate(actual[i]));
    }
    return sum / expec.length;
}

private double error(double[] expec, double[] actual)
{
    double sum = 0;
    for(int i = 0; i < expec.length; i++)
    {
        double del = expec[i] - actual[i];
        sum += 0.5 * del * del;
    }
    return sum;
}

public void backpropagate(double[][] image_vector, double[][] outputs)
{
    double[][][] deltaWeights = new double[weights.length][][];
    double[][] deltaBias = new double[weights.length][];

    for(int w = 0; w < weights.length; w++)
    {
        deltaBias[w] = new double[bias[w].length];
        deltaWeights[w] = new double[weights[w].length][];
        for(int j = 0; j < weights[w].length; j++)
        {
            deltaWeights[w][j] = new double[weights[w][j].length];
        }
    }

    for(int batch = 0; batch < image_vector.length; batch++)
    {
        double[][] neuronVals = forwardPass(image_vector[batch]);

        /* OUTPUT DELTAS */
        int w_layer = weights.length-1;

        double[] deltas = new double[weights[w_layer].length];

        for(int j = 0; j < weights[w_layer].length; j++)
        {
            double actual = neuronVals[w_layer + 1][j]; 
            double expec = outputs[batch][j];

            double deltaErr = actual - expec;
            double deltaSig = actual * (1 - actual);

            double delta = deltaErr * deltaSig;
            deltas[j] = delta;

            deltaBias[w_layer][j] += delta;
            for(int i = 0; i < weights[w_layer][0].length; i++)
            {
                deltaWeights[w_layer][j][i] += delta * neuronVals[w_layer][i];
            }
        }

        w_layer--;
        /* REST OF THE DELTAS */
        while(w_layer >= 0)
        {   

            double[] nextDeltas = new double[weights[w_layer].length];
            for(int j = 0; j < weights[w_layer].length; j++)
            {
                double outNeur = neuronVals[w_layer+1][j];
                double deltaSig = outNeur * (1 - outNeur);

                double sum = 0;
                for(int i = 0; i < weights[w_layer+1].length; i++)
                {
                    sum += weights[w_layer+1][i][j] * deltas[i];
                }

                double delta = sum * deltaSig;
                nextDeltas[j] = delta;

                deltaBias[w_layer][j] += delta;
                for(int i = 0; i < weights[w_layer][0].length; i++)
                {
                    deltaWeights[w_layer][j][i] += delta * neuronVals[w_layer][i];
                }
            }
            deltas = nextDeltas;

            w_layer--;
        }
    }

    for(int w_layer = 0; w_layer < weights.length; w_layer++)
    {
        for(int j = 0; j < weights[w_layer].length; j++)
        {

            deltaBias[w_layer][j] /= (double) image_vector.length;

            bias[w_layer][j] -= LEARNING_RATE * deltaBias[w_layer][j];

            for(int i = 0; i < weights[w_layer][j].length; i++)
            {   
                deltaWeights[w_layer][j][i] /= (double) image_vector.length; // average of batches
                weights[w_layer][j][i] -= LEARNING_RATE * deltaWeights[w_layer][j][i];
            }
        }
    }
}

public double[][] forwardPass(double[] image_vector)
{
    double[][] outputs = new double[sizes.length][];

    double[] inputs = image_vector;

    for(int w = 0; w < weights.length; w++)
    {
        outputs[w] = inputs;

        double[] output = new double[weights[w].length];
        for(int j = 0; j < weights[w].length; j++)
        {
            output[j] = bias[w][j];
            for(int i = 0; i < weights[w][j].length; i++)
            {
                output[j] += weights[w][j][i] * inputs[i];
            }
            output[j] = sigmoid(output[j]);
        }
        inputs = output;
    }

    outputs[outputs.length-1] = inputs.clone();

    return outputs;
}

static public double sigmoid(double val)
{
    return 1.0 / (1.0 + Math.exp(-val));
}
}

My XOR class looks like this. Given how simple it is, the error is unlikely to be in this part, but I figured it couldn't hurt to post it in case I have some fundamental misunderstanding of how XOR works. My network is set up to take examples in batches, but as you can see below for this particular example, I pass it a batch of one, effectively not using batches at all.

public class SingleLayer {

static int numEpochs = 10000;
static double LEARNING_RATE = 0.001;
static int[] sizes = new int[] {2, 2, 1};

public static void main(String[] args)
{

    System.out.println("Initializing randomly generate neural net...");
    Net n = new Net(sizes, LEARNING_RATE);
    System.out.println("Complete!");

    System.out.println("Loading dataset...");

    double[][] inputs = new double[4][2];
    double[][] outputs = new double[4][1];

    inputs[0] = new double[] {1, 1};
    outputs[0] = new double[] {0};

    inputs[1] = new double[] {1, 0};
    outputs[1] = new double[] {1};

    inputs[2] = new double[] {0, 1};
    outputs[2] = new double[] {1};

    inputs[3] = new double[] {0, 0};
    outputs[3] = new double[] {0};

    System.out.println("Complete!");

    System.out.println("STARTING ERROR: " + n.totalError(outputs, inputs));
    for(int epoch = 0; epoch < numEpochs; epoch++)
    {
        double[][] in = new double[1][2];
        double[][] out = new double[1][1];
        int num = (int)(Math.random()*inputs.length);

        in[0] = inputs[num];
        out[0] = outputs[num];

        n.backpropagate(inputs, outputs);
        System.out.println("ERROR: " + n.totalError(out, in));
    }

    System.out.println("Prediction After Training: " + n.evaluate(inputs[0])[0] + "  Expected: " + outputs[0][0]);
    System.out.println("Prediction After Training: " + n.evaluate(inputs[1])[0] + "  Expected: " + outputs[1][0]);
    System.out.println("Prediction After Training: " + n.evaluate(inputs[2])[0] + "  Expected: " + outputs[2][0]);
    System.out.println("Prediction After Training: " + n.evaluate(inputs[3])[0] + "  Expected: " + outputs[3][0]);
}
}

Can anyone offer some insight into what might be going wrong? My parameters are well defined, and I've followed all the recommendations on how to initialize the weights, choose the learning rate, and so on. Thanks!

[Comments]:

  • Have you run this in a debugger with breakpoints to determine why the output converges to 0.5? Please explain what debugging you have done; otherwise the question amounts to "here is my code, please debug it for me," which is off topic. Also visit the help center and read How to Ask and minimal reproducible example.
  • As I said, I verified that backprop and forward prop work correctly and that the XOR inputs and outputs are correct, but beyond that I don't know what else to provide. If you have suggestions on how to debug it, those are very welcome too! I know how to set breakpoints and so on, but that isn't decisive for finding the error in a neural net whose math checks out. The question is why the network approaches a solution that seems unable to correlate the inputs with the expected outputs. Thanks.
  • Well, it has to be one of two things: either the weights are such that the output is 0.5 regardless of the input, or "backpropagation and forward propagation work correctly" isn't actually true.
  • I suppose what I meant by "working correctly" is that the math checks out, unless the link I provided somehow doesn't work either. I stepped through the weights for the details he gives in his example, and my net spat out all the same adjusted weights and answers as his. That suggests to me that, mathematically, the backpropagation and forward propagation algorithms are correct. I also ran several hand-worked tests to verify that forward prop works. The output is 0.5 regardless of training, so the network is approaching a solution, just not the right one. Thanks.
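One way to back up a "the math checks out" claim mechanically is a finite-difference check. This is a hedged, self-contained sketch (the class name is illustrative) comparing the analytic sigmoid derivative s*(1-s), which the deltas in `backpropagate` rely on, against a central-difference estimate:

```java
// Sketch: verify that the analytic sigmoid derivative s*(1-s) used in the
// backprop deltas agrees with a numerical (central-difference) estimate.
public class SigmoidGradCheck {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double x = 0.7;
        double eps = 1e-6;
        double s = sigmoid(x);
        double analytic = s * (1 - s);                                        // derivative used in backprop
        double numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps);   // central difference
        System.out.println(Math.abs(analytic - numeric) < 1e-8);              // prints true
    }
}
```

The same pattern extends to whole-network gradient checking: perturb one weight by ±eps, recompute the total error, and compare the slope against the accumulated delta for that weight.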

Tags: java machine-learning neural-network xor


[Solution 1]:

You are only presenting the first 3 inputs to your neural network, because the following line is wrong:

int num = (int)(Math.random() * 3);

Change it to

int num = (int)(Math.random() * inputs.length);

to use all 4 possible inputs.
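The difference is easy to demonstrate in isolation. A small sketch (class name illustrative) showing that `(int)(Math.random() * 3)` can never select index 3, so the `{0, 0} -> 0` pattern would never be sampled:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: (int)(Math.random() * 3) only yields 0, 1, or 2, since
// Math.random() is in [0, 1). Scaling by the array length instead
// makes every index reachable.
public class SamplingCheck {
    public static void main(String[] args) {
        int length = 4;  // inputs.length in the question's code
        Set<Integer> seen = new HashSet<>();
        for (int i = 0; i < 100_000; i++) {
            seen.add((int) (Math.random() * 3));
        }
        System.out.println(seen.contains(3));    // prints false: index 3 is unreachable

        seen.clear();
        for (int i = 0; i < 100_000; i++) {
            seen.add((int) (Math.random() * length));
        }
        System.out.println(seen.size() == 4);    // true with overwhelming probability
    }
}
```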

[Comments]:

  • Thanks, that was indeed incorrect; unfortunately it didn't change anything. I also tried my network on the MNIST database and got similar results: the network approaches a "solution" that reduces the error as much as possible without recognizing any correlation between the inputs and the expected outputs.
[Solution 2]:

I figured it out. I wasn't running enough epochs. That seems a little silly to me, but this visualization revealed that the network hovers around an answer of ~0.5 for a long time before driving the error below 0.00001.
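To illustrate the plateau, here is a hedged, self-contained sketch of a minimal 2-4-1 sigmoid network trained full-batch on XOR. It is not the asker's Net class; the hidden size, seed, and learning rate are my own choices. With small random weights, every output starts near 0.5, so the mean error typically sits near 0.125 for many epochs before it finally drops:

```java
import java.util.Random;

// Sketch: a tiny 2-4-1 sigmoid net trained full-batch on XOR, logging the
// mean error periodically. A run stopped during the early plateau looks
// exactly like "the network outputs 0.5 for everything."
public class XorPlateauDemo {
    static double sig(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        Random rand = new Random(1);             // fixed seed for repeatability
        double[][] X = {{1, 1}, {1, 0}, {0, 1}, {0, 0}};
        double[] Y = {0, 1, 1, 0};
        int H = 4;                               // spare hidden units help avoid bad minima
        double lr = 0.1;
        double[][] w1 = new double[H][2];
        double[] b1 = new double[H], w2 = new double[H];
        double b2 = 0;
        for (int j = 0; j < H; j++) {
            b1[j] = rand.nextDouble() - 0.5;
            w2[j] = rand.nextDouble() - 0.5;
            for (int i = 0; i < 2; i++) w1[j][i] = rand.nextDouble() - 0.5;
        }
        for (int epoch = 0; epoch < 100_000; epoch++) {
            double err = 0, gb2 = 0;
            double[][] gw1 = new double[H][2];
            double[] gb1 = new double[H], gw2 = new double[H];
            for (int n = 0; n < 4; n++) {
                double[] h = new double[H];
                double out = b2;
                for (int j = 0; j < H; j++) {    // forward pass
                    h[j] = sig(w1[j][0] * X[n][0] + w1[j][1] * X[n][1] + b1[j]);
                    out += w2[j] * h[j];
                }
                out = sig(out);
                err += 0.5 * (out - Y[n]) * (out - Y[n]);
                double d = (out - Y[n]) * out * (1 - out);       // output delta
                gb2 += d;
                for (int j = 0; j < H; j++) {
                    gw2[j] += d * h[j];
                    double dh = d * w2[j] * h[j] * (1 - h[j]);   // hidden delta
                    gb1[j] += dh;
                    gw1[j][0] += dh * X[n][0];
                    gw1[j][1] += dh * X[n][1];
                }
            }
            b2 -= lr * gb2;                      // gradient descent step
            for (int j = 0; j < H; j++) {
                w2[j] -= lr * gw2[j];
                b1[j] -= lr * gb1[j];
                w1[j][0] -= lr * gw1[j][0];
                w1[j][1] -= lr * gw1[j][1];
            }
            if (epoch % 20_000 == 0) System.out.println("epoch " + epoch + "  mean error " + err / 4);
        }
    }
}
```

Watching the periodic log makes the accepted answer concrete: the error stays flat near its starting value for a long stretch, and only then does training break out of the plateau.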

[Comments]:
