【Title】: Why am I getting the wrong output from my neural network?
【Posted】: 2019-07-12 09:28:42
【Question】:

I have written this code for a neural network, but I'm not sure why the output I get is incorrect.

I created a network with two 1x1 layers (one neuron each). The input is a random number between 0 and 1, and it is also set as the network's target output. These are examples of the input (left) and received (right) values:

[0.11631148733527708] [0.52613976]

[0.19471305546308992] [0.54367643]

[0.38620499751234083] [0.58595699]

[0.507207377588539]   [0.61203927]

[0.9552623183688456]  [0.70232115]
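(Editor's diagnostic sketch, not part of the original post.) One way to see what the network has settled on is to invert the sigmoid on the reported outputs: the ratio logit(y) / x comes out roughly constant at about 0.9, suggesting the output is approximately sigmoid(w * x) for a single weight w ≈ 0.9 with a near-zero bias, rather than the identity.

```python
from math import log

# Input/output pairs as reported above.
pairs = [
    (0.11631148733527708, 0.52613976),
    (0.19471305546308992, 0.54367643),
    (0.38620499751234083, 0.58595699),
    (0.507207377588539,   0.61203927),
    (0.9552623183688456,  0.70232115),
]

# logit(y) undoes the sigmoid, recovering the pre-activation w*x + b.
for x, y in pairs:
    logit_y = log(y / (1 - y))
    print(f"x = {x:.4f}  logit(y) = {logit_y:.4f}  ratio = {logit_y / x:.4f}")
```

Every ratio lands near 0.9, which is consistent with the network converging to a single scalar weight instead of reproducing its input.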

Here is my code:

main.py

from NeuralNetwork import NeuralNetwork
from random import random

net = NeuralNetwork((1, 1))
net.learning_rate = 0.01

while True:
    v1 = [random() for i in range(0, 1)]
    actual = v1

    net.input(v1)
    net.actual(actual)

    net.calculate()
    net.backpropagate()

    print(f"{v1} {net.output()}")

NeuralNetwork.py

import numpy as np
from math import e

def sigmoid(x):
    sig_x = 1 / (1 + e**-x)
    return sig_x

def d_sigmoid(x):
    sig_x = 1 / (1 + e**-x)
    d_sig_x = np.dot(sig_x.transpose(), (1 - sig_x))
    return d_sig_x

class NeuralNetwork():
    def __init__(self, sizes):
        self.activations = [np.zeros((size, 1)) for size in sizes]
        self.values = [np.zeros((size, 1)) for size in sizes[1:]]
        self.biases = [np.zeros((size, 1)) for size in sizes[1:]]

        self.weights = [np.zeros((sizes[i + 1], sizes[i])) for i in range(0, len(sizes) - 1)]
        self.activation_functions = [(sigmoid, d_sigmoid) for i in range(0, len(sizes) - 1)]

        self.last_layer_actual = np.zeros((sizes[-1], 1))
        self.learning_rate = 0.01

    def calculate(self):
        for i, activations in enumerate(self.activations[:-1]):
            activation_function = self.activation_functions[i][0]

            self.values[i] = np.dot(self.weights[i], activations) + self.biases[i]
            self.activations[i + 1] = activation_function(self.values[i])

    def backpropagate(self):
        current = 2 * (self.activations[-1] - self.last_layer_actual)
        last_weights = 1

        for i, weights in enumerate(self.weights[::-1]):
            d_activation_func = self.activation_functions[-i - 1][1]

            current = np.dot(last_weights, current)
            current = np.dot(current, d_activation_func(self.values[-i - 1]))

            weights_change = np.dot(current, self.activations[-i - 2].transpose())
            weights -= weights_change * self.learning_rate

            self.biases[-i - 1] -= current * self.learning_rate

            last_weights = weights.transpose()

    def input(self, network_input):
        self.activations[0] = np.array(network_input).reshape(-1, 1)

    def output(self):
        return self.activations[-1].ravel()

    def actual(self, last_layer_actual):
        self.last_layer_actual = np.array(last_layer_actual).reshape(-1, 1)

【Comments】:

    Tags: python deep-learning recurrent-neural-network backpropagation


    【Solution 1】:

    I just realized that the sigmoid function is not linear.

    So for every output to equal its input, the value required of the single weight cannot be constant.

    It's that simple.
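    (Editor's sketch illustrating the point above, not part of the original answer.) For sigmoid(w * x) to equal x, the weight would have to be w = logit(x) / x, where logit is the sigmoid's inverse. That quantity changes with x, so no single constant weight can map every input to itself:

```python
from math import log

def logit(y):
    # Inverse of the sigmoid: if sigmoid(v) == y, then v == logit(y).
    return log(y / (1 - y))

# The weight that would satisfy sigmoid(w * x) == x for each input x:
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"x = {x}: required w = {logit(x) / x:.3f}")
```

    The required weight swings from large negative values for small inputs to large positive values for inputs near 1, which is exactly why a single sigmoid neuron cannot learn the identity mapping.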

    【Comments】:
