【Question Title】: Optimizing a self-coded 2-layer Artificial Neural Network
【Posted】: 2017-06-19 12:15:47
【Question Description】:

I recently started learning about neural networks and decided to code my own simple 2-layer ANN and benchmark it on the MNIST dataset. I tried to program it with batch SGD, where the batch size is supplied by the user. My code is as follows:

import numpy as np

class NeuralNetwork:
    def __init__(self, inodes, hnodes, outnodes, activation_func, learning_rate):
        self.inodes = inodes
        self.hnodes = hnodes
        self.onodes = outnodes
        self.activation_function = activation_func
        self.lr = learning_rate
        self.wih = np.random.randn(self.hnodes, self.inodes) / pow(self.inodes, 0.5)
        self.who = np.random.randn(self.onodes, self.hnodes) / pow(self.hnodes, 0.5)

    def train(self, training_data, target_labels, batch=1, l2_penalty=0, verbose=False):
        batch_size = len(training_data) / batch
        print "Starting to train........"
        for i in range(batch):
            train_data_batch = training_data[batch_size*i : batch_size*(i+1)]
            label_batch = target_labels[batch_size*i : batch_size*(i+1)]
            batch_error = self.train_batch(train_data_batch, label_batch, l2_penalty)
            if verbose:
                print "Batch : " + str(i+1) + " ; Error : " + str(batch_error)
        print "..........Finished!"

    def train_batch(self, training_data, target_labels, l2_penalty=0):
        train = np.array(training_data, ndmin=2).T
        label = np.array(target_labels, ndmin=2).T

        inputs = train # IxN
        hidden_input = np.dot(self.wih, inputs) # (HxI).(IxN) = HxN
        hidden_ouputs = self.activation_function(hidden_input) # (HxN) -> (HxN)

        final_input = np.dot(self.who, hidden_ouputs) # (OxH).(HxN) -> OxN
        final_outputs = self.activation_function(final_input) # OxN -> OxN

        final_outputs = np.exp(final_outputs) # OxN
        for f in range(len(final_outputs)):
            final_outputs[f] = final_outputs[f] / sum(final_outputs[f])

        final_error_wrt_out = label - final_outputs # OxN
        hidden_error_wrt_out = np.dot(self.who.T, final_outputs) # HxN

        final_in_wrt_out = self.activation_function(final_input, der=True) # OxN
        hidden_in_wrt_out = self.activation_function(hidden_input, der=True) # HxN

        grad_who = np.dot(final_error_wrt_out * final_in_wrt_out, hidden_ouputs.T) # (OxN).(NxH) -> OxH
        grad_wih = np.dot(hidden_error_wrt_out * hidden_in_wrt_out, inputs.T) # (HxN).(NxI) -> HxI

        self.who = self.who - self.lr * (grad_who + l2_penalty*(self.who))
        self.wih = self.wih - self.lr * (grad_wih + l2_penalty*(self.wih))

        return np.sum(final_error_wrt_out * final_error_wrt_out) / (2*len(training_data))

    def query(self, inputs):
        if len(inputs) != self.inodes:
            print "Invalid input size"
            return
        inputs = np.array(inputs)
        hidden_input = np.dot(self.wih, inputs)
        hidden_ouputs = self.activation_function(hidden_input)

        final_input = np.dot(self.who, hidden_ouputs)
        final_outputs = self.activation_function(final_input)

        final_outputs = np.exp(final_outputs)
        total = sum(final_outputs)
        probs = final_outputs / total

        return probs
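For context, train_batch and query call self.activation_function(x) and self.activation_function(x, der=True), so the class expects an activation function that returns its derivative when passed der=True (and the class as posted is Python 2 code). A minimal usage sketch under those assumptions; the sigmoid helper, hyperparameters, and dummy data below are illustrative and not part of the original post:

import numpy as np

def sigmoid(x, der=False):
    # logistic activation; der=True returns the derivative, matching how
    # activation_function is called inside NeuralNetwork
    s = 1.0 / (1.0 + np.exp(-x))
    if der:
        return s * (1.0 - s)
    return s

# hypothetical MNIST-like dimensions: 784 inputs, 100 hidden nodes, 10 outputs
nn = NeuralNetwork(784, 100, 10, sigmoid, 0.1)

# dummy data standing in for normalised MNIST images and one-hot labels
images = np.random.rand(1000, 784)
labels = np.zeros((1000, 10))
labels[np.arange(1000), np.random.randint(0, 10, 1000)] = 1.0

nn.train(images, labels, batch=10, verbose=True)   # batch = number of batches here
print(nn.query(images[0]))                          # 10 class probabilities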

I found a similar piece of code written by Tariq Rashid on GitHub which achieves roughly 95% accuracy. My code, on the other hand, only reaches about 10%.

I have tried to debug the code several times, consulting various tutorials on backpropagation, but have not been able to improve the accuracy. Any insight into this problem would be greatly appreciated.

Edit 1: This is in response to mattdeak's answer.

My mistake was that I had been using MSE instead of the Negative Log Likelihood error on the softmax layer. Following the answer, I changed the train_batch function as follows:

def train_batch(self, training_data, target_labels, l2_penalty=0):
    train = np.array(training_data, ndmin=2).T
    label = np.array(target_labels, ndmin=2).T

    inputs = train # IxN
    hidden_input = np.dot(self.wih, inputs) # (HxI).(IxN) = HxN
    hidden_ouputs = self.activation_function(hidden_input) # (HxN) -> (HxN)

    final_input = np.dot(self.who, hidden_ouputs) # (OxH).(HxN) -> OxN
    final_outputs = self.activation_function(final_input) # OxN -> OxN

    final_outputs = np.exp(final_outputs) # OxN
    for f in range(len(final_outputs)):
        final_outputs[f] = final_outputs[f] / sum(final_outputs[f])

    error = label - final_outputs

    final_error_wrt_out = final_outputs - 1 # OxN
    hidden_error_wrt_out = np.dot(self.who.T, -np.log(final_outputs)) # (HxO).(OxN) -> HxN

    final_in_wrt_out = self.activation_function(final_input, der=True) # OxN
    hidden_in_wrt_out = self.activation_function(hidden_input, der=True) # HxN

    grad_who = np.dot(final_error_wrt_out * final_in_wrt_out, hidden_ouputs.T) # (OxN).(NxH) -> OxH
    grad_wih = np.dot(hidden_error_wrt_out * hidden_in_wrt_out, inputs.T) # (HxN).(NxI) -> HxI

    self.who = self.who - self.lr * (grad_who + l2_penalty*(self.who))
    self.wih = self.wih - self.lr * (grad_wih + l2_penalty*(self.wih))

    return np.sum(final_error_wrt_out * final_error_wrt_out) / (2*len(training_data))

However, this did not yield any performance improvement.
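For reference, when a softmax output is paired with the negative log likelihood (cross-entropy) loss, the gradient with respect to the pre-softmax input simplifies to the predicted probabilities minus the one-hot labels. A minimal, self-contained sketch of that identity, using the OxN (classes x samples) layout from the shape comments above; the toy data and names are illustrative, not taken from the post:

import numpy as np

# toy pre-softmax scores: 3 classes (rows), 4 samples (columns)
final_input = np.random.randn(3, 4)
label = np.zeros((3, 4))
label[np.random.randint(0, 3, 4), np.arange(4)] = 1.0  # one-hot columns

exps = np.exp(final_input)
probs = exps / exps.sum(axis=0, keepdims=True)          # softmax per column (per sample)

nll = -np.sum(label * np.log(probs)) / label.shape[1]   # average negative log likelihood

# gradient of the averaged NLL w.r.t. final_input:
grad_final_input = (probs - label) / label.shape[1]     # OxN

The softmax derivative is already folded into probs - label, which is why no separate activation-derivative factor appears in this formulation.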

【Question Discussion】:

  • @mattdeak It is indeed softmax regression; in the end I am computing np.exp(final_outputs)/np.sum(np.exp(final_outputs)). That result is stored in the 'probs' variable by the 'for' loop immediately after final_outputs = np.exp(final_outputs). I find it easier to do this over multiple lines because it helps me debug the program better.
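For reference, a vectorized equivalent of that exp-and-normalise step, written for the OxN (rows = output classes, columns = samples) layout used in the shape comments above; this is a minimal sketch with the usual numerical-stability shift, not code taken from the post:

import numpy as np

def softmax_over_classes(final_input):
    # final_input is OxN: one row per output class, one column per sample
    shifted = final_input - final_input.max(axis=0, keepdims=True)  # subtract per-sample max for stability
    exps = np.exp(shifted)
    return exps / exps.sum(axis=0, keepdims=True)                   # each column now sums to 1

With this layout the normalisation runs over axis=0 (the class axis), so each sample's probabilities sum to one.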

Tags: neural-network backpropagation


【Solution 1】:

I don't think you are backpropagating through the softmax layer in your training step. If I remember correctly, I believe the gradient of the softmax can simply be computed as:

grad_softmax = final_outputs - 1
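One way to sanity-check a gradient formula like this is a small finite-difference comparison. Below is a minimal, self-contained sketch for the softmax + negative log likelihood pairing; all names are illustrative and independent of the code above:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # stability shift
    return e / e.sum()

z = np.random.randn(5)           # pre-softmax scores for one sample
y = np.zeros(5)
y[2] = 1.0                       # one-hot label

def loss(scores):
    return -np.sum(y * np.log(softmax(scores)))

analytic = softmax(z) - y        # candidate analytic gradient w.r.t. z

numeric = np.zeros_like(z)
eps = 1e-6
for i in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps
    zm[i] -= eps
    numeric[i] = (loss(zp) - loss(zm)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))   # should be close to zero

If the analytic and numeric gradients disagree by more than roughly 1e-6, the analytic formula (or the code implementing it) is usually the culprit.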

【Discussion】:
