【Question Title】: 2-Layer Neural Network Does Not Converge
【Posted】: 2018-11-10 07:05:24
【Question Description】:

Background

I am new to TensorFlow and trying to understand the basics of deep learning. I wrote a two-layer neural network from scratch that achieves 89% accuracy on the MNIST dataset, and now I am trying to implement the same network in TensorFlow and compare their performance.

Problem

I am not sure whether I am missing something basic in the code, but the implementation below does not seem to update the weights at all, and so it never outputs anything meaningful (a quick check for this is sketched after the code).

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

num_hidden = 100
# x -> (batch_size, 784)
x = tf.placeholder(tf.float32, [None, 784])

W1 = tf.Variable(tf.zeros((784, num_hidden)))
b1 = tf.Variable(tf.zeros((1, num_hidden)))
W2 = tf.Variable(tf.zeros((num_hidden, 10)))
b2 = tf.Variable(tf.zeros((1, 10)))
# z -> (batch_size, num_hidden)
z = tf.nn.relu(tf.matmul(x, W1) + b1)
# y -> (batch_size, 10)
y = tf.nn.softmax(tf.matmul(z, W2) + b2)

# y_ -> (batch_size, 10)
y_ =  tf.placeholder(tf.float32, [None, 10])
# y_ * tf.log(y) -> (batch_size, 10)
cross_entropy =  -tf.reduce_sum(y_ * tf.log(y+1e-10))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
# tf.argmax(y, axis=1) returns the maximum index in each row
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
for epoch in range(1000):
    # batch_xs -> (100, 784)
    # batch_ys -> (100, 10), one-hot encoded
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_data = {x: batch_xs, y_: batch_ys}
    sess.run(train_step, feed_dict=train_data)
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
W1_e, b1_e, W2_e, b2_e = W1.eval(), b1.eval(), W2.eval(), b2.eval()
sess.close()
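
A quick way to confirm that the weights really are frozen is to snapshot a weight matrix before and after a single training step (a minimal diagnostic sketch against the graph above; it has to run inside the session, i.e. before the sess.close() call):

import numpy as np

# snapshot W1, take one training step, and compare
w1_before = sess.run(W1)
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
w1_after = sess.run(W1)
# with the all-zero initialization above this prints 0.0:
# W1 never receives a nonzero gradient
print("max |delta W1| =", np.max(np.abs(w1_after - w1_before)))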

What I Have Tried

I have looked through a lot of the official documentation and many other implementations, but I am quite confused, because they may use different versions and the APIs vary a lot.

Can anyone help me? Thanks in advance.

【Question Comments】:

    Tags: tensorflow deep-learning


    【Solution 1】:

    There are two problems with what you have so far. First, you initialized all of the weights to zero, which prevents the network from learning. Second, the learning rate was too high. The code below gets me an accuracy of 0.9665. For why you should not initialize all of the weights to zero, you can look here

    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    
    
    num_hidden = 100
    
    # x -> (batch_size, 784)
    x = tf.placeholder(tf.float32, [None, 784])
    label_place = tf.placeholder(tf.float32, [None, 10])
    
    
    # WON'T WORK: with everything initialized to zero, the network
    # cannot learn and accuracy stays at chance (~ 0.1)
    # W1 = tf.Variable(tf.zeros((784, num_hidden)))
    # b1 = tf.Variable(tf.zeros((1, num_hidden)))
    # W2 = tf.Variable(tf.zeros((num_hidden, 10)))
    # b2 = tf.Variable(tf.zeros((1, 10)))
    
    # Will work, though you will need to train for more than 1000 steps
    W1 = tf.Variable(tf.random_normal((784, num_hidden), 0., 0.1))
    b1 = tf.Variable(tf.zeros((1, num_hidden)))
    W2 = tf.Variable(tf.random_normal((num_hidden, 10), 0, 0.1))
    b2 = tf.Variable(tf.zeros((1, 10)))
    
    # network: we only go as far as the linear output after the hidden layer,
    # so we can feed it into tf.nn.softmax_cross_entropy_with_logits below,
    # which is more numerically stable
    z = tf.nn.relu(tf.matmul(x, W1) + b1)
    logits = tf.matmul(z, W2) + b2
    
    # define the loss as before; note that the learning rate is lower,
    # since with a higher learning rate it wasn't really converging
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=label_place, logits=logits)
    train_step = tf.train.GradientDescentOptimizer(.001).minimize(cross_entropy)
    
    # continue as before
    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()
    correct_prediction = tf.equal(tf.argmax(tf.nn.softmax(logits), 1), tf.argmax(label_place, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    for epoch in range(5000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        train_data = {x: batch_xs, label_place: batch_ys}
        sess.run(train_step, feed_dict=train_data)
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, label_place: mnist.test.labels}))
    W1_e, b1_e, W2_e, b2_e = W1.eval(), b1.eval(), W2.eval(), b2.eval()
    sess.close()
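
    To see concretely why the all-zero initialization fails: with W1 = 0 and b1 = 0, the hidden activations z = relu(0) are all zero, so the gradient of the loss with respect to W2 (z transposed times the output error) is zero, and nothing propagates back through W2 = 0 to reach W1 either; only b2 ever gets updated, which is why accuracy stays at chance. A minimal sketch of this check, assuming you built the graph with the commented-out zero initializers instead of the random-normal ones, run before sess.close():

    # inspect the raw gradients at the all-zero initialization
    grads = tf.gradients(cross_entropy, [W1, b1, W2, b2])
    batch_xs, batch_ys = mnist.train.next_batch(100)
    g_w1, g_b1, g_w2, g_b2 = sess.run(grads, feed_dict={x: batch_xs, label_place: batch_ys})
    # z = relu(0) = 0, so dL/dW2 = z^T @ delta = 0, and since W2 = 0
    # nothing flows back to W1: dL/dW1 = 0 as well
    print([float(abs(g).max()) for g in (g_w1, g_b1, g_w2, g_b2)])
    # expected: only the b2 gradient is nonzero, so only b2 trains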
    

    【Discussion】:
