【Question Title】: TensorFlow MNIST for Experts low accuracy
【Posted】: 2018-04-02 18:59:13
【Question】:

I followed the TensorFlow MNIST for Experts tutorial and wrote the code below, which is a copy of the tutorial's. However, when I run my code the accuracy is only around 92%, 86%, ... It also runs very fast on my Mac, finishing in only 1 or 2 minutes, and as the step count increases the training accuracy stays low and fluctuates:

step 0, training accuracy 0.08
step 100, training accuracy 0.1
step 200, training accuracy 0.16
step 300, training accuracy 0.22
step 400, training accuracy 0.1
step 500, training accuracy 0.18
step 600, training accuracy 0.26
step 700, training accuracy 0.16
step 800, training accuracy 0.24
...
step 19600, training accuracy 0.9
step 19700, training accuracy 0.82
step 19800, training accuracy 0.98
step 19900, training accuracy 0.86
test accuracy 0.9065

But when I run the official code mnist_deep.py, it runs much more slowly, and the output is:

step 0, training accuracy 0.1
step 100, training accuracy 0.84
step 200, training accuracy 0.84
step 300, training accuracy 0.9
step 400, training accuracy 0.88
step 500, training accuracy 0.92
step 600, training accuracy 0.98
step 700, training accuracy 0.96
step 800, training accuracy 0.96
step 900, training accuracy 0.96
step 1000, training accuracy 0.96
step 1100, training accuracy 0.94
step 1200, training accuracy 0.96

It works well. I compared my code with mnist_deep.py; the only difference I could find is that they use `with`. Why does my code perform so badly? Why do they use `with`? My code is below.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.examples.tutorials.mnist import input_data

import tensorflow as tf

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')

def main(_):
    mnist = input_data.read_data_sets("/MNIST_data/", one_hot=True)

    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])

    x_image = tf.reshape(x, [-1, 28, 28, 1])

    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)

    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)

    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(20000):
            batch = mnist.train.next_batch(50)
            if i % 100 == 0:
                train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
                print('step %d, training accuracy %g' % (i, train_accuracy))
                train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

        print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

if __name__ == '__main__':
    tf.app.run(main=main)

【Comments】:

Tags: python tensorflow mnist


【Solution 1】:

You have placed the train_step.run call inside the if i % 100 == 0: block, so the model is trained only once every 100 iterations: 200 weight updates across 20000 steps instead of 20000. Dedent that line so it executes on every iteration, as mnist_deep.py does, and the accuracy will match.
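The effect of the indentation can be checked without TensorFlow. In this schematic sketch (a hypothetical counter stands in for the actual session calls), the only difference between the two layouts is whether the training line sits inside the logging `if`:

```python
def count_train_steps(total_steps, train_inside_if):
    """Count how many times the training op would actually execute."""
    trained = 0
    for i in range(total_steps):
        if i % 100 == 0:
            # logging would happen here every 100 steps
            if train_inside_if:
                trained += 1   # your layout: train only on logging iterations
        if not train_inside_if:
            trained += 1       # mnist_deep.py layout: train on every iteration
    return trained

print(count_train_steps(20000, train_inside_if=True))   # → 200
print(count_train_steps(20000, train_inside_if=False))  # → 20000
```

With only 200 gradient updates the network is barely trained, which matches the slow, fluctuating accuracy in your log; it also explains why your script finishes in 1-2 minutes while mnist_deep.py, doing 100x the work, is much slower.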

【Discussion】:
