【Question Title】: How to prepare data for training and data for prediction?
【Posted】: 2017-03-26 03:32:56
【Question】:

I am new to TensorFlow and machine learning (and to Python). On the first step of building an image-recognition program, I got stuck preparing the data. Can someone help me? I am working through this tutorial, but the data preparation part is vague: MNIST softmax for beginners.

I don't expect a complete program as an answer to this question; rather, I'd love to hear how TensorFlow works with feed_dict. Right now my mental model is: "it works like a [for] loop over imageHolder, taking 2352 bytes (one image) of data at a time and feeding it into the training op, which makes a prediction based on the current model, compares it with the data at the same index in labelHolder, and then corrects the model." So I expected to feed in a single 2352-byte array (another image of the same size) and get a prediction back. I am also putting my code here, in case my mental model is right and the error comes from a faulty implementation.
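For reference, here is how feeding a placeholder looks in isolation (a minimal sketch with made-up shapes, not the question's model; it uses the TF 1.x API this question targets, reachable as `tf.compat.v1` in TF 2). The key point is that one `sess.run` call consumes the whole array at once rather than looping per image:

```python
# Minimal sketch of feed_dict semantics (illustrative shapes, not the
# question's model).  TF 1.x API; in TF 2 it lives under tf.compat.v1.
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=(None, 3))  # None = any batch size
row_sums = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    # One sess.run call substitutes the WHOLE array for the placeholder;
    # TensorFlow does not iterate over rows one at a time -- the ops in
    # the graph process every row of the batch in parallel.
    batch = np.array([[1., 2., 3.], [4., 5., 6.]], dtype=np.float32)
    print(sess.run(row_sums, feed_dict={x: batch}))  # [ 6. 15.]
```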


Say I have source data in 5 classes, 3670 images in total. When loading the data into feed_dict for training, I resize every image to 28x28 pixels with 3 channels. That gives me a (3670, 2352) tensor for the image placeholder in feed_dict. After that, I prepare a (3670,) tensor for the label placeholder in feed_dict. The training code looks like this:

for step in xrange(FLAGS.max_steps):
        feed_dict = {
            imageHolder: imageTrain,
            labelHolder: labelTrain,
        }
        _, loss_rate = sess.run([train_op, loss_op], feed_dict=feed_dict)

Then I have this code to predict a new image with the model above:

testing_dataset = do_get_file_list(FLAGS.guess_dir)
x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
for data in testing_dataset:
    image = Image.open(data)
    image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
    image = np.array(image).reshape(IMAGE_PIXELS)
    prediction = session.run(tf.argmax(logits, 1), feed_dict={x: image})

But the problem is that the prediction line always raises a "Cannot feed value of shape..." error, no matter what shape my test data has: (2352,), (1, 2352) (it asks for a (3670, 2352) shape, but there is no way to provide that).


Here are some of the flags I used:

IMAGE_SIZE = 28
CHANNELS = 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS

Training op and loss computation:

def do_get_op_compute_loss(logits, labels):
    labels = tf.to_int64(labels)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='xentropy')
    loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
    return loss

def do_get_op_training(loss_op, training_rate):
    optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = optimizer.minimize(loss_op, global_step=global_step)
    return train_op

Placeholders:

imageHolder = tf.placeholder(tf.float32, [data_count, IMAGE_PIXELS])
labelHolder = tf.placeholder(tf.int32, [data_count])

Full program:

import os
import math
import tensorflow as tf
from PIL import Image
import numpy as np
from six.moves import xrange

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
flags.DEFINE_integer('max_steps', 200, 'Number of steps to run trainer.')
flags.DEFINE_integer('hidden1', 128, 'Number of units in hidden layer 1.')
flags.DEFINE_integer('hidden2', 32, 'Number of units in hidden layer 2.')
flags.DEFINE_integer('batch_size', 4, 'Batch size.  '
                     'Must divide evenly into the dataset sizes.')
flags.DEFINE_string('train_dir', 'data', 'Directory to put the training data.')
flags.DEFINE_string('save_file', '.\\data\\model.ckpt', 'Directory to put the training data.')
flags.DEFINE_string('guess_dir', 'work', 'Directory to put the testing data.')
#flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data '
#                    'for unit testing.')

IMAGE_SIZE = 28
CHANNELS = 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS

def do_inference(images, hidden1_units, hidden2_units, class_count):
    #HIDDEN LAYER 1
    with tf.name_scope('hidden1'):
        weights = tf.Variable(
            tf.truncated_normal([IMAGE_PIXELS, hidden1_units], stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
            name='weights')
        biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')
        hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
    #HIDDEN LAYER 2
    with tf.name_scope('hidden2'):
        weights = tf.Variable(
            tf.truncated_normal([hidden1_units, hidden2_units], stddev=1.0 / math.sqrt(float(hidden1_units))),
            name='weights')
        biases = tf.Variable(tf.zeros([hidden2_units]), name='biases')
        hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
    #LINEAR
    with tf.name_scope('softmax_linear'):
        weights = tf.Variable(
            tf.truncated_normal([hidden2_units, class_count], stddev=1.0 / math.sqrt(float(hidden2_units))),
            name='weights')
        biases = tf.Variable(tf.zeros([class_count]), name='biases')
        logits = tf.matmul(hidden2, weights) + biases
    return logits

def do_get_op_compute_loss(logits, labels):
    labels = tf.to_int64(labels)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='xentropy')
    loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
    return loss

def do_get_op_training(loss_op, training_rate):
    optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = optimizer.minimize(loss_op, global_step=global_step)
    return train_op

def do_get_op_evaluate(logits, labels):
    correct = tf.nn.in_top_k(logits, labels, 1)
    return tf.reduce_sum(tf.cast(correct, tf.int32))

def do_evaluate(session, eval_correct_op, imageset_holder, labelset_holder, train_images, train_labels):
    true_count = 0
    num_examples = FLAGS.batch_size * FLAGS.batch_size
    for step in xrange(FLAGS.batch_size):
        feed_dict = {imageset_holder: train_images, labelset_holder: train_labels,}
        true_count += session.run(eval_correct_op, feed_dict=feed_dict)
        precision = true_count / num_examples
    # print('  Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
        # (num_examples, true_count, precision))

def do_init_param(data_count, class_count): 
    # Generate placeholder
    imageHolder = tf.placeholder(tf.float32, shape=(data_count, IMAGE_PIXELS))
    labelHolder = tf.placeholder(tf.int32, shape=(data_count))

    # Build a graph for prediction from inference model
    logits = do_inference(imageHolder, FLAGS.hidden1, FLAGS.hidden2, class_count)

    # Add loss calculating op
    loss_op = do_get_op_compute_loss(logits, labelHolder)

    # Add training op
    train_op = do_get_op_training(loss_op, FLAGS.learning_rate)

    # Add evaluate correction op
    evaluate_op = do_get_op_evaluate(logits, labelHolder)

    # Create session for op operating
    sess = tf.Session()

    # Init param
    init = tf.initialize_all_variables()
    sess.run(init)
    return sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits

def do_get_class_list():
    return [{'name': name, 'path': os.path.join(FLAGS.train_dir, name)} for name in os.listdir(FLAGS.train_dir)
            if os.path.isdir(os.path.join(FLAGS.train_dir, name))]

def do_get_file_list(folderName):
    return [os.path.join(folderName, name) for name in os.listdir(folderName)
            if (os.path.isdir(os.path.join(folderName, name)) == False)]

def do_init_data_list():
    file_list = []
    for classItem in do_get_class_list():
        for dataItem in do_get_file_list(classItem['path']):
            file_list.append({'name': classItem['name'], 'path': dataItem})

    # Renew data feeding dictionary
    imageTrainList, labelTrainList = do_seperate_data(file_list)
    imageTrain = []
    for imagePath in imageTrainList:
        image = Image.open(imagePath)
        image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
        imageTrain.append(np.array(image))

    imageCount = len(imageTrain)
    imageTrain = np.array(imageTrain)
    imageTrain = imageTrain.reshape(imageCount, IMAGE_PIXELS)

    id_list, id_map = do_generate_id_label(labelTrainList)
    labelTrain = np.array(id_list)
    return imageTrain, labelTrain, id_map

def do_init():
    imageTrain, labelTrain, id_map = do_init_data_list()
    sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits = do_init_param(len(imageTrain), len(id_map))
    return sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain, id_map, logits

def do_seperate_data(data):
    images = [item['path'] for item in data]
    labels = [item['name'] for item in data]
    return images, labels

def do_generate_id_label(label_list):
    trimmed_label_list = list(set(label_list))
    id_map = {trimmed_label_list.index(label): label for label in trimmed_label_list}
    reversed_id_map = {label: trimmed_label_list.index(label) for label in trimmed_label_list}
    id_list = [reversed_id_map.get(item) for item in label_list]
    return id_list, id_map

def do_training(sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain):
    # Training state checkpoint saver
    saver = tf.train.Saver()
    # feed_dict = {
        # imageHolder: imageTrain,
        # labelHolder: labelTrain,
    # }

    for step in xrange(FLAGS.max_steps):
        feed_dict = {
            imageHolder: imageTrain,
            labelHolder: labelTrain,
        }
        _, loss_rate = sess.run([train_op, loss_op], feed_dict=feed_dict)

        if step % 100 == 0:
            print('Step {0}: loss = {1}'.format(step, loss_rate))
        if (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
            saver.save(sess, FLAGS.save_file, global_step=step)
            print('Evaluate training data')
            do_evaluate(sess, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain)

def do_predict(session, logits):
    # xentropy
    testing_dataset = do_get_file_list(FLAGS.guess_dir)
    x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
    print('Perform predict')
    print('==================================================================================')
    # TEMPORARY CODE
    for data in testing_dataset:
        image = Image.open(data)
        image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
        image = np.array(image).reshape(IMAGE_PIXELS)
        print(image.shape)
        prediction = session.run(logits, {x: image})
        print('{0}: {1}'.format(data, prediction))

def main(_):
    # TF notice default graph
    with tf.Graph().as_default():
        sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain, id_map, logits = do_init()
        print("done init")
        do_training(sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain)
        print("done training")
        do_predict(sess, logits)

# NO IDEA
if __name__ == '__main__':
    tf.app.run()

【Question Discussion】:

    Tags: python tensorflow


    【Solution 1】:

    It is important to understand the error. You said:

    But the problem is that the prediction line always raises a "Cannot feed value of shape..." error, no matter what shape my test data has: (2352,), (1, 2352) (it asks for a (3670, 2352) shape, but there is no way to provide that).

    Oh yes, my friend, there is a way. The error says something is wrong with your shapes, and you need to check them. Why does it ask for 3670?

    Because your model accepts input of shape (data_count, IMAGE_PIXELS), which you declared below:

    def do_init_param(data_count, class_count): 
        # Generate placeholder
        imageHolder = tf.placeholder(tf.float32, shape=(data_count, IMAGE_PIXELS))
        labelHolder = tf.placeholder(tf.int32, shape=(data_count))
    

    And that function is called here:

    sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits = do_init_param(len(imageTrain), len(id_map))
    

    len(imageTrain) is the length of your dataset, probably 3670 images.

    Then you have your prediction function:

    def do_predict(session, logits):
        # xentropy
        testing_dataset = do_get_file_list(FLAGS.guess_dir)
        x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
        ...
        prediction = session.run(logits, {x: image})
    

    Note that x is useless here. You are feeding an image to your model for prediction, but the model does not expect that shape; it expects the original placeholder shape, (3670, 2352), because that is what you told it to expect.

    The solution is to declare x as a placeholder with an unspecified first dimension, for example:

    imageHolder = tf.placeholder(tf.float32, shape=(None, IMAGE_PIXELS))
    

    When you predict the label of an image, you can pass a single image or several images (a mini-batch), but the shape must always be [number_images, IMAGE_PIXELS].

    Does that make sense?
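    Concretely, once the placeholder is declared with shape (None, IMAGE_PIXELS), a single image just needs an explicit batch axis of size 1 before being fed. A NumPy-only sketch using the question's constants (the image here is a zero-filled stand-in for one decoded, resized picture):

```python
import numpy as np

IMAGE_SIZE, CHANNELS = 28, 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS  # 2352, as in the question

# Stand-in for one decoded, resized image of shape (28, 28, 3).
image = np.zeros((IMAGE_SIZE, IMAGE_SIZE, CHANNELS), dtype=np.float32)

# Flatten and add the leading batch dimension: (28, 28, 3) -> (1, 2352).
batch_of_one = image.reshape(1, IMAGE_PIXELS)
print(batch_of_one.shape)  # (1, 2352)

# A mini-batch of k images works the same way and has shape (k, 2352).
```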

    【Comments】:

    • Hi vega, thanks a lot for the thorough explanation. I'm trying to understand, but it doesn't make sense to me: do we have to predict with the same number of images we used to train the model, or is that the reason not to over-feed data during the training phase? I tried an "unspecified" first dimension before, but it seemed to matter only in the code and the function parameters. (In my imagination, [number_images, IMAGE_PIXELS] holds, in this case, 3670x2352 bytes, but I only need to predict 1 image, 2352 bytes; the rest could be set to 0, but why? Does TensorFlow ignore the extra?)
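    On the shape question in the comment above: no padding or ignoring is involved. With a None first dimension, the batch size is resolved independently on every sess.run call, so the same graph accepts 3670 images during training and 1 image at prediction time. A sketch, assuming the TF 1.x API (via tf.compat.v1 in TF 2):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

IMAGE_PIXELS = 28 * 28 * 3  # 2352, as in the question

x = tf.placeholder(tf.float32, shape=(None, IMAGE_PIXELS))
batch_size = tf.shape(x)[0]  # resolved per run, not fixed in the graph

with tf.Session() as sess:
    # The same placeholder accepts batches of different sizes on
    # successive runs -- no zero-padding to 3670 is ever needed.
    print(sess.run(batch_size, {x: np.zeros((3670, IMAGE_PIXELS), np.float32)}))  # 3670
    print(sess.run(batch_size, {x: np.zeros((1, IMAGE_PIXELS), np.float32)}))     # 1
```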