【Posted】: 2018-04-04 16:05:52
【Problem Description】:
I'm trying to learn how to use tf.data.TFRecordDataset(), but I'm confused by it. I have a tfrecords file containing my images (24K of them) and their labels, and I have already resized all the images to 100x100x3.
First I load the tfrecords file with tf.data.TFRecordDataset and parse the data and so on, as you can see in my code. Then I wrote a simple model just to learn how to use the tfrecord file, but I get errors when I try to run it. I have searched the internet but couldn't find any answer.
Here is my code, Train.py:
import tensorflow as tf
import numpy as np
import os
import glob

NUM_EPOCHS = 10
batch_size = 128

# Parse one serialized Example into a flattened float image and an integer label.
def _parse_function(example_proto):
    features = {"train/image": tf.FixedLenFeature((), tf.string, default_value=""),
                "train/label": tf.FixedLenFeature((), tf.int64, default_value=0)}
    parsed_features = tf.parse_single_example(example_proto, features)
    image = tf.decode_raw(parsed_features['train/image'], tf.float32)
    label = tf.cast(parsed_features['train/label'], tf.int32)
    image = tf.reshape(image, [100, 100, 3])
    image = tf.reshape(image, [100*100*3])  # flatten to a 30000-d vector
    return image, label

filename = 'train_data1.tfrecords'
dataset = tf.data.TFRecordDataset(filename)
dataset = dataset.map(_parse_function)
#dataset = dataset.repeat(NUM_EPOCHS)
dataset = dataset.batch(batch_size=batch_size)

iterator = dataset.make_initializable_iterator()
image, label = iterator.get_next()

# A single linear layer: 100*100*3 = 30000 inputs -> 3 classes.
w = tf.get_variable(name='Weights', shape=[30000, 3], initializer=tf.random_normal_initializer(0, 0.01))
b = tf.get_variable(name='Biases', shape=[1, 3], initializer=tf.zeros_initializer())

logits = tf.matmul(image, w) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=label, name='Entropy'), name='loss')
optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)

preds = tf.nn.softmax(logits)
correct_preds = tf.equal(tf.argmax(preds, axis=1), tf.argmax(label, axis=1))
accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(2):
        sess.run(iterator.initializer)  # re-initialize the iterator for each epoch
        total_loss = 0
        n_batches = 0
        try:
            while True:
                _, l = sess.run([optimizer, loss])
                total_loss += l
                n_batches += 1
        except tf.errors.OutOfRangeError:
            pass
        print('Average loss epoch {0}: {1}'.format(i, total_loss/n_batches))
This is the output for image:
<tf.Tensor 'IteratorGetNext:0' shape=(?, 30000) dtype=float32>
and for label:
<tf.Tensor 'IteratorGetNext:1' shape=(?,) dtype=int32>
With this code I get the following error:

logits and labels must be same size: logits_size=[128,3] labels_size=[1,128]
When I reshape the label to [128, 1] with label = tf.reshape(label, [128, 1]) (which is where I think I'm doing something wrong), I get this error:

Dimension size must be evenly divisible by 3 but is 128 for 'gradients/Entropy/Reshape_grad/Reshape' (op: 'Reshape') with input shapes: [128,1], [2] and with input tensors computed as partial shapes: input[1] = [?,3].
I'm trying to classify my 3 classes: 0 for bike, 1 for bus and 2 for car.
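From what I understand, tf.nn.softmax_cross_entropy_with_logits_v2 expects one-hot labels of shape [batch, num_classes], while my parsed labels are integer class ids of shape [batch]. For reference, here is a minimal sketch of the two loss variants that I believe would match these shapes (my own untested illustration, assuming 3 classes):

    # Option A (sketch): keep the integer class ids and use the sparse loss,
    # which expects labels of shape [batch] and logits of shape [batch, num_classes].
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=label),
        name='loss')

    # Option B (sketch): one-hot encode the labels to shape [batch, 3] so they
    # match what softmax_cross_entropy_with_logits_v2 expects.
    label_onehot = tf.one_hot(label, depth=3)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=label_onehot),
        name='loss')

(With Option A the accuracy line would also compare the predicted class against label directly, instead of tf.argmax(label, axis=1), since label is rank 1.)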
Here is the code I use to read the images and labels into the tfrecords file.
tfrecordWriter.py
import sys
import glob
from random import shuffle

import cv2
import numpy as np
import tensorflow as tf

shuffle_data = True
cat_dog_train_path = './Train/*.jpg'
addrs = glob.glob(cat_dog_train_path)
labels = [0 if 'bike' in addr else 1 if 'bus' in addr else 2 for addr in addrs]

if shuffle_data:
    c = list(zip(addrs, labels))
    shuffle(c)
    addrs, labels = zip(*c)

train_addrs = addrs[:]
train_labels = labels[:]
train_shape = []

# Load an image, resize it to 100x100, convert BGR -> RGB, and cast to float32.
def load_image(addr):
    img = cv2.imread(addr)
    img = cv2.resize(img, (100, 100), interpolation=cv2.INTER_AREA)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = img.astype(np.float32)
    return img

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

train_filename = 'train_data1.tfrecords'

# open the TFRecords file
writer = tf.python_io.TFRecordWriter(train_filename)
for i in range(len(train_addrs)):
    print('Train data: {}/{}'.format(i+1, len(train_addrs)))
    sys.stdout.flush()
    img = load_image(train_addrs[i])
    label = train_labels[i]
    feature = {'train/label': _int64_feature(label),
               'train/image': _bytes_feature(tf.compat.as_bytes(img.tostring()))}
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    writer.write(example.SerializeToString())
writer.close()
sys.stdout.flush()
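To double-check what was written, this is a small snippet I would use to read back the first record (my own sketch, assuming the TensorFlow 1.x API):

    import numpy as np
    import tensorflow as tf

    # Read the first serialized Example back out of the file.
    record_iterator = tf.python_io.tf_record_iterator(path='train_data1.tfrecords')
    serialized = next(record_iterator)

    example = tf.train.Example()
    example.ParseFromString(serialized)

    raw = example.features.feature['train/image'].bytes_list.value[0]
    label = example.features.feature['train/label'].int64_list.value[0]
    img = np.frombuffer(raw, dtype=np.float32)
    print(img.shape, label)  # expect (30000,) and a class id in {0, 1, 2}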
Thanks
【Question Discussion】:
Tags: python python-3.x tensorflow deep-learning tensorflow-datasets