【Posted】: 2020-04-13 01:50:55
【Problem description】:
I am trying to implement the equivalent of batch_x, batch_y = mnist.train.next_batch(128) for my own image dataset.
For training I have 1000 folders (each folder is named after the class it belongs to: "Class1", "Class2", "Class3", and so on). Each of these folders contains 500 images of a single class.
So in total I have 500,000 training images.
What is the best way to split them into batches (with the images chosen at random) and feed them to my inference model?
I am not using Keras; I am using TensorFlow 1.15.
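The next_batch behaviour I am after can be sketched in plain Python, independent of TensorFlow (a minimal sketch; the helper name, the toy folder layout, and the reshuffle-on-exhaustion policy are all illustrative assumptions, not my real pipeline):

```python
import random

def make_next_batch(paths_and_labels, batch_size, seed=None):
    """Return a next_batch() function that yields random (paths, labels)
    batches, reshuffling the whole pool whenever it runs low, similar in
    spirit to mnist.train.next_batch()."""
    rng = random.Random(seed)
    pool = []

    def next_batch():
        nonlocal pool
        if len(pool) < batch_size:
            # Refill and reshuffle once the current epoch is exhausted.
            pool = list(paths_and_labels)
            rng.shuffle(pool)
        batch, pool = pool[:batch_size], pool[batch_size:]
        paths = [p for p, _ in batch]
        labels = [lab for _, lab in batch]
        return paths, labels

    return next_batch

# Toy example: 3 classes with 4 images each.
data = [(f"Class{c}/img{i}.jpg", c) for c in range(3) for i in range(4)]
next_batch = make_next_batch(data, batch_size=5, seed=0)
batch_x, batch_y = next_batch()  # 5 random paths and their class labels
```

In my real code I want tf.data to do this (shuffle, batch, repeat) so the loading and decoding run in the input pipeline instead of in Python.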
Here is my code:
import tensorflow as tf
tf.enable_eager_execution()
import numpy as np
import os
import pathlib

tf.__version__

AUTOTUNE = tf.data.experimental.AUTOTUNE
SHUFFLE_BUFFER_SIZE = 100
BATCH_SIZE = 128
IMG_WIDTH = 128
IMG_HEIGHT = 256
DATA_DIR = 'D:/PythonWorkspace/train'
DATA_DIR = pathlib.Path(DATA_DIR)  #RD

def get_label(file_path):
    # Convert the path to a list of path components.
    parts = tf.strings.split(file_path, os.path.sep)
    # The second-to-last component is the class directory.
    return parts[-2] == CLASS_NAMES

def decode_img(img):
    # Convert the compressed string to a 3-D uint8 tensor.
    img = tf.image.decode_jpeg(img, channels=3)
    # Use `convert_image_dtype` to convert to floats in the [0, 1] range.
    img = tf.image.convert_image_dtype(img, tf.float32)
    # Resize the image to the desired size.
    return tf.image.resize(img, [IMG_WIDTH, IMG_HEIGHT])

def process_path(file_path):
    label = get_label(file_path)
    # Load the raw data from the file as a string.
    img = tf.io.read_file(file_path)
    img = decode_img(img)
    return img, label

#dataset = tf.data.Dataset.list_files(os.path.join(DATA_DIR, '*/*'))
dataset = tf.data.Dataset.list_files(str(DATA_DIR / '*/*'))  #RD

for f in dataset.take(5):
    print(f.numpy())

dataset = dataset.map(process_path, num_parallel_calls=AUTOTUNE)
dataset = dataset.shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)
dataset = dataset.repeat()
dataset = dataset.batch(BATCH_SIZE)
Output:
'1.15.2'
b'D:\\PythonWorkspace\\train\\1403\\5T04015F015.jpg'
b'D:\\PythonWorkspace\\train\\0525\\C3T0020F097.jpg'
b'D:\\PythonWorkspace\\train\\0005\\24T0060F004.jpg'
b'D:\\PythonWorkspace\\train\\1159\\45T0008F041.jpg'
b'D:\\PythonWorkspace\\train\\0425\\C5T0021F007.jpg'
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-b683dfeb641b> in <module>
     32 for f in dataset.take(5):
     33     print(f.numpy())
---> 34 dataset = dataset.map(process_path, num_parallel_calls=AUTOTUNE)
     35 dataset = dataset.shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)
     36 dataset = dataset.repeat()
...
AttributeError: in converted code:

    <ipython-input-2-b683dfeb641b>:24 process_path  *
        label = get_label(file_path)
    <ipython-input-2-b683dfeb641b>:11 get_label  *
        parts = tf.strings.split(file_path, os.path.sep)
    d:\venv\lib\site-packages\tensorflow_core\python\ops\ragged\ragged_string_ops.py:642 strings_split_v1
        return ragged_result.to_sparse()

    AttributeError: 'Tensor' object has no attribute 'to_sparse'
【Discussion】:
Tags: python tensorflow tensorflow-datasets training-data