[Posted at]: 2016-12-04 13:08:24
[Problem description]:
I am looking for a solution or an example for the following task:
I have a set of images of the same objects taken from different angles. I want to build a deep CNN with Keras that takes pairs of two images, performs data augmentation on each image separately, and feeds both into a merged model.
In more detail:
The images are stored in an HDF5 file with the following shapes:
data['Xp'] # shape=(3000, 224, 224, 3) #RGB images
data['Xs'] # shape=(3000, 224, 224, 3) #RGB images
data['Y'] # shape=(3000, 9) #categorical data.
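For reference, a minimal sketch of this layout with h5py (the file name `data.h5` and the tiny sample count are stand-ins; the question does not give the real path or use 4 samples):

```python
import h5py
import numpy as np

# Build a tiny file with the same dataset layout as described above.
with h5py.File('data.h5', 'w') as f:
    f.create_dataset('Xp', data=np.zeros((4, 224, 224, 3), dtype=np.uint8))
    f.create_dataset('Xs', data=np.zeros((4, 224, 224, 3), dtype=np.uint8))
    # One-hot rows over 9 categories.
    f.create_dataset('Y', data=np.eye(9, dtype=np.float32)[[0, 3, 5, 8]])

data = h5py.File('data.h5', 'r')
print(data['Xp'].shape, data['Xs'].shape, data['Y'].shape)
```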
Now I want a generator that:
- shuffles the indices of the dataset,
- takes the pair of images and the category label from the data,
- augments the images from X1_train and X2_train separately, and
- feeds them into a network with the following structure:
from keras.layers import Flatten, Dense, Input, Dropout, Convolution2D, MaxPooling2D, Merge
from keras.models import Model, Sequential
img_input = Input(shape=input_shape)
x = Convolution2D(64, 3, 3, activation='relu', border_mode='same', name='block1_conv1')(img_input)
x = Convolution2D(64, 3, 3, activation='relu', border_mode='same', name='block1_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# ... more network definition here ....
model1 = Model(img_input, x)
model2 = Model(img_input, x)
merged = Merge([model1, model2], mode='concat')
final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(9, activation='softmax'))
I created the following generator, which yields the expected data to feed to the model with fit_generator:
def aug_train_iterator(Xp, Xs, Y, database_file=database_file, is_binary=True):
    from itertools import izip
    from keras.preprocessing.image import ImageDataGenerator
    seed = 7  # make sure that two iterators give same tomato each time...
    ig = ImageDataGenerator(dim_ordering='tf', rotation_range=90,
                            width_shift_range=0.05,
                            height_shift_range=0.05,
                            zoom_range=0.05,
                            fill_mode='constant',
                            cval=0.0,
                            horizontal_flip=True,
                            rescale=1./255)
    for batch in izip(ig.flow(Xp, Y, seed=seed), ig.flow(Xs, seed=seed)):
        for i in range(len(batch[0][0])):
            x1 = batch[0][0][i].reshape(1, 224, 224, 3)
            x2 = batch[1][i].reshape(1, 224, 224, 3)
            y = batch[0][1][i].reshape(1, 2)
            yield ([x1, x2], y)
Now, when I try to fit the model...
gen = aug_train_iterator(Xp, Xs, Y)
final_model.fit_generator(gen, 1000, 20)
it actually runs for a few images... and then raises an error after about 15 samples:
Epoch 1/20
15/1000 [..............................] - ETA: 606s - loss: 0.7001 - acc: 0.4000
Exception in thread Thread-44:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 404, in data_generator_task
generator_output = next(generator)
File "<ipython-input-134-f128a127c7ce>", line 35, in aug_train_iterator
for batch in izip(ig.flow(Xp,Y, seed=seed), ig.flow(Xs, seed=seed)):
File "/usr/local/lib/python2.7/dist-packages/keras/preprocessing/image.py", line 495, in next
x = self.X[j]
File "/usr/lib/python2.7/dist-packages/h5py/_hl/dataset.py", line 367, in __getitem__
if self._local.astype is not None:
AttributeError: 'thread._local' object has no attribute 'astype'
What is the problem?
[Comments]:
- OK, I missed a possible solution in the Keras docs... if I fit two different generators using the same seed and keyword arguments, it should work. I will test it.
- Looks like an h5py problem; try updating it with pip install --upgrade h5py.
- OK, using pickle_safe=True seems to solve the problem. Not sure why.
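The traceback points at h5py state being touched from a Keras worker thread. A minimal sketch of the in-memory workaround (file name and shapes are stand-ins, not from the question); with Keras 1.x, passing pickle_safe=True to fit_generator is the other option mentioned above, running the generator in a process instead of a thread:

```python
import h5py
import numpy as np

# Stand-in file; the question's real HDF5 path is not given.
with h5py.File('demo.h5', 'w') as f:
    f.create_dataset('Xp', data=np.zeros((2, 8, 8, 3), dtype=np.float32))

data = h5py.File('demo.h5', 'r')

# data['Xp'] is a lazy h5py Dataset: every index goes through h5py, which is
# what the worker thread trips over in the traceback above.
lazy = data['Xp']

# Copying with [:] yields a plain in-memory numpy array; worker threads then
# index ordinary arrays and never touch h5py.
eager = data['Xp'][:]

print(type(lazy).__name__, type(eager).__name__)  # Dataset ndarray
```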
Tags: python tensorflow deep-learning keras