Posted: 2021-08-11 03:17:12
Question:
I'm having a lot of trouble fitting a CNN (U-Net) to tif training images in Python.
My data has the following structure:
- X
  - 0
    - [images] (tif, 3-band, 128x128, values ∈ [0, 255])
- X_val
  - 0
    - [images] (tif, 3-band, 128x128, values ∈ [0, 255])
- y
  - 0
    - [images] (tif, 1-band, 128x128, values ∈ [0, 255])
- y_val
  - 0
    - [images] (tif, 1-band, 128x128, values ∈ [0, 255])
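Given this layout, one batch from each generator should come out with the following array shapes (a quick numpy sketch; the batch size of 10 and the 128x128 image size are taken from the post):

```python
import numpy as np

bs = 10  # batch size used in the post

# Shapes a segmentation generator pair is expected to yield per batch:
X_batch = np.zeros((bs, 128, 128, 3), dtype=np.float32)  # 3-band training images
y_batch = np.zeros((bs, 128, 128, 1), dtype=np.float32)  # 1-band masks

print(X_batch.shape)  # (10, 128, 128, 3)
print(y_batch.shape)  # (10, 128, 128, 1)
```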
From this data, I defined the ImageDataGenerators:
import tensorflow as tf
from tensorflow import keras as ks
from matplotlib import pyplot as plt
import numpy as np
bs = 10 # batch size
args_col = {"data_format" : "channels_last",
"brightness_range" : [0.5, 1.5]
}
args_aug = {"rotation_range" : 365,
"width_shift_range" : 0.05,
"height_shift_range" : 0.05,
"horizontal_flip" : True,
"vertical_flip" : True,
"fill_mode" : "constant",
"featurewise_std_normalization" : False,
"featurewise_center" : False
}
args_flow = {"color_mode" : "rgb",
"class_mode" : "sparse",
"batch_size" : bs,
"target_size" : (128, 128),
"seed" : 42
}
# train generator
X_generator = ks.preprocessing.image.ImageDataGenerator(rescale = 1.0/255.0,
**args_aug,
**args_col)
X_gen = X_generator.flow_from_directory(directory = "my/directory/X",
**args_flow)
y_generator = ks.preprocessing.image.ImageDataGenerator(**args_aug,
cval = NoDataValue)  # NoDataValue: fill value for the masks, defined elsewhere
y_gen = y_generator.flow_from_directory(directory = "my/directory/y",
**{**args_flow, "color_mode" : "grayscale"})  # override the "rgb" default in args_flow
train_generator = zip(X_gen, y_gen)
# val generator
X_val_generator = ks.preprocessing.image.ImageDataGenerator(rescale = 1.0/255.0)
X_val_gen = X_val_generator.flow_from_directory(directory = "my/directory/X_val",
**args_flow)
y_val_generator = ks.preprocessing.image.ImageDataGenerator()
y_val_gen = y_val_generator.flow_from_directory(directory = "my/directory/y_val",
**{**args_flow, "color_mode" : "grayscale"})
val_generator = zip(X_val_gen, y_val_gen)
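Note that with class_mode = "sparse", each flow_from_directory iterator yields (images, class_labels) tuples, so zipping two of them produces a pair of tuples rather than a plain (image, mask) pair. A stand-in sketch (numpy only, with the directory iterators replaced by hypothetical dummy generators) of what the zip actually yields:

```python
import numpy as np

bs = 10

# Stand-ins for X_gen and y_gen: with class_mode="sparse", each iterator
# yields (batch_of_images, batch_of_class_labels), not just the images.
def fake_X_gen():
    while True:
        yield np.zeros((bs, 128, 128, 3)), np.zeros((bs,))  # images + class labels

def fake_y_gen():
    while True:
        yield np.zeros((bs, 128, 128, 1)), np.zeros((bs,))  # masks + class labels

train_generator = zip(fake_X_gen(), fake_y_gen())
first = next(train_generator)

# first is ((X_images, X_labels), (y_images, y_labels)) -- two tuples,
# which model.fit interprets as two model inputs instead of (input, target).
print(len(first))         # 2
print(first[0][0].shape)  # (10, 128, 128, 3)
print(first[1][0].shape)  # (10, 128, 128, 1)
```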
With these generators I can create pairs of training images and their corresponding masks, and visualize them like this:
X, y = next(train_generator)
X_test = X[0][0]
y_test = y[0][0]
plt.subplot(1, 2, 1)
plt.imshow(np.array(X_test))
plt.subplot(1, 2, 2)
plt.imshow(np.array(y_test))
Resulting in: [image omitted]
However, I cannot train the U-Net as expected:
When I define a U-Net as model based on an example from the internet (or basically any other U-Net example I found) and then run:
model.compile(optimizer = "adam", loss = "sparse_categorical_crossentropy", metrics = ["accuracy"])
model.fit(train_generator, epochs = 5, steps_per_epoch = 10, validation_data = val_generator)
it fails with the error:
ValueError: Layer model expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=float32>]
I tried other loss functions and other class_mode arguments, but it always fails with some error related to the dimensions of the input data or of the data passed between layers. Another example (when setting class_mode = None):
InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [16384,1] and labels shape [49152]
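The numbers in that error line up with the image size: 128 × 128 = 16384 logit rows versus 128 × 128 × 3 = 49152 label values, which suggests the masks were loaded with 3 channels while the model predicts 1. A quick check of that arithmetic (numpy only, using hypothetical placeholder arrays):

```python
import numpy as np

logits = np.zeros((1, 128, 128, 1))  # one-channel prediction map
labels = np.zeros((1, 128, 128, 3))  # mask accidentally loaded as RGB

# Flatten the way the loss does: logits to [pixels, classes], labels to [values]
print(logits.reshape(-1, logits.shape[-1]).shape[0])  # 16384 = 128*128
print(labels.reshape(-1).shape[0])                    # 49152 = 128*128*3
```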
I'm new to CNNs and Python, so I don't know what else to try or how to fix these errors. I'm fairly sure I'm using the right loss function, which seems to be the usual culprit with errors like this (I have multiple classes, hence "sparse_categorical_crossentropy").
Any ideas on how to solve this problem and make the data conform to the expected CNN input (or the other way around, depending on where the problem lies)?
Notes:
My ImageDataGenerator outputs pairs of images (X and y) in the format shown above (I noticed I had to set color_mode to "grayscale" for the masks (y)).
I used keras.layers.Input(shape = (128, 128, 3)) in the example U-Net, since the keras documentation states shape = "A shape tuple (integers), not including the batch size".
Comments:
Tags: python tensorflow keras conv-neural-network semantic-segmentation