【Posted】: 2023-06-07 20:38:02
【Question】:
I am following this tutorial to build a Keras-based autoencoder, but using my own data. The dataset consists of roughly 20k training images and roughly 4k validation images. They are all very similar and all show the same object. I did not modify the Keras model layout from the tutorial, only the input size, since I am working with 300x300 images. So my model looks like this:
Model: "autoencoder"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 300, 300, 1)] 0
_________________________________________________________________
encoder (Functional) (None, 16) 5779216
_________________________________________________________________
decoder (Functional) (None, 300, 300, 1) 6176065
=================================================================
Total params: 11,955,281
Trainable params: 11,954,897
Non-trainable params: 384
_________________________________________________________________
Model: "encoder"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 300, 300, 1)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 150, 150, 32) 320
_________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 150, 150, 32) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 150, 150, 32) 128
_________________________________________________________________
conv2d_1 (Conv2D) (None, 75, 75, 64) 18496
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 75, 75, 64) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 75, 75, 64) 256
_________________________________________________________________
flatten (Flatten) (None, 360000) 0
_________________________________________________________________
dense (Dense) (None, 16) 5760016
=================================================================
Total params: 5,779,216
Trainable params: 5,779,024
Non-trainable params: 192
_________________________________________________________________
Model: "decoder"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 16)] 0
_________________________________________________________________
dense_1 (Dense) (None, 360000) 6120000
_________________________________________________________________
reshape (Reshape) (None, 75, 75, 64) 0
_________________________________________________________________
conv2d_transpose (Conv2DTran (None, 150, 150, 64) 36928
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 150, 150, 64) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 150, 150, 64) 256
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 300, 300, 32) 18464
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 300, 300, 32) 0
_________________________________________________________________
batch_normalization_3 (Batch (None, 300, 300, 32) 128
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 300, 300, 1) 289
_________________________________________________________________
activation (Activation) (None, 300, 300, 1) 0
=================================================================
Total params: 6,176,065
Trainable params: 6,175,873
Non-trainable params: 192
Then I initialize my model like this:
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay

IMGSIZE = 300
EPOCHS = 20
BS = 32
LR = 0.0001

(encoder, decoder, autoencoder) = ConvAutoencoder.build(IMGSIZE, IMGSIZE, 1)
sched = ExponentialDecay(initial_learning_rate=LR, decay_steps=EPOCHS, decay_rate=LR / EPOCHS)
autoencoder.compile(loss="mean_squared_error", optimizer=Adam(learning_rate=sched))
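One thing worth noting about the snippet above: Keras' ExponentialDecay counts decay_steps in optimizer steps (one step per batch), not in epochs, and per its documentation computes lr(step) = initial_lr * decay_rate ** (step / decay_steps). With decay_steps=20 and decay_rate=LR/EPOCHS = 5e-6, the learning rate collapses within the first few batches. A pure-Python sketch of that formula, plugging in the values from the snippet:

```python
# Pure-Python sketch of the formula documented for
# tf.keras.optimizers.schedules.ExponentialDecay:
#   lr(step) = initial_lr * decay_rate ** (step / decay_steps)
# where "step" is the optimizer step (one per batch), not the epoch.

def exponential_decay(initial_lr, decay_steps, decay_rate, step):
    return initial_lr * decay_rate ** (step / decay_steps)

LR = 0.0001
EPOCHS = 20

# The schedule from the question: decay_steps=20, decay_rate=LR/EPOCHS=5e-6
print(exponential_decay(LR, EPOCHS, LR / EPOCHS, 0))    # step 0: 1e-4
print(exponential_decay(LR, EPOCHS, LR / EPOCHS, 20))   # 20 batches in: 5e-10
print(exponential_decay(LR, EPOCHS, LR / EPOCHS, 625))  # ~1 epoch (20000 images / batch 32): effectively 0
```

With the learning rate effectively zero after the first epoch, the optimizer can no longer make progress, which is consistent with the loss plateauing from epoch 2 onward as described below.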
Then I train my model like this:
import os

from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_generator = ImageDataGenerator(rescale=1.0 / 255)
train_gen = image_generator.flow_from_directory(
    os.path.join(args.images, "training"),
    class_mode="input",
    color_mode="grayscale",
    target_size=(IMGSIZE, IMGSIZE),
    batch_size=BS,
)
val_gen = image_generator.flow_from_directory(
    os.path.join(args.images, "validation"),
    class_mode="input",
    color_mode="grayscale",
    target_size=(IMGSIZE, IMGSIZE),
    batch_size=BS,
)
hist = autoencoder.fit(train_gen, validation_data=val_gen, epochs=EPOCHS, batch_size=BS)
My batch size BS is 32, and I started with an initial Adam learning rate of 0.001 (but I have also tried values from 0.1 down to 0.0001). I also tried increasing the latent dimension to something like 1024, but that did not solve my problem either.
Now during training, the loss drops from about 0.5 to about 0.2 within the first epoch - then from the second epoch on the loss stays at the same value, e.g. 0.1989, and remains stuck there "forever", no matter how many epochs I train for and/or which initial learning rate I use.
Any ideas what the problem might be here?
【Discussion】:
-
It is hard to answer this without your specific dataset.
-
@gobrewers14 Here is one image from my training set - as said, all the others look the same with only tiny differences: imgur.com/a/ClbMJ0H
-
You could try applying BatchNorm first and then ReLU, instead of ReLU first followed by BatchNorm.
-
@gobrewers14 Thanks, but that did not solve my problem. With your version the loss decreases a bit further, but after epoch 2 it again stays at the same value for the rest of the training epochs.
-
@gobrewers14 Thank you very much, that was indeed the problem. The training loss now decreases gradually over time and looks good. So I will look into the issue with my scheduler. Anyway, please feel free to post your comment as an answer and I will gladly accept it. Thanks again!
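For reference, a gentler schedule decays the learning rate once per epoch rather than within the first few batches. The sketch below uses ExponentialDecay's documented formula with illustrative values that are not from the question: a hypothetical decay factor of 0.9 per epoch, and steps_per_epoch derived from ~20k images at batch size 32.

```python
# Sketch of a gentler schedule via ExponentialDecay's documented formula:
#   lr(step) = initial_lr * decay_rate ** (step / decay_steps)
# The decay factor 0.9 and the per-epoch decay interval are illustrative
# assumptions, not values taken from the question.

def exponential_decay(initial_lr, decay_steps, decay_rate, step):
    return initial_lr * decay_rate ** (step / decay_steps)

LR = 0.0001
STEPS_PER_EPOCH = 20000 // 32  # = 625 optimizer steps per epoch

# Multiply the learning rate by 0.9 once per epoch:
print(exponential_decay(LR, STEPS_PER_EPOCH, 0.9, 0))                     # epoch 0: 1e-4
print(exponential_decay(LR, STEPS_PER_EPOCH, 0.9, STEPS_PER_EPOCH))       # after 1 epoch: 9e-5
print(exponential_decay(LR, STEPS_PER_EPOCH, 0.9, 20 * STEPS_PER_EPOCH))  # after 20 epochs: ~1.2e-5
```

Setting decay_steps to the number of batches per epoch (and a decay_rate close to 1) keeps the learning rate useful for the whole run instead of collapsing it after a handful of optimizer steps.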
Tags: python tensorflow keras autoencoder