【Title】: Keras CNN model stops improving accuracy after about 7 epochs
【Posted】: 2019-04-27 18:37:53
【Question】:

I have a dataset of 12,311 images, which I split 80/20 into training and validation data. I apply 4 different random augmentations in a batch generator, which I have tested and which works fine. Every time I train my model, the accuracy seems to stop improving after about 7 epochs.

My model:

from keras.models import Sequential
from keras.layers import Conv2D, Convolution2D, Flatten, Dense
from keras.optimizers import Adam


def nvidiaModel():
    model = Sequential()

    model.add(Conv2D(24, (5, 5), padding="same", subsample=(2, 2), input_shape=(112, 256, 3), activation="elu"))
    model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation="elu"))  # Second CNN
    model.add(Convolution2D(48, 5, 5, subsample=(2, 2), activation="elu"))  # Third CNN
    model.add(Convolution2D(64, 3, 3, activation="elu"))  # Fourth CNN; no need for more stride skipping.
    model.add(Convolution2D(64, 3, 3, activation="elu"))  # Fifth CNN

    model.add(Flatten())

    model.add(Dense(100, activation="elu"))
    model.add(Dense(50, activation="elu"))
    model.add(Dense(10, activation="elu"))

    model.add(Dense(3, activation="softmax"))   # Which will hold the steering angle.

    optimizer = Adam(lr=1e-5)

    model.compile(loss="mse", optimizer=optimizer, metrics=["accuracy"])

    return model

Summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 56, 128, 24)       1824      
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 26, 62, 36)        21636     
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 11, 29, 48)        43248     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 9, 27, 64)         27712     
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 7, 25, 64)         36928     
_________________________________________________________________
flatten_1 (Flatten)          (None, 11200)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 100)               1120100   
_________________________________________________________________
dense_2 (Dense)              (None, 50)                5050      
_________________________________________________________________
dense_3 (Dense)              (None, 10)                510       
_________________________________________________________________
dense_4 (Dense)              (None, 3)                 33        
=================================================================
Total params: 1,257,041
Trainable params: 1,257,041
Non-trainable params: 0

Training call:

history = model.fit_generator(batchGenerator(X_train, y_train, 1000, 1),
                              steps_per_epoch = 25,
                              epochs = 30,
                              validation_data = batchGenerator(X_valid, y_valid, 300, 0),
                              validation_steps = 20,
                              verbose = 1,
                              shuffle = 1)

Training log:

Epoch 1/30
25/25 [==============================] - 52s 2s/step - loss: 0.1709 - acc: 0.6624 - val_loss: 0.1618 - val_acc: 0.6718
Epoch 2/30
25/25 [==============================] - 48s 2s/step - loss: 0.1579 - acc: 0.6764 - val_loss: 0.1524 - val_acc: 0.6767
Epoch 3/30
25/25 [==============================] - 48s 2s/step - loss: 0.1535 - acc: 0.6686 - val_loss: 0.1444 - val_acc: 0.6737
Epoch 4/30
25/25 [==============================] - 48s 2s/step - loss: 0.1460 - acc: 0.6748 - val_loss: 0.1311 - val_acc: 0.7063
Epoch 5/30
25/25 [==============================] - 48s 2s/step - loss: 0.1366 - acc: 0.7076 - val_loss: 0.1262 - val_acc: 0.7370
Epoch 6/30
25/25 [==============================] - 48s 2s/step - loss: 0.1322 - acc: 0.7249 - val_loss: 0.1238 - val_acc: 0.7485
Epoch 7/30
25/25 [==============================] - 48s 2s/step - loss: 0.1313 - acc: 0.7294 - val_loss: 0.1238 - val_acc: 0.7508
Epoch 8/30
25/25 [==============================] - 48s 2s/step - loss: 0.1276 - acc: 0.7370 - val_loss: 0.1173 - val_acc: 0.7538
Epoch 9/30
25/25 [==============================] - 48s 2s/step - loss: 0.1275 - acc: 0.7380 - val_loss: 0.1181 - val_acc: 0.7513
Epoch 10/30
25/25 [==============================] - 50s 2s/step - loss: 0.1260 - acc: 0.7414 - val_loss: 0.1177 - val_acc: 0.7537
Epoch 11/30
25/25 [==============================] - 48s 2s/step - loss: 0.1256 - acc: 0.7430 - val_loss: 0.1159 - val_acc: 0.7553
Epoch 12/30
25/25 [==============================] - 49s 2s/step - loss: 0.1245 - acc: 0.7453 - val_loss: 0.1185 - val_acc: 0.7578
Epoch 13/30
25/25 [==============================] - 49s 2s/step - loss: 0.1232 - acc: 0.7491 - val_loss: 0.1183 - val_acc: 0.7582
Epoch 14/30
25/25 [==============================] - 48s 2s/step - loss: 0.1224 - acc: 0.7501 - val_loss: 0.1219 - val_acc: 0.7423
Epoch 15/30
25/25 [==============================] - 48s 2s/step - loss: 0.1222 - acc: 0.7510 - val_loss: 0.1162 - val_acc: 0.7582
Epoch 16/30
25/25 [==============================] - 49s 2s/step - loss: 0.1218 - acc: 0.7487 - val_loss: 0.1165 - val_acc: 0.7587
Epoch 17/30
25/25 [==============================] - 48s 2s/step - loss: 0.1234 - acc: 0.7454 - val_loss: 0.1185 - val_acc: 0.7442
Epoch 18/30
25/25 [==============================] - 49s 2s/step - loss: 0.1208 - acc: 0.7539 - val_loss: 0.1159 - val_acc: 0.7572
Epoch 19/30
25/25 [==============================] - 49s 2s/step - loss: 0.1215 - acc: 0.7509 - val_loss: 0.1165 - val_acc: 0.7543
Epoch 20/30
25/25 [==============================] - 49s 2s/step - loss: 0.1216 - acc: 0.7507 - val_loss: 0.1171 - val_acc: 0.7590
Epoch 21/30
25/25 [==============================] - 48s 2s/step - loss: 0.1217 - acc: 0.7515 - val_loss: 0.1140 - val_acc: 0.7618
Epoch 22/30
25/25 [==============================] - 49s 2s/step - loss: 0.1208 - acc: 0.7496 - val_loss: 0.1170 - val_acc: 0.7565
Epoch 23/30
25/25 [==============================] - 48s 2s/step - loss: 0.1200 - acc: 0.7526 - val_loss: 0.1169 - val_acc: 0.7575
Epoch 24/30
25/25 [==============================] - 49s 2s/step - loss: 0.1209 - acc: 0.7518 - val_loss: 0.1105 - val_acc: 0.7705
Epoch 25/30
25/25 [==============================] - 48s 2s/step - loss: 0.1198 - acc: 0.7540 - val_loss: 0.1176 - val_acc: 0.7543
Epoch 26/30
25/25 [==============================] - 48s 2s/step - loss: 0.1206 - acc: 0.7528 - val_loss: 0.1127 - val_acc: 0.7608
Epoch 27/30
25/25 [==============================] - 48s 2s/step - loss: 0.1204 - acc: 0.7526 - val_loss: 0.1185 - val_acc: 0.7532

I have tried increasing the batch size, but I get the same result: after a certain number of epochs it stops improving. I have also tried adding dropout layers, with the same result.

Does anyone have any suggestions as to what might be going wrong here?

【Question comments】:

    Tags: python tensorflow keras conv-neural-network


    【Solution 1】:

    It's impossible to say for sure without knowing the kind of problem you are solving and the dataset involved. For example, you might have a noisy dataset or a hard problem where 75% is about as good as you can get (though I consider this unlikely). Another possibility is that 75% of your examples come from a single class, and your model is simply learning that pattern (i.e. always guessing that class).
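The single-class hypothesis above is easy to check before training. A minimal sketch, assuming one-hot labels of shape `(n_samples, 3)` as in the question; the `y_train` here is synthetic stand-in data, not the original dataset:

```python
import numpy as np

# Hypothetical stand-in labels: replace with your own y_train.
# Simulates 1000 one-hot samples with a skewed class distribution.
y_train = np.eye(3)[np.random.choice(3, 1000, p=[0.75, 0.15, 0.10])]

counts = y_train.sum(axis=0)        # samples per class
fractions = counts / counts.sum()   # class proportions
print(dict(enumerate(fractions.round(2))))

# If one fraction is around 0.75, a model that always predicts that
# class already scores ~75% accuracy, matching the plateau in the logs.
```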

    At a glance, I would try a different loss: mse on the output of a softmax can lead to vanishing gradients. If you are doing classification, I would start with (sparse_categorical_)crossentropy.
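A minimal sketch of that compile change (using the tf.keras API, with a toy one-layer stand-in for the real network, and assuming one-hot labels; use `sparse_categorical_crossentropy` for integer labels):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Toy stand-in for the real network; only the compile line matters here.
model = Sequential([Dense(3, activation="softmax", input_shape=(10,))])

# categorical_crossentropy pairs naturally with a softmax output,
# whereas mse gives weak gradients once the softmax saturates.
model.compile(loss="categorical_crossentropy",
              optimizer=Adam(learning_rate=1e-5),
              metrics=["accuracy"])
```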

    【Comments】:

      【Solution 2】:

      I agree with @DomJack: without knowing the input data, it is hard to point you in the right direction. That said, you could try adding dropout and max-pooling layers after the first, second and third conv layers. You could also experiment with several learning rates and other optimizers, such as Adagrad or momentum-based optimizers, and increase the number of filters (16, 32, 64, 128, 256).

      It would be helpful if you could create and share a Google Colab gist, or any other way to reproduce the problem. If your data is private, you could demonstrate the issue with a public dataset. Check out a few optimizers here and here. Hope this helps. Thanks!

      【Comments】:

      • You could also add batchnorm layers. You could also try transfer learning from a well-known NN architecture (depending on your problem and data), freezing its layers and adding a few dense layers plus a softmax layer. Thanks!
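A sketch of the transfer-learning idea from the comment above. MobileNetV2 is an arbitrary backbone choice for illustration, not one named in the original post; `weights=None` keeps the example offline-runnable, whereas in practice you would use `weights="imagenet"`:

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Pretrained backbone with its weights frozen, topped by a small
# dense head and a 3-way softmax matching the question's output.
base = MobileNetV2(input_shape=(112, 256, 3), include_top=False,
                   weights=None)  # use weights="imagenet" in practice
base.trainable = False            # freeze the backbone layers

x = GlobalAveragePooling2D()(base.output)
x = Dense(100, activation="elu")(x)
out = Dense(3, activation="softmax")(x)
model = Model(base.input, out)
```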