【Posted】: 2018-07-13 16:12:17
【Problem Description】:
Over the past few months I've learned a lot about neural networks through Tensorflow and Keras, so I wanted to try building a model for the CIFAR10 dataset (code below).
However, during training the accuracy improves (from ~35% after 1 epoch to ~60-65% after 5 epochs), while val_acc stays the same or only increases slightly. Here is the printed output:
Epoch 1/5
50000/50000 [==============================] - 454s 9ms/step - loss: 1.7761 - acc: 0.3584 - val_loss: 8.6776 - val_acc: 0.4489
Epoch 2/5
50000/50000 [==============================] - 452s 9ms/step - loss: 1.3670 - acc: 0.5131 - val_loss: 8.9749 - val_acc: 0.4365
Epoch 3/5
50000/50000 [==============================] - 451s 9ms/step - loss: 1.2089 - acc: 0.5721 - val_loss: 7.7254 - val_acc: 0.5118
Epoch 4/5
50000/50000 [==============================] - 452s 9ms/step - loss: 1.1140 - acc: 0.6080 - val_loss: 7.9587 - val_acc: 0.4997
Epoch 5/5
50000/50000 [==============================] - 452s 9ms/step - loss: 1.0306 - acc: 0.6385 - val_loss: 7.4351 - val_acc: 0.5321
10000/10000 [==============================] - 27s 3ms/step
loss: 7.435152648162842
accuracy: 0.5321
I looked around on the internet, and my best guess is that my model is overfitting, so I tried removing some layers, adding more dropout layers, and reducing the number of filters, but none of these changes showed any improvement.
The strangest thing is that a while ago I built a very similar model based on some tutorials, and after 8 epochs it reached a final accuracy of 80%. (I've since lost that file, though.)
Here is the code for my model:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam

model = Sequential()
model.add(Conv2D(filters=256,
                 kernel_size=(3, 3),
                 activation='relu',
                 data_format='channels_last',
                 input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(filters=128,
                 kernel_size=(2, 2),
                 activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.compile(optimizer=Adam(),
              loss=categorical_crossentropy,
              metrics=['accuracy'])

model.fit(train_images, train_labels,
          batch_size=1000,
          epochs=5,
          verbose=1,
          validation_data=(test_images, test_labels))

loss, accuracy = model.evaluate(test_images, test_labels)
print('loss: ', loss, '\naccuracy: ', accuracy)
train_images and test_images are numpy arrays of shape (50000, 32, 32, 3) and (10000, 32, 32, 3), and train_labels and test_labels are numpy arrays of shape (50000, 10) and (10000, 10).
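For reference, a minimal sketch of how arrays with these shapes are typically prepared; this step isn't shown in the post, and it assumes the standard keras.datasets.cifar10 loader:

from keras.datasets import cifar10
from keras.utils import to_categorical

# Load CIFAR10: images are uint8 RGB in [0, 255], labels are integer class ids.
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()

# Scale pixels to [0, 1] (as described later in the post) and one-hot encode
# the 10 classes to get the (50000, 10) / (10000, 10) label shapes.
train_images = train_images.astype('float32') / 255.0
test_images = test_images.astype('float32') / 255.0
train_labels = to_categorical(train_labels, 10)
test_labels = to_categorical(test_labels, 10)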
My question: what is causing this, and what can I do about it?
Edit, after Maxim's answer:
I changed the model to this:
model = Sequential()
model.add(Conv2D(filters=64,
                 kernel_size=(3, 3),
                 activation='relu',
                 kernel_initializer='he_normal',  # better for relu-based networks
                 input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(filters=256,
                 kernel_size=(3, 3),
                 activation='relu',
                 kernel_initializer='he_normal'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))
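The edit doesn't show the compile and fit calls; presumably they stayed the same apart from training for 10 epochs (the batch size of 1000 here is an assumption, since a smaller batch is only suggested later in the comments):

model.compile(optimizer=Adam(),
              loss=categorical_crossentropy,
              metrics=['accuracy'])

model.fit(train_images, train_labels,
          batch_size=1000,   # assumed unchanged at this point
          epochs=10,         # the log below shows 10 epochs
          verbose=1,
          validation_data=(test_images, test_labels))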
Now the output looks like this:
Epoch 1/10
50000/50000 [==============================] - 326s 7ms/step - loss: 1.4916 - acc: 0.4809 - val_loss: 7.7175 - val_acc: 0.5134
Epoch 2/10
50000/50000 [==============================] - 338s 7ms/step - loss: 1.0622 - acc: 0.6265 - val_loss: 6.9945 - val_acc: 0.5588
Epoch 3/10
50000/50000 [==============================] - 326s 7ms/step - loss: 0.8957 - acc: 0.6892 - val_loss: 6.6270 - val_acc: 0.5833
Epoch 4/10
50000/50000 [==============================] - 324s 6ms/step - loss: 0.7813 - acc: 0.7271 - val_loss: 5.5790 - val_acc: 0.6474
Epoch 5/10
50000/50000 [==============================] - 327s 7ms/step - loss: 0.6690 - acc: 0.7668 - val_loss: 5.7479 - val_acc: 0.6358
Epoch 6/10
50000/50000 [==============================] - 320s 6ms/step - loss: 0.5671 - acc: 0.8031 - val_loss: 5.8720 - val_acc: 0.6302
Epoch 7/10
50000/50000 [==============================] - 328s 7ms/step - loss: 0.4865 - acc: 0.8319 - val_loss: 5.6320 - val_acc: 0.6451
Epoch 8/10
50000/50000 [==============================] - 320s 6ms/step - loss: 0.3995 - acc: 0.8611 - val_loss: 5.3879 - val_acc: 0.6615
Epoch 9/10
50000/50000 [==============================] - 320s 6ms/step - loss: 0.3337 - acc: 0.8837 - val_loss: 5.6874 - val_acc: 0.6432
Epoch 10/10
50000/50000 [==============================] - 320s 6ms/step - loss: 0.2806 - acc: 0.9033 - val_loss: 5.7424 - val_acc: 0.6399
10000/10000 [==============================] - 19s 2ms/step
loss: 5.74234927444458
accuracy: 0.6399
It seems I'm overfitting again, even though I changed the model with the help I've received so far... Any explanations or tips?
The input images are (32, 32, 3) numpy arrays, normalized to (0, 1).
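For what it's worth, "adding more dropout layers" (as tried earlier) usually means inserting Dropout between the convolutional blocks as well; a minimal sketch based on the edited model above and using the same imports, not the exact variant that was tried:

model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu',
                 kernel_initializer='he_normal', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.25))   # extra dropout after the first conv block
model.add(Conv2D(filters=256, kernel_size=(3, 3), activation='relu',
                 kernel_initializer='he_normal'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.25))   # extra dropout after the second conv block
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))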
【Discussion】:
-
Reduce the batch size to around 128 or 64. Also reduce the size of the dense layer, maybe to 512 or 384.
-
Thanks for your answer. I thought batch size only affected memory usage, but it seems to have a bigger effect on training than I expected. I'm now retraining the model with a smaller batch size, though it may take a long time. I'll keep you posted!
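A sketch of the retraining implied by this exchange, reusing the model defined above with only the batch size changed per the suggestion:

# Retrain with a much smaller batch size, per the suggestion above.
model.fit(train_images, train_labels,
          batch_size=64,     # was 1000; smaller batches mean noisier but far
          epochs=10,         # more frequent weight updates per epoch
          verbose=1,
          validation_data=(test_images, test_labels))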
Tags: python numpy tensorflow keras conv-neural-network