【Question title】: Stagnant validation accuracy during training of VGG net with 10,000 images
【Posted】: 2018-02-05 09:44:46
【Question description】:

I have 10,000 images: 5,000 diseased medical images and 5,000 healthy ones. I use VGG16 and modify the last layers as follows:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 256)               6422784   
_________________________________________________________________
fc2 (Dense)                  (None, 128)               32896     
_________________________________________________________________
output (Dense)               (None, 2)                 258       
=================================================================
Total params: 21,170,626
Trainable params: 6,455,938
Non-trainable params: 14,714,688

My code is as follows:

import numpy as np
import os
import time
from vgg16 import VGG16
from keras.preprocessing import image
from imagenet_utils import preprocess_input, decode_predictions
from keras.layers import Dense, Activation, Flatten
from keras.layers import merge, Input
from keras.models import Model
from keras.utils import np_utils
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn

# Loading the training data
PATH = '/mount'
# Define data path
data_path = PATH 
data_dir_list = os.listdir(data_path)

img_data_list=[]
y = 0  # running count of loaded images
for dataset in data_dir_list:
    img_list=os.listdir(data_path+'/'+ dataset)
    print ('Loaded the images of dataset-'+'{}\n'.format(dataset))
    for img in img_list:
        img_path = data_path + '/'+ dataset + '/'+ img 
        img = image.load_img(img_path, target_size=(224, 224))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        x = x/255

        y=y+1
        print('Input image shape:', x.shape)
        print(y)
        img_data_list.append(x)
from keras.optimizers import SGD
sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)

img_data = np.array(img_data_list)
#img_data = img_data.astype('float32')
print (img_data.shape)
img_data=np.rollaxis(img_data,1,0)
print (img_data.shape)
img_data=img_data[0]
print (img_data.shape)

# Define the number of classes
num_classes = 2
num_of_samples = img_data.shape[0]
labels = np.ones((num_of_samples,),dtype='int64')

labels[0:5000] = 0  # first 5000 images are class 0
labels[5000:] = 1   # remaining 5000 images are class 1

names = ['YES','NO']

# convert class labels to one-hot encoding
Y = np_utils.to_categorical(labels, num_classes)

#Shuffle the dataset
x,y = shuffle(img_data,Y, random_state=2)
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=2)

image_input = Input(shape=(224, 224, 3))

model = VGG16(input_tensor=image_input, include_top=True,weights='imagenet')

model.summary()

last_layer = model.get_layer('block5_pool').output
x= Flatten(name='flatten')(last_layer)
x = Dense(256, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
out = Dense(num_classes, activation='softmax', name='output')(x)
custom_vgg_model2 = Model(image_input, out)
custom_vgg_model2.summary()

# freeze all the layers except the dense layers
for layer in custom_vgg_model2.layers[:-3]:
    layer.trainable = False

custom_vgg_model2.summary()

custom_vgg_model2.compile(loss='categorical_crossentropy',optimizer=sgd,metrics=['accuracy'])

t=time.time()
#   t = now()
hist = custom_vgg_model2.fit(X_train, y_train, batch_size=128, epochs=50, verbose=1, validation_data=(X_test, y_test))
print('Training time: %s' % (time.time() - t))
(loss, accuracy) = custom_vgg_model2.evaluate(X_test, y_test, batch_size=10, verbose=1)

print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss,accuracy * 100))
model.save("vgg_10000.h5")

Here are the results for the first 5 and the last few epochs:

Epoch 1/50
8000/8000 [==============================] - 154s - loss: 0.6960 - acc: 0.5354 - val_loss: 0.6777 - val_acc: 0.5745
Epoch 2/50
8000/8000 [==============================] - 134s - loss: 0.6684 - acc: 0.5899 - val_loss: 0.6866 - val_acc: 0.5490
Epoch 3/50
8000/8000 [==============================] - 134s - loss: 0.6608 - acc: 0.6040 - val_loss: 0.6625 - val_acc: 0.5925
Epoch 4/50
8000/8000 [==============================] - 134s - loss: 0.6518 - acc: 0.6115 - val_loss: 0.6668 - val_acc: 0.5810
Epoch 5/50
8000/8000 [==============================] - 134s - loss: 0.6440 - acc: 0.6280 - val_loss: 0.6990 - val_acc: 0.5580

And the last few:

Epoch 25/50
8000/8000 [==============================] - 134s - loss: 0.5944 - acc: 0.6720 - val_loss: 0.6271 - val_acc: 0.6485
Epoch 26/50
8000/8000 [==============================] - 134s - loss: 0.5989 - acc: 0.6699 - val_loss: 0.6483 - val_acc: 0.6135
Epoch 27/50
8000/8000 [==============================] - 134s - loss: 0.5950 - acc: 0.6789 - val_loss: 0.7130 - val_acc: 0.5785
Epoch 28/50
8000/8000 [==============================] - 134s - loss: 0.5853 - acc: 0.6838 - val_loss: 0.6263 - val_acc: 0.6395

The results are not great. I tried tuning the last two dense layers (128 and 128 nodes), using the Adam optimizer, and so on, but the results are still not convincing. Any help is much appreciated.

【Question comments】:

  • @ldavid can you take a look at this?

Tags: python-3.x tensorflow machine-learning deep-learning keras


【Solution 1】:

You can try the following:

  • Perform a stratified train_test_split:

    train_test_split(x, y, stratify=y, test_size=0.2, random_state=2)
    
  • Inspect your data and check whether there are outlier images.

  • Use the Adam optimizer: from keras.optimizers import Adam instead of SGD
  • Try other seeds where applicable, i.e. use a different seed instead of random_state=2:

    X_train, X_test, y_train, y_test = train_test_split(
                                   x, y, test_size=0.2, random_state=382938)
    
  • Try include_top=False:

    model = VGG16(input_tensor=image_input, include_top=False,weights='imagenet')
    
  • Use (train, validation, test) sets, or (cross-validation, holdout) sets, to get more reliable performance estimates.
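The stratified-split suggestion above can be sketched end to end. This is a minimal sketch using synthetic stand-ins for the question's balanced 5,000/5,000 dataset (the real `X` would be the 224x224x3 image array); `stratify=labels` guarantees both splits keep the 50/50 class balance:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the question's data: 10000 samples, balanced classes
rng = np.random.RandomState(2)
X = rng.rand(10000, 8)  # stand-in features (real data: 224x224x3 images)
labels = np.array([0] * 5000 + [1] * 5000)

# stratify=labels preserves the 50/50 class ratio in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, stratify=labels, random_state=2)

print(np.bincount(y_train))  # -> [4000 4000]
print(np.bincount(y_test))   # -> [1000 1000]
```

Without `stratify`, a random split can leave the two classes slightly unbalanced between train and test, which adds noise to the validation accuracy.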

【Discussion】:

  • What is wrong with the dense-layer configuration: is the number of parameters too low or too high? And should I fine-tune only the dense layers, or should I also try fine-tuning the last convolutional block? @MedAli
  • @sachsom Yes, you should definitely experiment with the number of layers you freeze. The convolutional layers create the filters that help distinguish between your two classes; if you freeze all of them, you are effectively assuming the same ImageNet convolutional filters work for your problem.
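The advice above about not freezing everything can be expressed as a name-based rule over the layer names in the model summary. `should_train` is a hypothetical helper (not a Keras API); the loop applying it to `custom_vgg_model2.layers` is shown in a comment rather than executed here:

```python
def should_train(layer_name, unfrozen_prefixes=("block5", "flatten", "fc", "output")):
    """Return True if a layer should stay trainable.

    Keeps the last convolutional block (block5) and the new dense head
    trainable, so the top filters can adapt to the medical images instead
    of staying fixed at their ImageNet values.
    """
    return layer_name.startswith(unfrozen_prefixes)

# Applied to the Keras model it would look like (not executed here):
#   for layer in custom_vgg_model2.layers:
#       layer.trainable = should_train(layer.name)

# Layer names taken from the summary in the question:
names = ["input_1", "block1_conv1", "block4_pool",
         "block5_conv1", "block5_conv2", "block5_conv3", "block5_pool",
         "flatten", "fc1", "fc2", "output"]
print([n for n in names if should_train(n)])  # block5, flatten, fc and output layers stay trainable
```

Remember to recompile the model after changing `trainable` flags, and consider a smaller learning rate when unfreezing convolutional layers so the pretrained filters are not destroyed early in training.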