【Problem Title】: Getting very poor accuracy on the stanford_dogs dataset
【Posted】: 2021-04-22 00:42:38
【Problem Description】:

I am trying to train a model on the stanford_dogs dataset to classify 120 dog breeds, but my code is behaving strangely.

I downloaded the image data from http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar

Then I ran the following code to split each breed's folder into train and validation folders:

import os

dataset_dict = {}
source_path = 'C:/Users/visha/Downloads/stanford_dogs/dataset'
dir_root = os.getcwd()
dataset_folders = [x for x in os.listdir(os.path.join(dir_root, source_path)) if os.path.isdir(os.path.join(dir_root, source_path, x))]
for category in dataset_folders:
    dataset_dict[category] = {
        'source_path': os.path.join(dir_root, source_path, category),
        'train_path': create_folder(new_path='C:/Users/visha/Downloads/stanford_dogs/train',
                                    folder_type='train',
                                    data_class=category),
        'validation_path': create_folder(new_path='C:/Users/visha/Downloads/stanford_dogs/validation',
                                         folder_type='validation',
                                         data_class=category)}


for key in dataset_dict:        
    print("Splitting Category {} ...".format(key))           
    split_data(source_path=dataset_dict[key]['source_path'],
               train_path=dataset_dict[key]['train_path'],
               validation_path=dataset_dict[key]['validation_path'],
               split_size=0.7)
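The helpers `create_folder` and `split_data` are not shown in the question. A minimal sketch of what they might look like, purely as an assumption about their behavior (the asker's real helpers may differ):

```python
import os
import random
import shutil

def create_folder(new_path, folder_type, data_class):
    # Hypothetical helper: create <new_path>/<data_class> and return it.
    # folder_type ('train'/'validation') is kept only to match the call sites.
    path = os.path.join(new_path, data_class)
    os.makedirs(path, exist_ok=True)
    return path

def split_data(source_path, train_path, validation_path, split_size=0.7):
    # Hypothetical helper: copy a random split_size fraction of the
    # non-empty files to train_path and the remainder to validation_path.
    files = [f for f in os.listdir(source_path)
             if os.path.getsize(os.path.join(source_path, f)) > 0]
    random.shuffle(files)
    cutoff = int(len(files) * split_size)
    for f in files[:cutoff]:
        shutil.copyfile(os.path.join(source_path, f), os.path.join(train_path, f))
    for f in files[cutoff:]:
        shutil.copyfile(os.path.join(source_path, f), os.path.join(validation_path, f))
```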

After some image augmentation, I feed the images through the network, with a sigmoid activation on the final layer and categorical_crossentropy as the loss.
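(Note the description says sigmoid, but the code below actually uses softmax, which is the right choice for 120 mutually exclusive classes with categorical_crossentropy: softmax normalizes the 120 logits into a probability distribution. A plain-Python check of that property, with my own `softmax` helper:)

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, exponentiate, then normalize.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([0.0] * 120)          # uniform logits over 120 classes
print(abs(sum(probs) - 1.0) < 1e-9)   # True: probabilities sum to 1
print(abs(probs[0] - 1 / 120) < 1e-9) # True: each class gets 1/120
```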

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop



model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(120, activation='softmax')
])

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

TRAINING_DIR = 'C:/Users/visha/Downloads/stanford_dogs/train'
train_datagen = ImageDataGenerator(rescale=1./255,rotation_range=40,width_shift_range=0.2,height_shift_range=0.2,
                                    shear_range=0.2,zoom_range=0.2,horizontal_flip=True,fill_mode='nearest')
      

train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
                                                    batch_size=10,
                                                    class_mode='categorical',
                                                    target_size=(150, 150))

VALIDATION_DIR = 'C:/Users/visha/Downloads/stanford_dogs/validation'
validation_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,width_shift_range=0.2, height_shift_range=0.2,
                                        shear_range=0.2,zoom_range=0.2,horizontal_flip=True,fill_mode='nearest')
      

validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                              batch_size=10,
                                                              class_mode='categorical',
                                                              target_size=(150, 150))

history = model.fit(train_generator,
                              epochs=10,
                              verbose=1,
                              validation_data=validation_generator)

But the code did not work as expected: after 10 epochs the val_accuracy is only about 4.756.

【Question Comments】:

    Tags: python python-3.x tensorflow neural-network tensorflow2.0


    【Solution 1】:

    For the validation data you should not do any image augmentation, only rescaling. Set shuffle=False in the validation flow_from_directory. Note that the Stanford Dogs dataset is quite difficult; to reach a reasonable accuracy you will need a more complex model. I suggest you look into transfer learning with a MobileNet model. The code below shows how to do that.
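    That first piece of advice (validation data rescaled only, never augmented, and not shuffled) might look like this sketch, reusing the question's path; it assumes TensorFlow is installed and the directory exists:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Validation data: rescale only -- no rotation/shift/zoom/flip augmentation --
# and shuffle=False so predictions stay aligned with the directory order.
validation_datagen = ImageDataGenerator(rescale=1./255)

validation_generator = validation_datagen.flow_from_directory(
    'C:/Users/visha/Downloads/stanford_dogs/validation',
    batch_size=10,
    class_mode='categorical',
    target_size=(150, 150),
    shuffle=False)
```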

    import tensorflow as tf
    from tensorflow.keras.layers import BatchNormalization, Dense, Dropout
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import Adamax

    base_model = tf.keras.applications.mobilenet.MobileNet(include_top=False,
                 input_shape=(150, 150, 3), pooling='max', weights='imagenet',
                 dropout=.4)
    x = base_model.output
    x = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
    x = Dense(1024, activation='relu')(x)
    x = Dropout(rate=.3, seed=123)(x)
    output = Dense(120, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=output)
    model.compile(Adamax(learning_rate=.001), loss='categorical_crossentropy',
                  metrics=['accuracy'])
    

    I forgot to mention that MobileNet was trained on images with pixel values in the range -1 to +1, so include this in your ImageDataGenerator:

    preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
    

    That scales the pixels for you, so you no longer need

    rescale=1./255
    

    One caveat: ImageDataGenerator's rescale argument is only a multiplicative factor, so an expression like rescale=1/157.5-1 just evaluates to a single negative constant (about -0.994) and will not shift pixels into the -1 to +1 range. Stick with the preprocessing_function above to get values between -1 and +1.
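    For reference, MobileNet's preprocess_input maps pixels from [0, 255] onto [-1, +1] via x/127.5 - 1. A plain-Python check of that arithmetic (the function name `scale_to_pm1` is mine, not a Keras API):

```python
def scale_to_pm1(x):
    # The per-pixel arithmetic MobileNet's preprocess_input applies:
    # divide by 127.5, then subtract 1, mapping [0, 255] onto [-1.0, 1.0].
    return x / 127.5 - 1.0

# Endpoints and midpoint of the 8-bit pixel range:
print(scale_to_pm1(0))      # -1.0
print(scale_to_pm1(127.5))  # 0.0
print(scale_to_pm1(255))    # 1.0
```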

    【Discussion】:
