【Question Title】: Keras CNN predicts the same class even after augmentation
【Posted】: 2020-10-24 07:13:47
【Question Description】:

I am trying to build a CNN that classifies 3D brain images. However, when I run it, the CNN always predicts the same class, and I am not sure what else I can try to prevent this. I have searched for this problem and tried many plausible solutions, but none of them have worked.

What I have tried so far:

  • Lowering the learning rate
  • Normalizing the data to [0, 1]
  • Changing the optimizer
  • Changing the activation of the last layer (softmax, sigmoid); I only use categorical_crossentropy
  • Adding/removing dropout layers
  • Switching to a simpler CNN model (did not help)
  • Balancing the dataset
  • Adding augmented data with a custom 3D imagedatagenerator()

Note that I am working with a total of 20 3D brain images (5 per class), and I cannot increase the sample size because there simply are not enough images. I recently tried data augmentation, but it does not seem to help.

Any help would be greatly appreciated!

import os
import csv
import tensorflow as tf  # 2.0
import nibabel as nib
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from keras.models import Model
from keras.layers import Conv3D, MaxPooling3D, Dense, Dropout, Activation, Flatten 
from keras.layers import Input, concatenate
from keras import optimizers
from keras.utils import to_categorical
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
from augmentedvolumetricimagegenerator.generator import customImageDataGenerator
from keras.callbacks import EarlyStopping


# Administrative items
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Where the file is located
path = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline2'
folder = os.listdir(path)

target_size = (96, 96, 96)


# creating x - converting images to array
def read_image(path, folder):
    mri = []
    for i in range(len(folder)):
        files = os.listdir(path + '\\' + folder[i])
        for j in range(len(files)):
            image = np.array(nib.load(path + '\\' + folder[i] + '\\' + files[j]).get_fdata())
            # np.resize fills the target shape by repeating the flattened
            # voxel data; it does not interpolate the volume
            image = np.resize(image, target_size)
            image = np.expand_dims(image, axis=3)
            mri.append(image)
    return mri

# creating y - one hot encoder
def create_y():
    excel_file = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline_label.xlsx'
    excel_read = pd.read_excel(excel_file)
    excel_array = np.array(excel_read['Label'])
    label = LabelEncoder().fit_transform(excel_array)
    label = label.reshape(len(label), 1)
    onehot = OneHotEncoder(sparse=False).fit_transform(label)
    return onehot

# Splitting image train/test
x = np.asarray(read_image(path, folder))
y = np.asarray(create_y())
test_size = .2
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=test_size)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)


batch_size = 4
num_classes = 4

# Five Conv3D blocks (two 3x3x3 convolutions each, then max pooling and
# dropout), followed by a dense classifier head with a softmax over 4 classes
inputs = Input((96, 96, 96, 1))
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(inputs)
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(conv1)
pool1 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv1)
drop1 = Dropout(0.5)(pool1)

conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(drop1)
conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(conv2)
pool2 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv2)
drop2 = Dropout(0.5)(pool2)

conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(drop2)
conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(conv3)
pool3 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv3)
drop3 = Dropout(0.5)(pool3)

conv4 = Conv3D(256, [3, 3, 3], padding='same', activation='relu')(drop3)
conv4 = Conv3D(256, [3, 3, 3], padding='same', activation='relu')(conv4)
pool4 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv4)
drop4 = Dropout(0.5)(pool4)

conv5 = Conv3D(256, [3, 3, 3], padding='same', activation='relu')(drop4)
conv5 = Conv3D(256, [3, 3, 3], padding='same', activation='relu')(conv5)
pool5 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv5)
drop5 = Dropout(0.5)(pool5)

flat1 = Flatten()(drop5)
dense1 = Dense(128, activation='relu')(flat1)
dense2 = Dense(64, activation='relu')(dense1)
dense3 = Dense(32, activation='relu')(dense2)
drop6 = Dropout(0.5)(dense3)
dense4 = Dense(num_classes, activation='softmax')(drop6)

model = Model(inputs=[inputs], outputs=[dense4])

opt = optimizers.Adam(lr=1e-8, beta_1=1e-3, beta_2=1e-4, decay=2e-5)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])


train_datagen = customImageDataGenerator(rescale=1./255,
                                         #width_shift_range=0.2,
                                         #height_shift_range=0.2,
                                         #rotation_range=15,
                                         #shear_range=0.2,
                                         #zoom_range=0.2,
                                         #brightness_range=[0.2, 1.0],
                                         data_format='channels_last',
                                         horizontal_flip=True)

test_datagen = customImageDataGenerator(rescale=1./255)


training_set = train_datagen.flow(x_train, y_train, batch_size=batch_size)

testing_set = test_datagen.flow(x_test, y_test, batch_size=batch_size)


callbacks = EarlyStopping(monitor='val_loss')

model.fit_generator(training_set,
                    steps_per_epoch = 20,
                    epochs = 30,
                    validation_steps = 5,
                    callbacks = [callbacks],
                    validation_data = testing_set)

#score = model.evaluate(x_test, y_test, batch_size=batch_size)
#print(score)


y_pred = model.predict(x_test, batch_size=batch_size)
y_test = np.argmax(y_test, axis=1)
y_pred = np.argmax(y_pred, axis=1)
confusion = confusion_matrix(y_test, y_pred)
sns.heatmap(confusion, annot=True)
plt.show()

【Question Comments】:

    Tags: python keras classification conv-neural-network


    【Solution 1】:

    I am not sure exactly what is going on, but I have a few remarks and suggestions.

    First, look at the learning curves to see whether the model is actually fitting anything; a sketch of how to plot them follows.
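    A minimal sketch of plotting the curves (assuming the fit_generator call in the question is changed to keep its return value; the History object exposes the per-epoch loss and val_loss):

import matplotlib.pyplot as plt

history = model.fit_generator(training_set,
                              steps_per_epoch=20,
                              epochs=30,
                              validation_steps=5,
                              callbacks=[callbacks],
                              validation_data=testing_set)

# Training vs. validation loss per epoch: flat curves mean the model is not
# learning at all; a widening gap means it is overfitting the tiny training set
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()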

    Second, you hold out 0.2 of a dataset of only 20 images. If the images happen to be ordered by label, the test split could end up containing a single class, and you would only ever be testing on that label. That can be a problem unless the split is shuffled and stratified; see the sketch below.
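    A minimal sketch of a stratified split (reusing the x and y arrays built in the question; stratify keeps every class represented in both splits):

import numpy as np
from sklearn.model_selection import train_test_split

# Stratify on the integer class labels recovered from the one-hot vectors;
# with 20 images, 4 classes, and test_size=0.2 this puts exactly one image
# of each class into the test set
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, stratify=np.argmax(y, axis=1), random_state=42)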

    Third, for this little data it looks like you have a lot of dense parameters. The usual approach is to start small and increase the number of parameters gradually; watching the learning curves gives you hints about when to grow the model. The sketch below shows one way to shrink the head.
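    To see where the parameters live, model.summary() prints the per-layer counts; the Flatten -> Dense transition usually dominates. A hypothetical smaller variant (illustrative only, reusing the layers defined in the question, not the asker's architecture) could truncate the network early and replace the dense stack with global pooling:

from keras.layers import GlobalAveragePooling3D

model.summary()  # inspect per-layer parameter counts

# Hypothetical smaller head: stop after the third conv block and use global
# average pooling instead of Flatten + three Dense layers
gap = GlobalAveragePooling3D()(drop3)
out = Dense(num_classes, activation='softmax')(gap)
small_model = Model(inputs=[inputs], outputs=[out])
small_model.compile(loss='categorical_crossentropy',
                    optimizer='adam', metrics=['accuracy'])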

    Finally, and unfortunately, machine learning is not magic: you cannot expect good results with this little data.

    Alexis

    【Comments】:
