【Question Title】: TensorFlow Keras optimise prediction
【Posted】: 2019-02-26 03:37:30
【Question Description】:

I am using TensorFlow and Keras to predict handwritten digits, training on the MNIST dataset. After training, accuracy is around 98.8%. But at test time it sometimes confuses 4 with 9, and 7 with 3, even though I already optimise the input images with OpenCV (noise removal, rescaling, thresholding, and so on).
What should I do next to improve prediction accuracy?

My plan is to add more samples and to resize the sample images from 28x28 to 56x56.
Will this affect accuracy?
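As a sanity check on the resizing idea: enlarging an existing image by interpolation does not add information, since every new pixel is derived from pixels that were already there. A minimal nearest-neighbour upscale (a hypothetical helper, not part of the question's pipeline) makes this concrete:

```python
def upscale_nearest(img, factor=2):
    """Nearest-neighbour upscale of a 2-D image given as a list of lists.

    Every output pixel is a copy of an input pixel, so a 56x56 upscale
    of a 28x28 digit contains exactly the same information.
    """
    return [
        [img[r // factor][c // factor] for c in range(len(img[0]) * factor)]
        for r in range(len(img) * factor)
    ]

img = [[1, 2],
       [3, 4]]
big = upscale_nearest(img, 2)
# each source pixel becomes a 2x2 block
assert big == [[1, 1, 2, 2],
               [1, 1, 2, 2],
               [3, 3, 4, 4],
               [3, 3, 4, 4]]
```

Collecting more genuinely new samples, as the question suggests, is a different matter and can help.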

Here is my training model:

epochs=15, batch_size=64

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

input_shape = (28, 28, 1)  # MNIST digits: 28x28 grayscale

def build_model():
    model = Sequential()
    # add convolutional layers
    model.add(Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Flatten())
    # densely connected layers
    model.add(Dense(128, activation='relu'))

    # output layer
    model.add(Dense(10, activation='softmax'))

    # compile with adam optimizer & categorical_crossentropy loss function
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

    return model
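For reference, the shapes flowing through this stack can be traced by hand: with padding='same' each 3x3 convolution preserves the spatial size, and each 2x2 max-pool halves it (with flooring), so 28x28 shrinks to 14, 7, then 3, and Flatten sees 3*3*64 = 576 features. A small sketch of that arithmetic (pure Python, just mirroring the layer definitions above):

```python
def trace_shapes(size=28, pools=3, last_filters=64):
    """Trace the spatial size through the conv/pool stack above.

    padding='same' conv keeps the size; each 2x2 max-pool halves it
    with floor division.
    """
    sizes = [size]
    for _ in range(pools):
        size //= 2          # MaxPooling2D(pool_size=(2, 2))
        sizes.append(size)
    flat = sizes[-1] * sizes[-1] * last_filters   # Flatten() length
    return sizes, flat

sizes, flat = trace_shapes()
assert sizes == [28, 14, 7, 3]
assert flat == 576
```

So with 56x56 inputs the same stack would end at 7x7 feature maps, quadrupling the Flatten size and the first Dense layer's parameter count.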

【Comments】:

  • How much data did you train on? How much for validation? Have you done cross-validation?
  • The MNIST dataset has 60,000 training samples and 10,000 for validation. Not yet; I will try k-fold, thanks for the suggestion.
  • I would also suggest augmentation with Keras' ImageDataGenerator. It will help the model generalize better.
  • I have already used it: train_datagen = ImageDataGenerator(rotation_range=5, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.1, zoom_range=0.2, horizontal_flip=False, fill_mode='nearest'). The parameter values are small because the images are only 28x28 and larger transforms would mangle the digits.
  • After cross-validation and some other tweaks, accuracy on real handwriting did improve, thanks @Sreeram TP

Tags: tensorflow keras


【Solution 1】:

You could try adding regularization:

def conv2d_bn(x,
              units,
              kernel_size=(3, 3),
              activation='relu',
              dropout=.5):
    y = Dropout(dropout)(x)
    y = Conv2D(units, kernel_size=kernel_size, use_bias=False)(y)
    y = BatchNormalization()(y)
    y = Activation(activation)(y)

    return y

def build_model(..., dropout=.5):
    x = Input(shape=[...])
    y = conv2d_bn(x, 32)
    y = MaxPooling2D()(y)
    ...
    y = Dropout(dropout)(y)
    y = Dense(10, activation='softmax')(y)

    model = Model(x, y)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    return model

You can also adjust the class weights to force the model to pay more attention to classes 3, 4, 7 and 9 during training:

model.fit(..., class_weight={0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 1, 6: 1, 7: 2, 8: 1, 9: 2})
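Rather than hand-picking those weights, one option (a sketch of my own, not from the answer above) is to derive them from per-class accuracy on a validation set, so frequently confused digits like 4/9 and 3/7 automatically get proportionally more weight:

```python
def class_weights_from_errors(per_class_accuracy, floor=1.0):
    """Weight each class by its validation error relative to the best class.

    Classes the model already gets right keep weight ~1; classes with
    higher error rates get proportionally larger weights.
    """
    errors = {c: 1.0 - acc for c, acc in per_class_accuracy.items()}
    positive = [e for e in errors.values() if e > 0]
    min_err = min(positive) if positive else 1.0
    return {c: max(floor, e / min_err) if e > 0 else floor
            for c, e in errors.items()}

# hypothetical per-class validation accuracies
acc = {0: 0.99, 1: 0.99, 2: 0.99, 3: 0.97, 4: 0.97,
       5: 0.99, 6: 0.99, 7: 0.97, 8: 0.99, 9: 0.97}
w = class_weights_from_errors(acc)
# here the weights for 3, 4, 7 and 9 come out ~3x those of the other digits
```

The resulting dict can be passed straight to `class_weight=` in `model.fit`.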

If you have time to burn, you could also try a grid or random search over the model hyperparameters. Something along the lines of:

def build(conv_layers, dense_layers, dense_units, activation, dropout):
    y = x = Input(shape=[...])

    kernels = 32
    kernel_size = (2, 2)

    for i in range(conv_layers):
        y = conv2d_bn(y, kernels, kernel_size, activation, dropout)

        if i % 2 == 0:  # or 3 or 4.
            y = MaxPooling2D()(y)
            kernels *= 2
            kernel_size = tuple(k + 1 for k in kernel_size)

    y = GlobalAveragePooling2D()(y)

    for i in range(dense_layers):
        y = Dropout(dropout)(y)
        y = Dense(dense_units, activation=activation)(y)

    y = Dense(10, activation='softmax')(y)

    model = Model(x, y)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    return model


model = KerasClassifier(build,
                        epochs=epochs,
                        validation_split=validation_split,
                        verbose=0,
                        ...)
params = dict(conv_layers=[2, 3, 4],
              dense_layers=[0, 1],
              activation=['relu', 'selu'],
              dropout=[.2, .3, .5],
              callbacks=[callbacks.EarlyStopping(patience=10,
                                                 restore_best_weights=True)])

grid = GridSearchCV(model, params,
                    scoring='balanced_accuracy',
                    verbose=2,
                    n_jobs=1)

Now, combining hyperparameter search with NumpyArrayIterator is a bit tricky, because the latter assumes we have every training sample (and target) at hand before the training step. Still, it is doable:

g = ImageDataGenerator(...)
cv = StratifiedKFold(n_splits=3)
results = dict(params=[], valid_score=[])

for p in ParameterGrid(params):
    fold_scores = []

    for t, v in cv.split(train_data, train_labels):
        train = g.flow(train_data[t], train_labels[t], subset='training')
        nn_valid = g.flow(train_data[t], train_labels[t], subset='validation')
        # shuffle=False so the predictions line up with train_labels[v]
        fold_valid = g.flow(train_data[v], train_labels[v], shuffle=False)

        nn = build_model(**p)
        nn.fit_generator(train, validation_data=nn_valid, ...)

        probabilities = nn.predict_generator(fold_valid, steps=...)
        predictions = np.argmax(probabilities, axis=1)

        fold_scores += [metrics.accuracy_score(train_labels[v], predictions)]

    results['params'] += [p]
    results['valid_score'] += [fold_scores]

best_ix = np.argmax(np.mean(results['valid_score'], axis=1))
best_params = results['params'][best_ix]

nn = build_model(**best_params)
nn.fit_generator(...)

【Discussion】:
