【Title】: Suspiciously high accuracy for binary classification problem
【Posted】: 2019-10-15 18:58:52
【Question】:

Based on the layer function

def neuron_layer(X, n_neurons, name, activation_fn=None):
    with tf.name_scope(name):
        n_inputs = int(X.get_shape()[1])
        stddev = 2 / np.sqrt(n_inputs)
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
        W = tf.Variable(init, name="kernel")
        b = tf.Variable(tf.zeros([n_neurons]), name="bias")
        Z = tf.matmul(X, W) + b
        if activation_fn is not None:
            return activation_fn(Z)
        else:
            return Z

I built the following network for a binary classification problem:

n_hidden1 = 100
n_hidden2 = 120
n_outputs = 1 # single value prediction
n_inputs = X_test.shape[1]

reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.float32, shape=(None), name="y")

layer1 = neuron_layer(X, n_hidden1, "layer1", activation_fn=tf.nn.relu)
layer2 = neuron_layer(layer1, n_hidden2, "layer2", activation_fn=tf.nn.relu)
prediction = neuron_layer(layer2, n_outputs, "output",activation_fn=tf.nn.sigmoid)
cost = tf.losses.log_loss(y,prediction)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init = tf.global_variables_initializer()

The training is the usual routine:

learning_rate = 0.01
n_epochs = 20
batch_size = 60
num_rec = X_train.shape[0]
n_batches = int(np.ceil(num_rec / batch_size))
acc_test = 0. #  assign the result of accuracy testing to this variable

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        for batch_index in range(n_batches):
            X_batch,y_batch = random_batch(X_train,Y_train,batch_size)
            _,opt = sess.run([optimizer,cost], feed_dict={X: X_batch, y: y_batch})
            loss, acc = sess.run([cost, accuracy], feed_dict={X: X_batch,y: y_batch})
        print("epoch " + str(epoch) + ", Loss= " + \
                      "{:.6f}".format(loss) + ", Training Accuracy= " + \
                      "{:.5f}".format(acc))
        print("Optimization Finished!")
    _, acc_test = sess.run([cost, accuracy], feed_dict={X:X_test,y:Y_test})

which produces the following output:

epoch 0, Loss= -6.756775, Training Accuracy= 1.00000
Optimization Finished!
[. . .]
epoch 19, Loss= -6.769919, Training Accuracy= 1.00000
Optimization Finished!

The accuracy on the test set, acc_test, is also 1.0.

The batches are generated by

def random_batch(X_train, y_train, batch_size):
    np.random.seed(42)
    rnd_indices = np.random.randint(0, len(X_train), batch_size)
    X_batch = X_train[rnd_indices]
    y_batch = y_train[rnd_indices]
    return X_batch, y_batch

The input shapes are

print(X_batch.shape,y_batch.shape,X_test.shape,Y_test.shape) 
>(60, 3) (60, 1) (2500, 3) (2500, 1)

Obviously, these training and test accuracies cannot be correct. Where might the problem lie — in the network, the training, or the evaluation procedure?
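For reference, the accuracy computation above can be checked with plain NumPy. Since both `prediction` and `y` have shape `(batch, 1)`, `tf.argmax(..., 1)` runs over a single-column axis, so it always returns index 0. A quick sketch with hypothetical values:

```python
import numpy as np

# Shapes match the question: predictions and labels are (batch, 1).
pred = np.array([[0.9], [0.1], [0.6]])   # hypothetical sigmoid outputs
y    = np.array([[1.0], [0.0], [0.0]])   # hypothetical labels

# argmax along axis 1 of a single-column array is always index 0,
# so the equality check compares 0 == 0 for every row.
print(np.argmax(pred, axis=1))  # [0 0 0]
print(np.argmax(y, axis=1))     # [0 0 0]

accuracy = np.mean(np.argmax(pred, axis=1) == np.argmax(y, axis=1))
print(accuracy)  # 1.0, regardless of the actual labels
```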

【Comments】:

  • Are you sure X_train and X_test are disjoint?
  • Yes, they are disjoint. I can't post the full data-generation process here, though. Since the training accuracy is 1 from the very first epoch, the problem must be somewhere else.
  • What are the shapes of your inputs?
  • print(X_batch.shape,y_batch.shape,X_test.shape,Y_test.shape) -> (60, 3) (60, 1) (2500, 3) (2500, 1)
  • When you set np.random.seed(42), you draw the same batch every time.
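The last comment can be verified with a minimal sketch: because `random_batch` reseeds NumPy on every call, every call returns the same indices (the helper below is a stand-in for illustration).

```python
import numpy as np

def pick_indices(n_rows, batch_size):
    # Reseeding on every call, exactly as random_batch does in the question.
    np.random.seed(42)
    return np.random.randint(0, n_rows, batch_size)

first = pick_indices(1000, 5)
second = pick_indices(1000, 5)
print(np.array_equal(first, second))  # True: every "random" batch is identical
```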

Tags: python tensorflow deep-learning


【Solution 1】:

The model is overfitting, which is why you get an unusually high accuracy from the initial epochs. To avoid overfitting, you can use regularization methods or enlarge the dataset through augmentation. Use ImageDataGenerator for augmentation; it will feed images to the model in batches. Try setting dropout to 0.2. Enable early stopping via a callback; it will terminate training when the model's performance starts to degrade. Try playing with the early-stopping patience.
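The suggestions above (dropout of 0.2 plus an early-stopping callback with a patience setting) could be sketched in Keras roughly as follows. The layer sizes mirror the question's network; the data here is a random stand-in, not the asker's dataset:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data with the question's shapes (3 features, binary labels).
X_train = np.random.rand(200, 3).astype("float32")
Y_train = np.random.randint(0, 2, (200, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dropout(0.2),   # dropout as suggested in the answer
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping: halt training once validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

model.fit(X_train, Y_train, epochs=20, batch_size=60,
          validation_split=0.2, callbacks=[early_stop], verbose=0)
```

Note that ImageDataGenerator targets image data; for the tabular (60, 3) inputs in the question, only the dropout and early-stopping parts apply directly.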

【Discussion】:
