【Question Title】: Binary Classification using TensorFlow
【Posted】: 2017-07-10 01:09:22
【Question】:

I'm trying to perform binary classification with TensorFlow for a cs20si assignment. The task itself is very simple, but I'm writing the TensorFlow code from scratch to learn the finer details, such as setting up a data pipeline and maintaining checkpoints. I have code for training and testing, yet I can't get above 12% accuracy, while sklearn reaches 78% with the same model, so the problem must be in my TensorFlow code. The data is taken from here, and the Jupyter notebook I'm using can be seen here. I've posted the variable setup, training, and testing code below. I can't figure out why the loss always stays in the 4000s.
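For reference, the sklearn comparison mentioned above looks roughly like this. This is only an illustrative sketch: synthetic data stands in for the real dataset (which has 9 features), and the variable names here are made up, not from the notebook.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the real 9-feature dataset
rng = np.random.RandomState(0)
X_syn = rng.normal(size=(200, 9)) * 100          # deliberately unscaled features
y_syn = (X_syn[:, 0] + X_syn[:, 1] > 0).astype(int)

# sklearn's pipeline standardizes the inputs and converges with defaults
X_std = StandardScaler().fit_transform(X_syn)
clf = LogisticRegression().fit(X_std, y_syn)
print("train accuracy:", clf.score(X_std, y_syn))
```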

Variable setup

# Step 2: create placeholders for input X (Features) and label Y (binary result)
X = tf.placeholder(tf.float32, shape=[None, 9], name="X")
Y = tf.placeholder(tf.float32, shape=[None,2], name="Y")

# Step 3: create weight and bias; w from a truncated normal, b initialized to 0
w = tf.Variable(tf.truncated_normal([9, 2]), name="weights")
b = tf.Variable(tf.zeros([1,2]), name="bias")

# Step 4: logistic multinomial regression / softmax
score = tf.matmul(X, w) + b

# Step 5: define loss function
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=score, labels=Y, name="entropy")

regularizer = tf.nn.l2_loss(w)
loss = tf.reduce_mean(entropy + BETA * regularizer, name="loss")

# Step 6: using gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE).minimize(loss)

# Step 7: Prediction
Y_predicted = tf.nn.softmax(score)  # reuse the logits defined above
correct_prediction = tf.equal(tf.argmax(Y_predicted,1), tf.argmax(Y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
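As an aside on the loss magnitude: with unscaled inputs, raw feature values in the hundreds produce logits of the same order, and softmax cross-entropy on a large logit of the wrong sign is roughly the size of the logit gap itself, which would explain a loss stuck in the thousands. A small NumPy illustration (the numbers are hypothetical, not from the dataset):

```python
import numpy as np

def softmax_xent(logits, labels):
    # Numerically stable softmax cross-entropy, one value per row
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1)

labels = np.array([[1.0, 0.0]])            # true class is class 0
small = np.array([[-0.5, 0.5]])            # modest logits from scaled features
large = np.array([[-500.0, 500.0]])        # logits from raw, unscaled features
print(softmax_xent(small, labels))         # small loss, ~1.3
print(softmax_xent(large, labels))         # loss ~ the logit gap, ~1000
```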

Training

import glob, os
for f in glob.glob("/tmp/model.ckpt*"):
    os.remove(f)

saver = tf.train.Saver([w,b])
EPOCHS = 1000

with tf.Session() as sess:
    # Step 7: initialize the necessary variables, in this case, w and b
    sess.run(tf.global_variables_initializer())

    # Step 8: train the model
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    n_batches = int(n_train_data/BATCH_SIZE)
    for epoch in tqdm(range(EPOCHS)): # run epochs
        avg_loss = 0

        for _ in range(n_batches):
            x_batch, y_batch = sess.run([data1_feature_batch, data1_label_batch])
            # Session runs train_op to minimize loss
            feed_dict={X: x_batch, Y:y_batch}
            _, loss_batch = sess.run([optimizer, loss], feed_dict=feed_dict)
            avg_loss += loss_batch/n_batches

        if (epoch+1) % 100 == 0:
            print "avg_loss",avg_loss

    coord.request_stop()
    coord.join(threads)

    # Step 9: saving the values of w and b
    print "weights",w.eval()
    print "bias",b.eval()

    # Add ops to save and restore all the variables.
    save_path = saver.save(sess, "/tmp/logit_reg_tf_model.ckpt")

Testing

# Step 10: predict
# test the model

saver = tf.train.import_meta_graph("/tmp/logit_reg_tf_model.ckpt.meta")
with tf.Session() as sess:
    # initialize the necessary variables, in this case, w and b
    sess.run(tf.global_variables_initializer())
    # Add ops to save and restore all the variables.
    saver.restore(sess, "/tmp/logit_reg_tf_model.ckpt")
    print "weights",w.eval()
    print "bias",b.eval()

    total_correct_preds = 0
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    try:
        for i in range(20):
            x_batch, y_batch = sess.run([test_data1_feature_batch, test_data1_label_batch])
            total_correct_preds += sess.run(accuracy, feed_dict={X: x_batch, Y:y_batch})

    except tf.errors.OutOfRangeError:
        print('Done testing ...')
    coord.request_stop()
    coord.join(threads)

    print 'Accuracy {0}'.format(total_correct_preds/n_test_data)

【Question comments】:

    Tags: python tensorflow logistic-regression


    【Solution 1】:

    Standardize your inputs; you can use sklearn's StandardScaler(). The learning rate is large, lower it to 0.01 and try again. The weight regularization is also large; remove it for now and add it back later if needed.
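    A sketch of the preprocessing this answer suggests. `X_train` here is a made-up placeholder for the 9-feature training matrix from the question; the scaler fitted on the training data would also be applied to each test batch before feeding it to the graph.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Placeholder for the question's 9-feature training matrix
rng = np.random.RandomState(42)
X_train = rng.normal(loc=50.0, scale=20.0, size=(100, 9))

scaler = StandardScaler().fit(X_train)   # fit on training data only
X_scaled = scaler.transform(X_train)     # zero mean, unit variance per column

print(X_scaled.mean(axis=0).round(6))    # ~0 for every feature
print(X_scaled.std(axis=0).round(6))     # ~1 for every feature

LEARNING_RATE = 0.01                     # the suggested lower learning rate
```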

    【Discussion】:
