[Posted]: 2018-09-04 19:52:44
[Question]:
I am trying to get familiar with recurrent networks in TensorFlow using a toy sequence-classification problem.
Data:
half_len = 500
pos_ex = [1, 2, 3, 4, 5] # Positive sequence.
neg_ex = [1, 2, 3, 4, 6] # Negative sequence.
num_input = len(pos_ex)
data = np.concatenate((np.stack([pos_ex]*half_len), np.stack([neg_ex]*half_len)), axis=0)
labels = np.asarray([0, 1] * half_len + [1, 0] * half_len).reshape((2 * half_len, -1))
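For reference, the construction above yields a (1000, 5) feature matrix and one-hot (1000, 2) labels; a minimal standalone sketch to verify the shapes:

```python
import numpy as np

half_len = 500
pos_ex = [1, 2, 3, 4, 5]  # Positive sequence.
neg_ex = [1, 2, 3, 4, 6]  # Negative sequence.

# Stack 500 copies of each sequence, then concatenate along the batch axis.
data = np.concatenate((np.stack([pos_ex] * half_len),
                       np.stack([neg_ex] * half_len)), axis=0)
# [0, 1] for the first half, [1, 0] for the second, reshaped to one-hot rows.
labels = np.asarray([0, 1] * half_len + [1, 0] * half_len).reshape((2 * half_len, -1))

print(data.shape)    # (1000, 5)
print(labels.shape)  # (1000, 2)
```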
Model:
_, x_width = data.shape
n_hidden = 64      # number of LSTM units (value assumed; not shown in the question)
num_classes = 2    # one-hot labels
X = tf.placeholder("float", [None, x_width])
Y = tf.placeholder("float", [None, num_classes])
# Output projection: maps the last hidden state (n_hidden) to class logits.
weights = tf.Variable(tf.random_normal([n_hidden, num_classes]))
bias = tf.Variable(tf.random_normal([num_classes]))

def lstm_model():
    from tensorflow.contrib import rnn
    # Split (batch, 5) into a list of 5 per-time-step tensors of shape (batch, 1).
    x = tf.split(X, num_input, 1)
    rnn_cell = rnn.BasicLSTMCell(n_hidden)
    outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)
    return tf.matmul(outputs[-1], weights) + bias
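The `tf.split(X, num_input, 1)` call turns the `(batch, 5)` input into a list of 5 per-time-step tensors of shape `(batch, 1)`, which is the input format `static_rnn` expects. A numpy sketch of the same split (using `np.split` in place of `tf.split`):

```python
import numpy as np

# A toy batch of two sequences, shape (2, 5).
batch = np.array([[1, 2, 3, 4, 5],
                  [1, 2, 3, 4, 6]], dtype=np.float32)

# Equivalent of tf.split(X, num_input, 1): one (batch, 1) array per time step.
steps = np.split(batch, 5, axis=1)

print(len(steps))         # 5
print(steps[0].shape)     # (2, 1)
print(steps[-1].ravel())  # last time step holds the only distinguishing feature
```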
Training:
logits = lstm_model()
prediction = tf.nn.softmax(logits)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
learning_rate = 0.001  # value assumed; not shown in the question
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Train...
My training accuracy fluctuates around 0.5, which puzzles me, since the problem is so simple:
Step 1, Minibatch Loss = 82.2726, Training Accuracy = 0.453
Step 25, Minibatch Loss = 6.7920, Training Accuracy = 0.547
Step 50, Minibatch Loss = 0.8528, Training Accuracy = 0.500
Step 75, Minibatch Loss = 0.6989, Training Accuracy = 0.500
Step 100, Minibatch Loss = 0.6929, Training Accuracy = 0.516
Changing the toy data to:
pos_ex = [1, 2, 3, 4, 5]
neg_ex = [1, 2, 3, 4, 100]
makes it converge to accuracy 1 immediately. Can anyone explain why this network fails on such a simple task? Thanks.
The code above is based on this tutorial.
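A quick numpy sketch quantifying the observation: the two classes differ only in the last feature, so the separation between the positive and negative sequences is 1 unit in the original data versus 95 units in the modified version, while the overall input scale stays the same:

```python
import numpy as np

pos = np.array([1, 2, 3, 4, 5], dtype=float)
neg_close = np.array([1, 2, 3, 4, 6], dtype=float)    # original negative class
neg_far = np.array([1, 2, 3, 4, 100], dtype=float)    # modified negative class

# Euclidean separation between the classes in each variant.
print(np.linalg.norm(neg_close - pos))  # 1.0
print(np.linalg.norm(neg_far - pos))    # 95.0
# Overall magnitude of an input sequence, for scale.
print(np.linalg.norm(pos))              # ~7.42
```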
Tags: python tensorflow classification lstm rnn