【Question Title】: Outputting sequence in TensorFlow RNN
【Posted】: 2017-07-03 17:21:12
【Question】:

I've created a simple TensorFlow program that tries to predict the next character using the preceding 3 characters in a body of text.

A single input might look like this:

np.array(['t','h','i'])

with a target of:

np.array(['s'])

I'm trying to extend this to output the next, say, 4 characters rather than just the next one. To do this I tried feeding a longer array into y:

np.array(['s', ' ', 'i'])

in addition to changing y to:

y = tf.placeholder(dtype=tf.int32, shape=[None, n_steps])

However, this produces the error:

Rank mismatch: Rank of labels (received 2) should equal rank of logits minus 1 (received 2).

Here is the full code:

import numpy as np
import tensorflow as tf

# vocab_size, n_steps, input_fn, w_to_id and id_to_w are assumed to be
# defined elsewhere in the full program.
embedding_size = 40
n_neurons = 200
n_output = vocab_size
learning_rate = 0.001

with tf.Graph().as_default():
    x = tf.placeholder(dtype=tf.int32, shape=[None, n_steps])
    y = tf.placeholder(dtype=tf.int32, shape=[None])
    seq_length = tf.placeholder(tf.int32, [None])

    # Let's set up the embedding converting words to vectors
    embeddings = tf.Variable(tf.random_uniform(shape=[vocab_size, embedding_size], minval=-1, maxval=1))
    train_input = tf.nn.embedding_lookup(embeddings, x)

    basic_cell = tf.nn.rnn_cell.GRUCell(num_units=n_neurons)
    outputs, states = tf.nn.dynamic_rnn(basic_cell, train_input, sequence_length=seq_length, dtype=tf.float32)

    logits = tf.layers.dense(states, units=vocab_size, activation=None)
    predictions = tf.nn.softmax(logits)
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=y,
        logits=logits)
    loss = tf.reduce_mean(xentropy)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    training_op = optimizer.minimize(loss)   

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for r in range(1000):
            x_batch, y_batch, seq_length_batch = input_fn()
            feed_dict = {x: x_batch, y: y_batch, seq_length: seq_length_batch}
            _, loss_out = sess.run([training_op, loss], feed_dict=feed_dict)
            if r % 1000 == 0:
                print("loss_out", loss_out)

        sample_text = "for th"
        sample_text_ids = np.expand_dims(np.array([w_to_id[c] for c in sample_text]+[0, 0], dtype=np.int32), 0)
        prediction_out = sess.run(predictions, feed_dict={x: sample_text_ids, seq_length: np.array([len(sample_text)])})
        print("Result:", id_to_w[np.argmax(prediction_out)])    

【Comments】:

    Tags: tensorflow neural-network recurrent-neural-network


    【Solution 1】:

    In the case of a many-to-many RNN, you should use tf.contrib.seq2seq.sequence_loss to compute the loss at each time step. Your code would look like this:

    ...
    # Apply the dense layer to the per-step outputs (not the final states)
    # so that logits has shape [batch, n_steps, vocab_size].
    logits = tf.layers.dense(outputs, units=vocab_size, activation=None)
    # sequence_loss expects float weights; the mask excludes padded steps.
    weights = tf.cast(tf.sequence_mask(seq_length, n_steps), tf.float32)
    xentropy = tf.contrib.seq2seq.sequence_loss(logits, y, weights)
    ...
    

    For more details on tf.contrib.seq2seq.sequence_loss, see here.
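
    For reference, a minimal end-to-end sketch of the many-to-many variant is below. The concrete values of vocab_size and n_steps are placeholders, and the variable names mirror the question's setup, so treat it as an illustration rather than a drop-in replacement:

    import tensorflow as tf

    vocab_size = 50   # placeholder: use the real vocabulary size
    n_steps = 8       # placeholder: use the real window length
    embedding_size = 40
    n_neurons = 200
    learning_rate = 0.001

    with tf.Graph().as_default():
        x = tf.placeholder(tf.int32, [None, n_steps])
        y = tf.placeholder(tf.int32, [None, n_steps])   # one target id per step
        seq_length = tf.placeholder(tf.int32, [None])

        embeddings = tf.Variable(
            tf.random_uniform([vocab_size, embedding_size], minval=-1, maxval=1))
        rnn_input = tf.nn.embedding_lookup(embeddings, x)

        cell = tf.nn.rnn_cell.GRUCell(num_units=n_neurons)
        # outputs is [batch, n_steps, n_neurons]: one vector per time step,
        # whereas states holds only the final step.
        outputs, states = tf.nn.dynamic_rnn(
            cell, rnn_input, sequence_length=seq_length, dtype=tf.float32)

        # Dense layer applied per time step: logits is [batch, n_steps, vocab_size].
        logits = tf.layers.dense(outputs, units=vocab_size, activation=None)

        # Float weights mask out padded steps so they contribute no loss.
        weights = tf.cast(tf.sequence_mask(seq_length, n_steps), tf.float32)
        loss = tf.contrib.seq2seq.sequence_loss(logits, y, weights)
        training_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)

        # One predicted character id per time step: [batch, n_steps].
        predicted_ids = tf.argmax(logits, axis=2)

    The key difference from the question's code is that the dense layer is applied to outputs rather than states, so the logits keep their time dimension, and y becomes rank 2 to match. That is also why sparse_softmax_cross_entropy_with_logits raised the rank mismatch: with rank-2 logits it requires rank-1 labels, while sequence_loss is designed for rank-3 logits with rank-2 targets.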

    【Discussion】:
