[Question Title]: python tensorflow 2.0 build a simple LSTM network without using Keras
[Posted]: 2020-05-23 17:16:42
[Question Description]:

I am trying to build a TensorFlow LSTM network without using the Keras API. The model is very simple:

  1. Input: a sequence of 4 word indices
  2. Embed each word into a 100-dim word vector
  3. Pass the sequence through an LSTM layer
  4. A dense layer that outputs a sequence of 4 words

The loss function is sequence loss.
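For reference, the TensorFlow Addons implementation, tfa.seq2seq.sequence_loss, expects float logits of shape [batch_size, num_steps, vocab_size], integer targets of shape [batch_size, num_steps], and float weights of the same shape as the targets (the names below are placeholders):

loss = tfa.seq2seq.sequence_loss(
    logits,   # float32 [batch_size, num_steps, vocab_size]
    targets,  # int32   [batch_size, num_steps]
    weights)  # float32 [batch_size, num_steps], e.g. all ones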

I have the following code:

# input
input_placeholder = tf.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Input')
labels_placeholder = tf.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Target')

# embedding
embedding = tf.get_variable('Embedding', initializer=embedding_matrix, trainable=False)
inputs = tf.nn.embedding_lookup(embedding, input_placeholder)
inputs = [tf.squeeze(x, axis=1) for x in tf.split(inputs, config.num_steps, axis=1)]

# LSTM
initial_state = tf.zeros([config.batch_size, config.hidden_size])
lstm_cell = tf.nn.rnn_cell.LSTMCell(config.hidden_size)
output, _ = tf.keras.layers.RNN(lstm_cell, inputs, dtype=tf.float32, unroll=True)

# loss op
all_ones = tf.ones([config.batch_size, config.num_steps])
cross_entropy = tfa.seq2seq.sequence_loss(output, labels_placeholder, all_ones, vocab_size)
tf.add_to_collection('total_loss', cross_entropy)
loss = tf.add_n(tf.get_collection('total_loss'))

# projection (dense)
proj_U = tf.get_variable('Matrix', [config.hidden_size, vocab_size])
proj_b = tf.get_variable('Bias', [vocab_size])
outputs = [tf.matmul(o, proj_U) + proj_b for o in output]

My problem now is with the LSTM part. The original TensorFlow 1.x code was:

# tensorflow 1.x
output, _ = tf.contrib.rnn.static_rnn(
        lstm_cell, inputs, dtype = tf.float32, 
        sequence_length = [config.num_steps]*config.batch_size)

I am having trouble converting this to TensorFlow 2. With the code above, I get the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
----> 1 output, _ = tf.keras.layers.RNN(lstm_cell, inputs, dtype=tf.float32, unroll=True)

TypeError: cannot unpack non-iterable RNN object
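As far as I can tell, the error happens because tf.keras.layers.RNN(...) is a layer constructor: it returns an RNN layer object rather than an (outputs, state) pair, so unpacking it fails. A minimal sketch of the intended two-step usage, assuming a tf.keras.layers.LSTMCell and the per-step inputs stacked back into one [batch, steps, features] tensor:

# Construct the layer first, then call it on a [batch, steps, features] tensor.
# With return_state=True an LSTM cell also returns its final h and c states.
rnn_layer = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(config.hidden_size),
                                return_sequences=True, return_state=True,
                                unroll=True)
output, final_h, final_c = rnn_layer(tf.stack(inputs, axis=1))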

[Question Discussion]:

    Tags: python tensorflow lstm recurrent-neural-network tensorflow2.0


    [Solution 1]:

    The following code should work in TensorFlow 2.X. The 1.x call tf.contrib.rnn.static_rnn maps to tf.compat.v1.nn.static_rnn, and because placeholders only exist in graph mode, eager execution has to be disabled first.

    import tensorflow as tf
    import tensorflow_addons as tfa  # provides tfa.seq2seq.sequence_loss

    # Placeholders only exist in graph mode, so disable eager execution first.
    tf.compat.v1.disable_eager_execution()

    # input
    input_placeholder = tf.compat.v1.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Input')
    labels_placeholder = tf.compat.v1.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Target')

    # embedding (frozen): look up each index, then split into a list of
    # num_steps tensors of shape [batch_size, embed_dim], as static_rnn expects
    embedding = tf.compat.v1.get_variable('Embedding', initializer=embedding_matrix, trainable=False)
    inputs = tf.nn.embedding_lookup(params=embedding, ids=input_placeholder)
    inputs = [tf.squeeze(x, axis=1) for x in tf.split(inputs, config.num_steps, axis=1)]

    # LSTM: 1.x tf.contrib.rnn.static_rnn is tf.compat.v1.nn.static_rnn in 2.x
    lstm_cell = tf.compat.v1.nn.rnn_cell.LSTMCell(config.hidden_size)
    output, _ = tf.compat.v1.nn.static_rnn(
            lstm_cell, inputs, dtype=tf.float32,
            sequence_length=[config.num_steps] * config.batch_size)

    # projection (dense): per-timestep hidden state -> vocabulary logits
    proj_U = tf.compat.v1.get_variable('Matrix', [config.hidden_size, vocab_size])
    proj_b = tf.compat.v1.get_variable('Bias', [vocab_size])
    outputs = [tf.matmul(o, proj_U) + proj_b for o in output]
    logits = tf.stack(outputs, axis=1)  # [batch_size, num_steps, vocab_size]

    # loss op: sequence_loss takes logits, integer targets, and per-step weights
    all_ones = tf.ones([config.batch_size, config.num_steps])
    cross_entropy = tfa.seq2seq.sequence_loss(logits, labels_placeholder, all_ones)
    tf.compat.v1.add_to_collection('total_loss', cross_entropy)
    loss = tf.add_n(tf.compat.v1.get_collection('total_loss'))
    
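    If the graph-mode compat shims are not a hard requirement, the same model can also be written natively in eager TF 2.X, with no placeholders or sessions. A minimal sketch, assuming tf.keras.layers.LSTMCell and int32 batches word_ids / labels of shape [batch_size, num_steps] (both names are placeholders, not from the question):

    import tensorflow as tf
    import tensorflow_addons as tfa

    embedding = tf.constant(embedding_matrix, dtype=tf.float32)  # frozen, not trained
    x = tf.nn.embedding_lookup(embedding, word_ids)  # [batch, steps, embed_dim]

    # Manually unrolled LSTM, the eager analogue of static_rnn.
    cell = tf.keras.layers.LSTMCell(config.hidden_size)
    state = cell.get_initial_state(batch_size=config.batch_size, dtype=tf.float32)
    hiddens = []
    for t in range(config.num_steps):
        h_t, state = cell(x[:, t, :], state)  # one timestep
        hiddens.append(h_t)
    h = tf.stack(hiddens, axis=1)  # [batch, steps, hidden]

    # Per-timestep projection to vocabulary logits.
    proj_U = tf.Variable(tf.random.normal([config.hidden_size, vocab_size], stddev=0.1))
    proj_b = tf.Variable(tf.zeros([vocab_size]))
    logits = tf.einsum('bsh,hv->bsv', h, proj_U) + proj_b  # [batch, steps, vocab]

    weights = tf.ones([config.batch_size, config.num_steps])
    loss = tfa.seq2seq.sequence_loss(logits, labels, weights)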

    [Discussion]:
