【Question Title】: Dimensionality for stacked LSTM network in TensorFlow
【Posted】: 2018-07-09 21:55:52
【Question Description】:

Looking through the many similar questions about multi-dimensional inputs and stacked LSTM RNNs, I have not found an example that lays out the dimensionality of the initial_state placeholder and the rnn_tuple_state below. The attempted [lstm_num_layers, 2, None, lstm_num_cells, 2] is an extension of the code in these examples (http://monik.in/a-noobs-guide-to-implementing-rnn-lstm-using-tensorflow/ and https://medium.com/@erikhallstrm/using-the-tensorflow-multilayered-lstm-api-f6e7da7bbe40), with an extra dimension of feature_dim appended for the multiple values at each time step of the features (this doesn't work; instead it raises a ValueError due to a dimension mismatch in the tensorflow.nn.dynamic_rnn call).

import tensorflow

time_steps = 10
feature_dim = 2
label_dim = 4
lstm_num_layers = 3
lstm_num_cells = 100
dropout_rate = 0.8

# None is to allow for variable size batches
features = tensorflow.placeholder(tensorflow.float32,
                                  [None, time_steps, feature_dim])
labels = tensorflow.placeholder(tensorflow.float32, [None, label_dim])

cell = tensorflow.contrib.rnn.MultiRNNCell(
    [tensorflow.contrib.rnn.LayerNormBasicLSTMCell(
        lstm_num_cells,
        dropout_keep_prob = dropout_rate)] * lstm_num_layers,
    state_is_tuple = True)

# not sure of the dimensionality for the initial state
initial_state = tensorflow.placeholder(
    tensorflow.float32,
    [lstm_num_layers, 2, None, lstm_num_cells, feature_dim])
# which impacts these two lines as well
state_per_layer_list = tensorflow.unstack(initial_state, axis = 0)
rnn_tuple_state = tuple(
    [tensorflow.contrib.rnn.LSTMStateTuple(
        state_per_layer_list[i][0],
        state_per_layer_list[i][1]) for i in range(lstm_num_layers)])

# also not sure if expanding the feature dimensions is correct here
outputs, state = tensorflow.nn.dynamic_rnn(
    cell, tensorflow.expand_dims(features, -1),
    initial_state = rnn_tuple_state)

What would be most helpful is an explanation of the general case where:

  • each time step has N values
  • each time series has S steps
  • each batch has B sequences
  • each output has R values
  • the network has L hidden LSTM layers
  • each layer has M nodes

So the pseudocode version of this would be:

# B, S, N, and R are undefined values for the purpose of this question
features = tensorflow.placeholder(tensorflow.float32, [B, S, N])
labels = tensorflow.placeholder(tensorflow.float32, [B, R])
...

If I could have finished this myself, I wouldn't be asking here in the first place. Thanks in advance. Any comments on best practices are welcome.

【Question Discussion】:

  • Can you give us the full error traceback?
  • @Engineero The full traceback is too long, but the complete error is ValueError: Dimension 0 in both shapes must be equal, but are 1 and 2. Shapes are [1] and [2]. for 'rnn/while/rnn/multi_rnn_cell/cell_0/cell_0/layer_norm_basic_lstm_cell/concat' (op: 'ConcatV2') with input shapes: [?,2,1], [?,100,2], [] and with computed input tensors: input[2] = <1>.
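That traceback points at the concat inside the LSTM cell: after expand_dims(features, -1) and the extra trailing feature_dim on the state placeholder, the input slice and the state slice no longer line up. A rough NumPy illustration of the same failure, with shapes taken from the error message (this is a sketch of the shape rule, not the actual TF internals):

```python
import numpy as np

# Shapes lifted from the ValueError: the cell tries to concatenate the
# expanded input slice with the hidden-state slice along axis 1
input_slice = np.zeros((1, 2, 1))     # trailing 1 added by expand_dims
hidden_state = np.zeros((1, 100, 2))  # trailing feature_dim added to the state

try:
    np.concatenate([input_slice, hidden_state], axis=1)
except ValueError:
    # all dimensions except the concatenation axis must match exactly,
    # which is the same constraint the TF ConcatV2 op enforces
    pass
```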

Tags: python tensorflow input lstm rnn


【Solution 1】:

After much trial and error, the following produces a stacked LSTM dynamic_rnn regardless of the dimensionality of the features:

import tensorflow

time_steps = 10
feature_dim = 2
label_dim = 4
lstm_num_layers = 3
lstm_num_cells = 100
dropout_rate = 0.8
learning_rate = 0.001

features = tensorflow.placeholder(
    tensorflow.float32, [None, time_steps, feature_dim])
labels = tensorflow.placeholder(
    tensorflow.float32, [None, label_dim])

cell_list = []
for _ in range(lstm_num_layers):
    cell_list.append(
        tensorflow.contrib.rnn.LayerNormBasicLSTMCell(lstm_num_cells,
                                                      dropout_keep_prob=dropout_rate))
cell = tensorflow.contrib.rnn.MultiRNNCell(cell_list, state_is_tuple=True)
initial_state = tensorflow.placeholder(
    tensorflow.float32, [lstm_num_layers, 2, None, lstm_num_cells])
state_per_layer_list = tensorflow.unstack(initial_state, axis=0)
rnn_tuple_state = tuple(
    [tensorflow.contrib.rnn.LSTMStateTuple(
        state_per_layer_list[i][0],
        state_per_layer_list[i][1]) for i in range(lstm_num_layers)])
state_series, last_state = tensorflow.nn.dynamic_rnn(
    cell=cell, inputs=features, initial_state=rnn_tuple_state)

hidden_layer_output = tensorflow.transpose(state_series, [1, 0, 2])
last_output = tensorflow.gather(hidden_layer_output, int(
    hidden_layer_output.get_shape()[0]) - 1)

weights = tensorflow.Variable(tensorflow.random_normal(
    [lstm_num_cells, int(labels.get_shape()[1])]))
biases = tensorflow.Variable(tensorflow.constant(
    0.0, shape=[int(labels.get_shape()[1])]))
predictions = tensorflow.matmul(last_output, weights) + biases
mean_squared_error = tensorflow.reduce_mean(
    tensorflow.square(predictions - labels))
minimize_error = tensorflow.train.RMSPropOptimizer(
    learning_rate).minimize(mean_squared_error)
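The transpose/gather pair above just extracts the output of the final time step for every sequence in the batch. A minimal NumPy sketch of the same indexing (the shapes are made-up illustrative values, not the ones from the graph):

```python
import numpy as np

B, S, M = 2, 4, 3  # batch, time steps, cells per layer (illustrative)
state_series = np.arange(B * S * M, dtype=np.float32).reshape(B, S, M)

# transpose [B, S, M] -> [S, B, M], then index the final time step,
# mirroring the tensorflow.transpose / tensorflow.gather pair
last_output = np.transpose(state_series, (1, 0, 2))[S - 1]

# equivalent to slicing the last step of each sequence directly
assert np.array_equal(last_output, state_series[:, -1, :])
```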

Part of what started this journey down one of the many well-known rabbit holes was that the previously referenced examples reshape the output to fit a classifier rather than a regressor (which is what I was trying to build). Since this is independent of the feature dimensionality, it serves as a generic template for this use case.
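To feed the template, the initial_state placeholder expects one (cell state, hidden state) pair of [batch, num_cells] tensors per layer, i.e. shape [lstm_num_layers, 2, batch_size, lstm_num_cells]. A minimal NumPy sketch of building such a state (the batch size, the zero initialization, and the feed_dict call are illustrative assumptions, not part of the answer above):

```python
import numpy as np

lstm_num_layers = 3
lstm_num_cells = 100
batch_size = 5  # stands in for the leading None dimension

# shape [num_layers, 2, batch, num_cells]; axis 1 holds the (c, h) pair
# that unstack + LSTMStateTuple split apart per layer
zero_state = np.zeros(
    (lstm_num_layers, 2, batch_size, lstm_num_cells), dtype=np.float32)

# would then be fed to the graph along the lines of:
# session.run(minimize_error, feed_dict={features: x, labels: y,
#                                        initial_state: zero_state})
```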

【Discussion】:
