【Posted】: 2016-09-18 11:12:34
【Question】:
I have a multi-layer RNN built from LSTM cells, and I want to pin each layer to a different GPU. How can this be done in TensorFlow?
import tensorflow as tf
n_inputs = 5
n_outputs = 100
n_layers = 5
n_steps = 20
# inputs: [batch, time steps, features]
X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_outputs, state_is_tuple=True)
# stack n_layers LSTM layers into a single multi-layer cell
multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * n_layers, state_is_tuple=True)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
【Discussion】:
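One way to pin each layer to its own GPU, going by the tf.nn.rnn_cell API of that TensorFlow era, is to give each layer its own cell and wrap it in a small RNNCell subclass that opens a tf.device() scope before invoking the wrapped cell. The DeviceCellWrapper below is an illustrative helper, not a TensorFlow built-in (later 1.x releases shipped a similar tf.contrib.rnn.DeviceWrapper), and the device strings assume at least n_layers GPUs are visible:

import tensorflow as tf

n_inputs = 5
n_outputs = 100
n_layers = 5
n_steps = 20

class DeviceCellWrapper(tf.nn.rnn_cell.RNNCell):
    """Runs the wrapped cell's ops on a fixed device."""
    def __init__(self, device, cell):
        self._device = device
        self._cell = cell

    @property
    def state_size(self):
        return self._cell.state_size

    @property
    def output_size(self):
        return self._cell.output_size

    def __call__(self, inputs, state, scope=None):
        # every op the wrapped cell creates is placed on self._device
        with tf.device(self._device):
            return self._cell(inputs, state, scope)

X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])

# one cell per layer, each pinned to its own GPU
devices = ["/gpu:%d" % i for i in range(n_layers)]
cells = [DeviceCellWrapper(dev,
                           tf.nn.rnn_cell.BasicLSTMCell(num_units=n_outputs,
                                                        state_is_tuple=True))
         for dev in devices]

multi_layer_cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=True)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

Note that wrapping the whole dynamic_rnn call in a single tf.device() would place every layer on one GPU; the per-cell wrapper is what spreads the layers across devices.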
Tags: tensorflow gpu recurrent-neural-network lstm