[Posted]: 2017-10-03 16:41:29
[Question]:
Can anyone help me? I am using TensorFlow to train an LSTM network. Training runs fine, but when I try to save the model I get the following error.
Step 1, Minibatch Loss= 0.0146, Training Accuracy= 1.000
Step 1, Minibatch Loss= 0.0129, Training Accuracy= 1.000
Optimization Finished!
Traceback (most recent call last):
File ".\lstm.py", line 169, in <module>
save_path = saver.save(sess, "modelslstm/" + str(time.strftime("%d-%m-%Y-%H-%M-%S")) + ".ckpt")
File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1314, in __exit__
self._default_graph_context_manager.__exit__(exec_type, exec_value, exec_tb)
File "C:\Python35\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3815, in get_controller
if self.stack[-1] is not default:
IndexError: list index out of range
My code:
with tf.Session() as sess:
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    # from tensorflow.examples.tutorials.mnist import input_data
    # mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
    # a,b = mnist.train.next_batch(5)
    # print(b)
    # Run the initializer
    sess.run(init)
    saver = tf.train.Saver()
    merged_summary_op = tf.summary.merge_all()
    writer = tf.summary.FileWriter("trainlstm", sess.graph)
    #print(str(data.train.num_examples))
    for step in range(1, training_steps+1):
        for batch_i in range(data.train.num_examples // batch_size):
            batch_x, batch_y, name = data.train.next_batch(batch_size)
            #hasil,cost = encode(batch_x[0][0],"models/25-09-2017-15-25-54.ckpt")
            temp = []
            for batchi in range(batch_size):
                temp2 = []
                for ti in range(timesteps):
                    hasil, cost = encode(batch_x[batchi][ti], "models/25-09-2017-15-25-54.ckpt")
                    hasil = np.reshape(hasil, [num_input])
                    temp2.append(hasil.copy())
                temp.append(temp2.copy())
            batch_x = temp
            # Reshape data to get 28 seq of 28 elements
            #batch_x = batch_x.reshape((batch_size, timesteps, num_input))
            #dlib.hit_enter_to_continue()
            # Run optimization op (backprop)
            sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))
            f.write("Step " + str(step) + ", Minibatch Loss= " + \
                    "{:.4f}".format(loss) + ", Training Accuracy= " + \
                    "{:.3f}".format(acc) + "\n")
    print("Optimization Finished!")
    save_path = saver.save(sess, "modelslstm/" + str(time.strftime("%d-%m-%Y-%H-%M-%S")) + ".ckpt")
    f.close()
I added tf.reset_default_graph() but it didn't work. Please help me fix my problem. Thanks!
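For context on why this particular IndexError appears: it is raised from TensorFlow's internal default-graph stack when that stack has been emptied while a context manager (here, the `with tf.Session()` block) is still active, for example by calling tf.reset_default_graph() inside the `with` block, or inside a helper like encode if it resets the graph mid-loop. Below is a minimal pure-Python sketch of that mechanism; it is not TensorFlow code, and the names (_graph_stack, as_default, reset_default_graph) are illustrative stand-ins for TF's internals.

```python
# Minimal sketch (illustrative names, not TensorFlow itself) of why clearing
# a global stack while inside a context manager raises IndexError on exit.
from contextlib import contextmanager

_graph_stack = []  # stands in for TF's internal default-graph stack

@contextmanager
def as_default(graph):
    _graph_stack.append(graph)      # entering the context pushes the graph
    try:
        yield graph
    finally:
        # Exiting the context inspects the top of the stack; if the stack
        # was cleared in the meantime, indexing the empty list fails just
        # like `self.stack[-1]` in ops.py.
        if _graph_stack[-1] is graph:
            _graph_stack.pop()

def reset_default_graph():
    _graph_stack.clear()            # mimics tf.reset_default_graph()

try:
    with as_default("my-graph"):
        reset_default_graph()       # clearing the stack inside the context...
except IndexError as e:
    print("IndexError:", e)         # prints: IndexError: list index out of range
```

This is why calling tf.reset_default_graph() inside the session's `with` block makes things worse rather than better: if you need it at all, call it before the session is created.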
[Comments]:
- It looks like self.stack is empty, but the code is trying to index into it. Is this your code?
- How can that be? In another piece of my code I use the same approach and the model saves successfully. :(
- Is self.stack part of your code? Evidently it ended up empty when it should not have been allowed to. Why that happens depends on code that does not appear to be listed here. Have you tried debugging it?
- No. self.stack is TensorFlow's core code; it is not part of my code.
- Then the data you are providing is probably invalid. Read the documentation of the functions you are using to make sure you are not violating any preconditions.
Tags: python machine-learning tensorflow lstm