【Question Title】: 'IndexError:' when loading saved Tensorflow graph to continue training
【Posted】: 2018-03-04 14:06:24
【Question Description】:

Summary: I have a training routine that tries to reload a saved graph to continue training, but when I try to load the optimizer with optimizer = tf.get_collection("optimizer")[0] it raises IndexError: list index out of range. I hit several other errors along the way, but this is the one that ultimately had me stuck. I finally figured it out, so I'm answering my own question in case it helps someone else.

The goal is simple: I spent 6+ hours training a model before saving it, and now I want to reload it and train it some more. No matter what I do, though, I get an error.

I found a very simple example on Github that just creates a saver = tf.train.Saver() operator and then saves with saver.save(sess, model_path) and restores with saver.restore(sess, model_path). When I try to do the same thing, I get At least two variables have the same name: decode/decoder/dense/kernel/Adam_1. I'm using the Adam optimizer, so I'm guessing that's related to the problem. I got past that using the approach below.
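For reference, the bare save/restore pattern from that example looks roughly like this (a minimal sketch, assuming TensorFlow 1.x; the one-variable graph and the ./model.ckpt path are just illustrative):

    import tensorflow as tf

    # Build a trivial graph and a Saver for its variables
    v = tf.get_variable("v", shape=[2], initializer=tf.zeros_initializer())
    saver = tf.train.Saver()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, "./model.ckpt")     # writes model.ckpt.* checkpoint files

    with tf.Session() as sess:
        saver.restore(sess, "./model.ckpt")  # values come from the checkpoint; no initializer needed
        print(sess.run(v))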

I know the model is fine, because in my code (see bottom) I have a prediction routine that loads the saved model, runs an input through it, and works. It uses loaded_graph = tf.Graph(), then loader = tf.train.import_meta_graph(checkpoint + '.meta') plus loader.restore(sess, checkpoint) to load the model, and then makes a series of loaded_graph.get_tensor_by_name('input:0') calls.

When I try that approach (you can see the commented-out code), the "two variables" problem goes away, but now I get TypeError: Cannot interpret feed_dict key as Tensor: The name 'save/Const:0' refers to a Tensor which does not exist. The operation, 'save/Const', does not exist in the graph. This post does a good job of explaining how to organize the code to avoid ValueError: cannot add op with name <my weights variable name>/Adam as that name is already used, and I've done that.
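One way the 'save/Const' op can go missing is having two Savers in the same graph: TensorFlow uniquifies the second Saver's name scope, so its filename constant becomes save_1/Const rather than save/Const (a commenter below makes the same point). A small sketch with a throwaway graph to show the renaming:

    import tensorflow as tf

    g = tf.Graph()
    with g.as_default():
        tf.Variable(0, name="counter")  # Saver needs at least one variable to save
        tf.train.Saver()                # ops created under name scope 'save'
        tf.train.Saver()                # second Saver is uniquified to 'save_1'
        print([op.name for op in g.get_operations()
               if op.name in ("save/Const", "save_1/Const")])
        # -> ['save/Const', 'save_1/Const']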

@mmry explains the TypeError here, but I don't understand what he's saying and can't work out how to fix it.

I've spent the whole day on this, working through one error after another, and I'm out of ideas. Help would be greatly appreciated.

Here is the training code:

import time

# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,
                           source_letter_to_int['<PAD>'],
                           target_letter_to_int['<PAD>']))

if (len(source_sentences) > 10000):
    display_step = 100 # Check training loss after each of this many batches with large data
else:
    display_step = 20 # Check training loss after each of this many batches with small data

# loader = tf.train.import_meta_graph(checkpoint + '.meta')
# loaded_graph = tf.get_default_graph()

# input_data = loaded_graph.get_tensor_by_name('input:0')
# targets = loaded_graph.get_tensor_by_name('targets:0')
# lr = loaded_graph.get_tensor_by_name('learning_rate:0')
# source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
# target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
# keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

# loader = tf.train.Saver()
saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:    
    start = time.time()
    sess.run(tf.global_variables_initializer()) 

#     loader.restore(sess, checkpoint)
#     optimizer = tf.get_collection("optimization")[0]
#     gradients = optimizer.compute_gradients(cost)
#     capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
#     train_op = optimizer.apply_gradients(capped_gradients)  

    for epoch_i in range(1, epochs+1):
        for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
                get_batches(train_target, train_source, batch_size,
                           source_letter_to_int['<PAD>'],
                           target_letter_to_int['<PAD>'])):

            # Training step
            _, loss = sess.run(
                [train_op, cost],
                {input_data: sources_batch,
                 targets: targets_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths,
                 keep_prob: keep_probability})

            # Debug message updating us on the status of the training
            if batch_i % display_step == 0 and batch_i > 0:

                # Calculate validation cost
                validation_loss = sess.run(
                [cost],
                {input_data: valid_sources_batch,
                 targets: valid_targets_batch,
                 lr: learning_rate,
                 target_sequence_length: valid_targets_lengths,
                 source_sequence_length: valid_sources_lengths,
                 keep_prob: 1.0})

                print('Epoch {:>3}/{} Batch {:>6}/{} Inputs (000) {:>7} - Loss: {:>6.3f}  - Validation loss: {:>6.3f}'
                      .format(epoch_i, epochs, batch_i, len(train_source) // batch_size, 
                              (((epoch_i - 1) * len(train_source)) + batch_i * batch_size) // 1000, 
                              loss, validation_loss[0]))

    # Save model
    saver = tf.train.Saver()
    saver.save(sess, checkpoint)

    # Print time spent training the model
    end = time.time()
    seconds = end - start
    m, s = divmod(seconds, 60)
    h, m = divmod(m, 60)
    print('Model Trained in {}h:{}m:{}s and Saved'.format(int(h), int(m), int(s)))

Here is the key part of the prediction code:

This code works, so I "know" the graph is being saved successfully.

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(checkpoint + '.meta')
    loader.restore(sess, checkpoint)

    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

    #Multiply by batch_size to match the model's input parameters
    answer_logits = sess.run(logits, {input_data: [text]*batch_size, 
                                      target_sequence_length: [len(text)]*batch_size, 
                                      source_sequence_length: [len(text)]*batch_size,
                                      keep_prob: 1.0})[0] 

Update - another attempt at the training code

Here is another crack at the training code, trying to follow @jie-zhou's suggestion. This time optimizer = tf.get_collection("optimization")[0] gives me IndexError: list index out of range. That line only works after sess.run(tf.global_variables_initializer()), so I don't see what I'm supposed to be initializing. (The full listing follows the sketch below.)
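One way to see why the lookup fails is to dump the collection keys that were actually serialized with the meta graph (a debugging sketch, assuming TF 1.x and the same checkpoint variable as in the listing):

    import tensorflow as tf

    loader = tf.train.import_meta_graph(checkpoint + '.meta')

    # Every collection key stored in the imported meta graph; if "optimization"
    # does not appear here, tf.get_collection("optimization") returns an empty list.
    print(tf.get_default_graph().get_all_collection_keys())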

import time

# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,
                           source_letter_to_int['<PAD>'],
                           target_letter_to_int['<PAD>']))

if (len(source_sentences) > 10000):
    display_step = 100 # Check training loss after each of this many batches with large data
else:
    display_step = 20 # Check training loss after each of this many batches with small data

loader = tf.train.import_meta_graph(checkpoint + '.meta')
loaded_graph = tf.get_default_graph()

input_data = loaded_graph.get_tensor_by_name('input:0')
targets = loaded_graph.get_tensor_by_name('targets:0')
lr = loaded_graph.get_tensor_by_name('learning_rate:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')

with tf.Session(graph=train_graph) as sess:    
    start = time.time()
    sess.run(tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()))

    loader.restore(sess, checkpoint)
    optimizer = tf.get_collection("optimization")[0]
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
    train_op = optimizer.apply_gradients(capped_gradients)  

    for epoch_i in range(1, epochs+1):
        for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
                get_batches(train_target, train_source, batch_size,
                           source_letter_to_int['<PAD>'],
                           target_letter_to_int['<PAD>'])):

            # Training step
            _, loss = sess.run(
                [train_op, cost],
                {input_data: sources_batch,
                 targets: targets_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths,
                 keep_prob: keep_probability})

            # Debug message updating us on the status of the training
            if batch_i % display_step == 0 and batch_i > 0:

                # Calculate validation cost
                validation_loss = sess.run(
                [cost],
                {input_data: valid_sources_batch,
                 targets: valid_targets_batch,
                 lr: learning_rate,
                 target_sequence_length: valid_targets_lengths,
                 source_sequence_length: valid_sources_lengths,
                 keep_prob: 1.0})

                print('Epoch {:>3}/{} Batch {:>6}/{} Inputs (000) {:>7} - Loss: {:>6.3f}  - Validation loss: {:>6.3f}'
                      .format(epoch_i, epochs, batch_i, len(train_source) // batch_size, 
                              (((epoch_i - 1) * len(train_source)) + batch_i * batch_size) // 1000, 
                              loss, validation_loss[0]))

    # Save model
    saver = tf.train.Saver()
    saver.save(sess, checkpoint)

    # Print time spent training the model
    end = time.time()
    seconds = end - start
    m, s = divmod(seconds, 60)
    h, m = divmod(m, 60)
    print('Model Trained in {}h:{}m:{}s and Saved'.format(int(h), int(m), int(s)))

Update 2 - yet another attempt at the training code

To follow this model more closely, I added code that checks whether a graph already exists and takes a different path when loading an existing one. I also structured it to resemble the prediction code, which I know works. One important difference: unlike during prediction, for training I need to load the optimizer.

Running with a brand-new graph works fine, but loading an existing graph still doesn't: I still get IndexError: list index out of range at optimizer = tf.get_collection("optimization")[0].

I've trimmed some of the code above to focus on the essentials.

# Test to see if graph already exists
if os.path.exists(checkpoint + ".meta"):
    print("Reloading existing graph to continue training.")
    brand_new = False    
    train_graph = tf.Graph()
#     saver = tf.train.import_meta_graph(checkpoint + '.meta')
#     train_graph = tf.get_default_graph()
else:
    print("Starting with new graph.")
    brand_new = True
    with train_graph.as_default():
        saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:    
    start = time.time()
    if brand_new:
        sess.run(tf.global_variables_initializer())
    else:
#         sess.run(tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())) 
        saver = tf.train.import_meta_graph(checkpoint + '.meta')
        saver.restore(sess, checkpoint) 

        # Restore variables
        input_data = train_graph.get_tensor_by_name('input:0')
        targets = train_graph.get_tensor_by_name('targets:0')
        lr = train_graph.get_tensor_by_name('learning_rate:0')
        source_sequence_length = train_graph.get_tensor_by_name('source_sequence_length:0')
        target_sequence_length = train_graph.get_tensor_by_name('target_sequence_length:0')
        keep_prob = train_graph.get_tensor_by_name('keep_prob:0')

        # Load the optimizer
        # Commenting out this block gives 'ValueError: Operation name: "optimization/Adam"'
        # Leaving it gives 'IndexError: list index out of range' at 'optimizer = tf.get_collection("optimizer")[0]'
        optimizer = tf.get_collection("optimizer")[0]
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)  

    for epoch_i in range(1, epochs+1):
        for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
                get_batches(train_target, train_source, batch_size,
                           source_letter_to_int['<PAD>'],
                           target_letter_to_int['<PAD>'])):

            # Training step
            _, loss = sess.run(...)

            # Debug message updating us on the status of the training
            if batch_i % display_step == 0 and batch_i > 0:

                # Calculate validation cost and output update to training

    # Save model
#     saver = tf.train.Saver()
    saver.save(sess, checkpoint)

【Question Discussion】:

  • I found sess.run(tf.global_variables_initializer()) in your training code. Technically, the Adam optimizer depends on some local variables, but you never initialize them; maybe try again after initializing them.
  • Thanks @Jie.Zhou. I've updated the post with another crack at the code. I initialize input_data, targets, and so on, but I don't see what I need to initialize for Adam. Could you give me some details?
  • Change sess.run(tf.global_variables_initializer()) to sess.run(tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())). Also, I noticed you create two savers and save the model with the second one, so the tensor should be save_1/Const:0 rather than save/Const:0; maybe you should delete one of them.
  • Thanks. I've removed those comments since they were causing confusion. In the end I have just one saver, and just one loader at the start. I changed sess.run() as you suggested (see above), and now loader.restore(sess, checkpoint) gives me TypeError: Cannot interpret feed_dict key as Tensor: The name 'save/Const:0' refers to a Tensor which does not exist. The operation, 'save/Const', does not exist in the graph, which brings us back to the "original" error.
  • According to the post you mentioned ([Answer by Drag0][1]), saver = tf.train.Saver() should come before sess = tf.Session() and tf.train.write_graph(). [1]: stackoverflow.com/a/40788998

Tags: tensorflow


【Solution 1】:

optimizer = tf.get_collection("optimization")[0] threw IndexError: list index out of range when trying to restore the saved graph for the simple reason that nothing was ever "named" that way while building the graph, so nothing in the graph goes by that name.
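The failure mode is easy to reproduce in isolation: tf.get_collection returns an empty list for any key that was never registered, and indexing [0] on that empty list is what raises the error (a tiny sketch):

    import tensorflow as tf

    ops = tf.get_collection("optimization")  # no such collection was ever created
    print(ops)                               # -> []
    optimizer = ops[0]                       # IndexError: list index out of range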

The training step _, loss = sess.run([train_op, cost], {input_data: sources_batch, targets: targets_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) needs input_data, targets, lr, target_sequence_length, source_sequence_length, and keep_prob. As you can see, all of these are restored by this code:

    # Restore variables
    input_data = train_graph.get_tensor_by_name('input:0')
    targets = train_graph.get_tensor_by_name('targets:0')
    lr = train_graph.get_tensor_by_name('learning_rate:0')
    source_sequence_length = train_graph.get_tensor_by_name('source_sequence_length:0')
    target_sequence_length = train_graph.get_tensor_by_name('target_sequence_length:0')
    keep_prob = train_graph.get_tensor_by_name('keep_prob:0')

This works because, when building the graph, I "named" each of these variables with something like input_data = tf.placeholder(tf.int32, [None, None], name='input').

Beyond those, however, the training step needs train_op and cost. (Notably, it does not need optimizer directly. I noticed this after my naive attempt to regenerate train_op from the restored optimizer didn't work.)

Ultimately, the solution was quite simple. In the code where I build the graph, immediately after creating train_op and cost, I run tf.add_to_collection("train_op", train_op) and tf.add_to_collection("cost", cost). These statements "name" the operations in the graph so I can get them back later. Then, in the training routine, after restoring the variables above, I run this:

    # Grab the optimizer variables that were added to the collection during build
    cost = tf.get_collection("cost")[0]
    train_op = tf.get_collection("train_op")[0]

Now both of these work: the saved graph is loaded, all the necessary variables and operations are found, and training picks up where it left off.
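For completeness, here is a minimal end-to-end sketch of the collection round trip (the tiny regression graph and the ./model.ckpt path are illustrative, not my actual model):

    import tensorflow as tf

    # --- Build time: register the ops needed to resume training ---
    x = tf.placeholder(tf.float32, [None, 1], name='input')
    y = tf.placeholder(tf.float32, [None, 1], name='targets')
    cost = tf.reduce_mean(tf.square(tf.layers.dense(x, 1) - y))
    train_op = tf.train.AdamOptimizer(0.001).minimize(cost)
    tf.add_to_collection("cost", cost)
    tf.add_to_collection("train_op", train_op)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        tf.train.Saver().save(sess, './model.ckpt')

    # --- Resume time (fresh graph): restore, then pull ops back out by key ---
    tf.reset_default_graph()
    with tf.Session() as sess:
        loader = tf.train.import_meta_graph('./model.ckpt.meta')
        loader.restore(sess, './model.ckpt')
        g = tf.get_default_graph()
        x = g.get_tensor_by_name('input:0')
        y = g.get_tensor_by_name('targets:0')
        cost = tf.get_collection("cost")[0]
        train_op = tf.get_collection("train_op")[0]
        # training continues: sess.run([train_op, cost], {x: ..., y: ...})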

【Discussion】:
