[Question Title]: How to use a TensorFlow placeholder in get_collection
[Posted]: 2016-08-02 12:13:39
[Question description]:

So, I have a problem with feeding variables. I want to freeze my model's weights and biases depending on the epoch. I have the following variables:

# convolution kernels: [filter_height, filter_width, in_channels, out_channels]
wc1 = tf.Variable(tf.random_normal([f1, f1, _channel, n1], mean=0, stddev=0.01), name="wc1")
wc2 = tf.Variable(tf.random_normal([f2, f2, n1, n2], mean=0, stddev=0.01), name="wc2")
wc3 = tf.Variable(tf.random_normal([f3, f3, n2, _channel], mean=0, stddev=0.01), name="wc3")

# one bias per output channel of the corresponding layer
bc1 = tf.Variable(tf.random_normal(shape=[n1], mean=0, stddev=0.01), name="bc1")
bc2 = tf.Variable(tf.random_normal(shape=[n2], mean=0, stddev=0.01), name="bc2")
bc3 = tf.Variable(tf.random_normal(shape=[_channel], mean=0, stddev=0.01), name="bc3")

For example, I want to train [wc1, bc1] for the first 10 epochs, then [wc2, bc2] for the next epochs, and so on. For that I created variable collections:

tf.add_to_collection('wc1', wc1)
tf.add_to_collection('wc1', bc1)

tf.add_to_collection('wc2', wc2)
tf.add_to_collection('wc2', bc2)
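
At this point the collections can be read back with a plain string key:

print(tf.get_collection("wc1"))  # -> [wc1, bc1]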

and created a placeholder for the collection name:

trainable_name = tf.placeholder(tf.string, shape=[])

Next I try to use it in my optimizer:

opt = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = opt.minimize(cost, var_list=tf.get_collection(trainable_name))

and feed the data:

sess.run(train_op, feed_dict={ ... , trainable_name: "wc1"})

I get the error:

 Traceback (most recent call last):
  File "/home/keeper121/PycharmProjects/super/sp_train.py", line 292, in <module>
    train(tiles_names, "model.ckpt")
  File "/home/keeper121/PycharmProjects/super/sp_train.py", line 123, in train
    train_op = opt.minimize(cost, var_list=tf.get_collection(trainable_name))
  File "/home/keeper121/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 193, in minimize
    grad_loss=grad_loss)
  File "/home/keeper121/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 244, in compute_gradients
    raise ValueError("No variables to optimize")
ValueError: No variables to optimize

So, is there any way to change which variables are being trained during a session?

Thanks.

[Question discussion]:

    Tags: tensorflow freeze pre-trained-model


    [Solution 1]:
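
    The call fails because tf.get_collection is an ordinary Python function that runs once, while the graph is being built. At that point trainable_name is just a string Tensor; its value only exists later, inside sess.run. The lookup therefore uses the tensor object itself as the collection key, finds nothing registered under it, and returns an empty list, which is exactly what minimize complains about. A minimal sketch of the behaviour, given the collections from the question:

    name = tf.placeholder(tf.string, shape=[])
    print(tf.get_collection(name))   # [] -- the tensor object is not a string key
    print(tf.get_collection("wc1"))  # [wc1, bc1] -- plain strings work at build time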

    Since the choice has to be made in Python rather than through a feed, build one training op per variable group up front. Try the following:

    # each op only updates the variables in its var_list; everything else stays frozen
    train_op_wc1 = opt.minimize(cost, var_list=tf.get_collection("wc1"))
    train_op_wc2 = opt.minimize(cost, var_list=tf.get_collection("wc2"))
    

    Then when you feed the data:

    # define your samples as you always would
    input_feed = ...
    # then run the training op that targets the layer group you want to update
    # (assuming an `epoch` counter in your training loop)
    if epoch < 10:
      sess.run(train_op_wc1, feed_dict=input_feed)
    else:
      sess.run(train_op_wc2, feed_dict=input_feed)
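
    Putting it together, a sketch of the outer loop (num_epochs and batches stand in for your own training setup):

    for epoch in range(num_epochs):
        # switching the op in Python switches the set of trainable variables
        train_op = train_op_wc1 if epoch < 10 else train_op_wc2
        for input_feed in batches:
            sess.run(train_op, feed_dict=input_feed)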
    

    [Discussion]:
