【Posted】:2018-07-02 23:53:09
【Question】:
I was reading this blog post:
https://smist08.wordpress.com/2016/10/04/the-road-to-tensorflow-part-10-more-on-optimization/
It lists all of TensorFlow's optimizers along with their learning rates:
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
optimizer = tf.train.AdadeltaOptimizer(starter_learning_rate).minimize(loss)
optimizer = tf.train.AdagradOptimizer(starter_learning_rate).minimize(loss) # promising
optimizer = tf.train.AdamOptimizer(starter_learning_rate).minimize(loss) # promising
optimizer = tf.train.MomentumOptimizer(starter_learning_rate, 0.001).minimize(loss) # diverges
optimizer = tf.train.FtrlOptimizer(starter_learning_rate).minimize(loss) # promising
optimizer = tf.train.RMSPropOptimizer(starter_learning_rate).minimize(loss) # promising
It says that the learning rate you pass in is only the starter learning rate. Does that mean that if I change the learning rate during training, the change will have no effect, because the optimizer is no longer using the initial learning rate?
I tried looking through the API documentation, but it doesn't say anything specific about this.
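To make the question concrete, here is a minimal NumPy sketch (my own illustration, not from the blog) of the behavior I am asking about: a decayed learning rate following the same formula that `tf.train.exponential_decay` documents, `lr(step) = starter_lr * decay_rate ** (step / decay_steps)`, applied to plain gradient descent on f(x) = x². The schedule only matters if the optimizer re-reads the learning rate on every step:

```python
import numpy as np

def decayed_lr(starter_lr, step, decay_steps, decay_rate):
    # Same schedule as documented for tf.train.exponential_decay:
    # lr(step) = starter_lr * decay_rate ** (step / decay_steps)
    return starter_lr * decay_rate ** (step / decay_steps)

# Minimize f(x) = x^2 with gradient descent. The learning rate is
# recomputed from the schedule at every step, so changing it during
# training does take effect here.
starter_lr = 0.1
x = 3.0
for step in range(100):
    lr = decayed_lr(starter_lr, step, decay_steps=10, decay_rate=0.9)
    grad = 2.0 * x  # derivative of x^2
    x -= lr * grad

print(x)  # x approaches the minimum at 0
```

My question is whether the TensorFlow optimizers above behave like this loop (re-reading the rate each step) or freeze the value passed to the constructor.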
【Discussion】:
Tags: python tensorflow machine-learning deep-learning