【Posted】:2018-09-07 17:10:21
【Problem Description】:
I am trying to implement the DQN algorithm in TensorFlow, and I have defined the loss function given below. However, whenever I perform a weight update with the Adam optimizer, after 2-3 updates all of my variables become NaN. My actions take integer values in (0, 10). Any idea what could be going wrong?
def Q_Values_of_Given_State_Action(self, actions_, y_targets):
    # self.dense_output: output of the online network, giving the Q-values
    # of all actions in the current state
    actions_ = tf.reshape(tf.cast(actions_, tf.int32), shape=(Mini_batch, 1))  # actions taken by the online network
    z = tf.reshape(tf.range(tf.shape(self.dense_output)[0]), shape=(Mini_batch, 1))  # row index of each batch element
    index_ = tf.concat((z, actions_), axis=-1)  # (row, action) index pairs
    self.Q_Values_Select_Actions = tf.gather_nd(self.dense_output, index_)  # Q(s, a) of the selected actions
    self.loss_ = tf.divide(tf.reduce_sum(tf.square(self.Q_Values_Select_Actions - y_targets)), 2)  # sum of squared TD errors / 2
    return self.loss_
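The indexing-and-loss logic above can be checked outside TensorFlow with a minimal NumPy sketch. All values below are hypothetical toy inputs (`Mini_batch` is assumed to be 4, and the Q-value matrix is just a range for easy inspection); the integer-array indexing plays the role of `tf.gather_nd` with (row, action) pairs:

```python
import numpy as np

Mini_batch = 4    # assumed batch size
num_actions = 10  # actions take integer values in (0, 10)

# Hypothetical online-network output: Q-values, shape (Mini_batch, num_actions)
dense_output = np.arange(Mini_batch * num_actions, dtype=np.float64).reshape(
    Mini_batch, num_actions
)

actions = np.array([2, 0, 7, 5])             # actions taken, one per batch row
y_targets = np.array([1.0, 2.0, 3.0, 4.0])   # TD targets for those rows

# Equivalent of tf.gather_nd(dense_output, index_) with (row, action) pairs:
rows = np.arange(Mini_batch)
q_selected = dense_output[rows, actions]     # Q(s, a) of the selected actions

# Same loss as in the question: sum of squared TD errors, divided by 2
loss = np.sum(np.square(q_selected - y_targets)) / 2
```

If this sketch gives the values you expect, the indexing itself is sound, which points the NaN hunt toward the magnitude of the TD errors or the optimizer settings rather than the gather step.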
【Discussion】:
Tags: tensorflow machine-learning deep-learning reinforcement-learning loss-function