【Question title】: Why is my reward function returning None in Python?
【Posted】: 2019-09-17 14:04:04
【Question】:

OK, so I'm trying to build an intrinsic-curiosity agent using Keras and TensorFlow. The agent's reward function is the difference between the autoencoder's loss on the previous state vs. the current state and its loss on the current state vs. the imagined next state. However, this reward function always returns None instead of the actual difference. I've tried printing the losses, and they always show the correct values.

Reward function / replay code:

    def replay(self, batch):
        # Assumes: import random as R; import numpy as np
        minibatch = R.sample(self.memory, batch)

        for prev_state, actions, state, reward, imagined_next_state in minibatch:
            # Perturb the imagined next state with random noise
            imagined_next_state = np.add(np.random.random(self.state_size), imagined_next_state)

            # Build Q-targets: overwrite each chosen action's value with the reward
            target_m = self.model.predict(state)
            for i in range(len(target_m)):
                target_m[i][0][actions[i]] = reward

            history_m = self.model.fit(state, target_m, epochs=1, verbose=0)
            history_ae_ps = self.autoencoder.fit(prev_state, state, epochs=1, verbose=0)
            history_ae_ns = self.autoencoder.fit(state, imagined_next_state, epochs=1, verbose=0)

            loss_m = history_m.history['loss'][-1]
            loss_ae_ps = history_ae_ps.history['loss'][-1]
            loss_ae_ns = history_ae_ns.history['loss'][-1]
            print("LOSS AE PS:", loss_ae_ps)
            print("LOSS AE NS:", loss_ae_ns)

            # Curiosity reward: difference between the two autoencoder losses
            loss_ae = loss_ae_ns - loss_ae_ps
            print(reward, loss_ae)
            # Note: this return exits after the first minibatch sample
            return loss_ae
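A side note on the code above, separate from the None issue: because `return loss_ae` sits inside the `for` loop, `replay` processes only the first minibatch sample and then exits. A stripped-down sketch of that control flow (with a placeholder loss computation):

```python
def replay_sketch(minibatch):
    # Simplified stand-in for the replay loop above: the return
    # statement fires on the first iteration, skipping the rest.
    for sample in minibatch:
        loss_ae = sample * 2  # placeholder for the real loss computation
        return loss_ae        # exits after the FIRST sample only

print(replay_sketch([1, 2, 3]))  # 2 -- samples 2 and 3 are never processed
```

With `batch=1` (as called in the answer below) this makes no practical difference, but for larger batches most samples would be silently skipped.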

Agent/environment loop code:

    def loop(self, times='inf'):
        # Use == for string comparison; `is` tests object identity and is unreliable here
        if times == 'inf':
            times = 2**31

        reward = 0.0001
        prev_shot = self.get_shot()

        for i in range(times):
            acts, ins, act_probs, shot = self.get_act()

            act_0 = acts[0]
            act_1 = acts[1]
            act_2 = acts[2]
            act_3 = acts[3]

            self.act_to_mouse(act_0, act_1)
            self.act_to_click(act_2)
            self.act_to_keys(act_3)

            reward = self.remember_and_replay(prev_shot, acts, shot, reward, ins)
            if reward is None:
                raise RewardError("Rewards are none.")
            prev_shot = shot
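`RewardError` is not shown in the question; it is presumably a custom exception class. A minimal definition that would make the `raise` above work might look like this (the class body is an assumption, not the poster's code):

```python
# Hypothetical definition of the custom exception used in loop();
# the original class isn't shown in the question.
class RewardError(Exception):
    """Raised when the replay step yields no reward."""
    pass

# It behaves like any built-in exception:
try:
    raise RewardError("Rewards are none.")
except RewardError as e:
    print(e)  # Rewards are none.
```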

【Discussion】:

Tags: python tensorflow keras reinforcement-learning reward


【Solution 1】:

I solved it just as I was typing out the question. I simply wasn't returning the reward from the remember_and_replay method...

The remember_and_replay method looked like this:

    def remember_and_replay(self, prev_shot, action, shot, reward, ins):
        self.dqn.remember(prev_shot, action, shot, reward, ins)
        self.dqn.replay(1)  # return value is discarded, so the caller gets None

When it should have looked like this:

    def remember_and_replay(self, prev_shot, action, shot, reward, ins):
        self.dqn.remember(prev_shot, action, shot, reward, ins)
        rew = self.dqn.replay(1)
        return rew

Hope this helps someone else. :)
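The underlying rule: a Python function that ends without an explicit `return` statement implicitly returns `None`, so any caller that uses its result sees `None`. A minimal illustration of the same mistake and its fix (hypothetical function names, not the poster's code):

```python
def replay_without_return():
    reward = 1.5 - 0.5  # compute a value...
    # ...but never return it, so Python returns None implicitly

def replay_with_return():
    reward = 1.5 - 0.5
    return reward       # explicit return fixes it

print(replay_without_return())  # None
print(replay_with_return())     # 1.0
```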

【Discussion】:
