【Posted】: 2019-11-01 10:02:08
【Question】:
I am currently trying to optimize the navigation of my robot. I first used a vanilla DQN, for which I tuned the parameters. The simulated robot reached 8000 goals after 5000 episodes and showed satisfactory learning performance. Since plain DQN is not the strongest method in reinforcement learning, I then added Double DQN. Unfortunately, under the same conditions it performs very poorly.

My first question is whether I have implemented DDQN correctly. My second question is how often the target network should be optimized. Right now it is updated after every episode, and one episode can run up to 500 steps (if there is no crash). I could imagine updating the target more frequently (e.g. every 20 steps), but then I don't see how the target network can still suppress the overestimation of the online network.
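For reference, a common alternative to per-episode updates is a hard update of the target network every fixed number of environment steps. Below is a minimal sketch, assuming a Keras-style model with get_weights/set_weights; the function name, step counter, and update interval are illustrative, not taken from the code further down:

def update_target(model, target_model):
    # Hard update: copy the online network's weights into the target network.
    target_model.set_weights(model.get_weights())

# In the environment loop (hypothetical names):
# global_step += 1
# if global_step % target_update_interval == 0:  # e.g. every 500-2000 steps
#     update_target(agent.model, agent.target_model)

The target network does not have to counteract overestimation at every single step; it only needs to stay frozen long enough that the bootstrap targets remain stable between updates.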
Here is the training part of the plain DQN:
def getQvalue(self, reward, next_target, done):
    # Vanilla DQN target: reward plus the discounted max Q-value of the next state.
    if done:
        return reward
    else:
        return reward + self.discount_factor * np.amax(next_target)

def getAction(self, state):
    # Epsilon-greedy action selection.
    if np.random.rand() <= self.epsilon:
        self.q_value = np.zeros(self.action_size)
        return random.randrange(self.action_size)
    else:
        q_value = self.model.predict(state.reshape(1, len(state)))
        self.q_value = q_value
        return np.argmax(q_value[0])

def trainModel(self, target=False):
    mini_batch = random.sample(self.memory, self.batch_size)
    X_batch = np.empty((0, self.state_size), dtype=np.float64)
    Y_batch = np.empty((0, self.action_size), dtype=np.float64)
    for i in range(self.batch_size):
        states = mini_batch[i][0]
        actions = mini_batch[i][1]
        rewards = mini_batch[i][2]
        next_states = mini_batch[i][3]
        dones = mini_batch[i][4]
        q_value = self.model.predict(states.reshape(1, len(states)))
        self.q_value = q_value
        if target:
            # After the warm-up phase, bootstrap from the target network.
            next_target = self.target_model.predict(next_states.reshape(1, len(next_states)))
        else:
            next_target = self.model.predict(next_states.reshape(1, len(next_states)))
        next_q_value = self.getQvalue(rewards, next_target, dones)
        X_batch = np.append(X_batch, np.array([states.copy()]), axis=0)
        Y_sample = q_value.copy()
        Y_sample[0][actions] = next_q_value
        Y_batch = np.append(Y_batch, np.array([Y_sample[0]]), axis=0)
        if dones:
            # Terminal transitions are additionally stored with the pure reward as target.
            X_batch = np.append(X_batch, np.array([next_states.copy()]), axis=0)
            Y_batch = np.append(Y_batch, np.array([[rewards] * self.action_size]), axis=0)
    self.model.fit(X_batch, Y_batch, batch_size=self.batch_size, epochs=1, verbose=0)
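As a side note on the code above: calling model.predict once per sample inside the loop is slow, and np.append re-allocates the arrays on every iteration. The same targets can be built with one prediction per network over the whole minibatch. A minimal sketch under the same (state, action, reward, next_state, done) memory layout; the function name and its arguments are illustrative:

import random
import numpy as np

def build_dqn_batch(model, target_model, memory, batch_size, discount_factor):
    mini_batch = random.sample(memory, batch_size)
    states = np.array([sample[0] for sample in mini_batch])
    actions = np.array([sample[1] for sample in mini_batch])
    rewards = np.array([sample[2] for sample in mini_batch], dtype=np.float64)
    next_states = np.array([sample[3] for sample in mini_batch])
    dones = np.array([sample[4] for sample in mini_batch], dtype=bool)

    q_values = model.predict(states)                 # one call for the whole batch
    next_target = target_model.predict(next_states)  # likewise
    targets = rewards + discount_factor * np.amax(next_target, axis=1)
    targets[dones] = rewards[dones]                  # terminal transitions: reward only
    q_values[np.arange(batch_size), actions] = targets
    return states, q_values                          # feed to model.fit(states, q_values, ...)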
And here is the updated version for Double DQN:
def getQvalue(self, reward, next_target, next_q_value_1, done):
    # Double DQN target: the online network selects the action,
    # the target network evaluates it.
    if done:
        return reward
    else:
        a = np.argmax(next_q_value_1[0])
        return reward + self.discount_factor * next_target[0][a]

def getAction(self, state):
    # Epsilon-greedy action selection (unchanged).
    if np.random.rand() <= self.epsilon:
        self.q_value = np.zeros(self.action_size)
        return random.randrange(self.action_size)
    else:
        q_value = self.model.predict(state.reshape(1, len(state)))
        self.q_value = q_value
        return np.argmax(q_value[0])

def trainModel(self, target=False):
    mini_batch = random.sample(self.memory, self.batch_size)
    X_batch = np.empty((0, self.state_size), dtype=np.float64)
    Y_batch = np.empty((0, self.action_size), dtype=np.float64)
    for i in range(self.batch_size):
        states = mini_batch[i][0]
        actions = mini_batch[i][1]
        rewards = mini_batch[i][2]
        next_states = mini_batch[i][3]
        dones = mini_batch[i][4]
        q_value = self.model.predict(states.reshape(1, len(states)))
        self.q_value = q_value
        if target:
            next_q_value_1 = self.model.predict(next_states.reshape(1, len(next_states)))
            next_target = self.target_model.predict(next_states.reshape(1, len(next_states)))
        else:
            # Warm-up: both predictions come from the online network,
            # which reduces the update to plain DQN.
            next_q_value_1 = self.model.predict(next_states.reshape(1, len(next_states)))
            next_target = self.model.predict(next_states.reshape(1, len(next_states)))
        # This call must not be commented out; next_q_value is used below.
        next_q_value = self.getQvalue(rewards, next_target, next_q_value_1, dones)
        X_batch = np.append(X_batch, np.array([states.copy()]), axis=0)
        Y_sample = q_value.copy()
        Y_sample[0][actions] = next_q_value
        Y_batch = np.append(Y_batch, np.array([Y_sample[0]]), axis=0)
        if dones:
            X_batch = np.append(X_batch, np.array([next_states.copy()]), axis=0)
            Y_batch = np.append(Y_batch, np.array([[rewards] * self.action_size]), axis=0)
    self.model.fit(X_batch, Y_batch, batch_size=self.batch_size, epochs=1, verbose=0)
Basically, the change is in the getQvalue part: I select the action with the online network and then take that action's value from the target network. The target network (the target=True case) is only used after 2000 global steps, before which it would not be meaningful yet (roughly the first 10 episodes). Best regards and thanks in advance!
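For comparison, the decoupling that Double DQN is supposed to achieve can be written down in isolation: the online network picks the greedy action, the target network supplies its value. A minimal sketch with illustrative names, assuming (1, action_size) prediction arrays as in the code above:

import numpy as np

def double_dqn_target(reward, next_q_online, next_q_target, discount_factor, done):
    # next_q_online / next_q_target: predictions of shape (1, action_size)
    if done:
        return reward
    a = np.argmax(next_q_online[0])                         # selection: online network
    return reward + discount_factor * next_q_target[0][a]   # evaluation: target network

This matches the getQvalue method above, so the selection/evaluation split itself looks correct. Note, though, that in the target=False warm-up branch both predictions come from the same network, so the update is effectively plain DQN (with the argmax written differently) rather than Double DQN.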
【Discussion】:
Tags: python machine-learning q-learning