I am currently trying to optimize the navigation of my robot. I started with a plain DQN and tuned its parameters; the simulated robot reached 8000 goals after 5000 episodes and showed satisfying learning performance. Since DQN is "not the best" in reinforcement learning, I then added Double DQN. Unfortunately, under the same conditions it performs very poorly. My first question is whether I have implemented DDQN correctly, and my second question is how often the target network should be updated. At the moment it is updated after every episode, and an episode can run up to 500 steps (if nothing crashes). I could imagine updating the target more frequently (e.g. every 20 steps), but then I don't understand how the target network is still able to suppress the overestimation of the online network?
Here is the training part of the plain DQN:
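To make the comparison concrete, here is a minimal sketch (not my actual code) of the two bootstrap targets; q_online_next and q_target_next stand for the Q-value rows predicted for the next state by the online and the target network:

import numpy as np

def dqn_target(reward, q_target_next, discount_factor):
    # Plain DQN: one network both selects and evaluates the best next action,
    # so estimation noise tends to be carried into the target (overestimation).
    return reward + discount_factor * np.amax(q_target_next)

def double_dqn_target(reward, q_online_next, q_target_next, discount_factor):
    # Double DQN: the online network selects the action, the target network
    # evaluates it, which decouples action selection from evaluation.
    a = np.argmax(q_online_next)
    return reward + discount_factor * q_target_next[a]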
def getQvalue(self, reward, next_target, done):
    # Standard DQN target: r + gamma * max_a' Q(s', a'), or just r on terminal steps.
    if done:
        return reward
    else:
        return reward + self.discount_factor * np.amax(next_target)

def getAction(self, state):
    # Epsilon-greedy action selection on the online network.
    if np.random.rand() <= self.epsilon:
        self.q_value = np.zeros(self.action_size)
        return random.randrange(self.action_size)
    else:
        q_value = self.model.predict(state.reshape(1, len(state)))
        self.q_value = q_value
        return np.argmax(q_value[0])

def trainModel(self, target=False):
    mini_batch = random.sample(self.memory, self.batch_size)
    X_batch = np.empty((0, self.state_size), dtype=np.float64)
    Y_batch = np.empty((0, self.action_size), dtype=np.float64)
    for i in range(self.batch_size):
        states = mini_batch[i][0]
        actions = mini_batch[i][1]
        rewards = mini_batch[i][2]
        next_states = mini_batch[i][3]
        dones = mini_batch[i][4]
        q_value = self.model.predict(states.reshape(1, len(states)))
        self.q_value = q_value
        # Bootstrap from the target network only once target=True.
        if target:
            next_target = self.target_model.predict(next_states.reshape(1, len(next_states)))
        else:
            next_target = self.model.predict(next_states.reshape(1, len(next_states)))
        next_q_value = self.getQvalue(rewards, next_target, dones)
        X_batch = np.append(X_batch, np.array([states.copy()]), axis=0)
        Y_sample = q_value.copy()
        Y_sample[0][actions] = next_q_value
        Y_batch = np.append(Y_batch, np.array([Y_sample[0]]), axis=0)
        if dones:
            X_batch = np.append(X_batch, np.array([next_states.copy()]), axis=0)
            Y_batch = np.append(Y_batch, np.array([[rewards] * self.action_size]), axis=0)
    self.model.fit(X_batch, Y_batch, batch_size=self.batch_size, epochs=1, verbose=0)

And here is the Double DQN update:
def getQvalue(self, reward, next_target, next_q_value_1, done):
    if done:
        return reward
    else:
        # Double DQN target: the online network picks the action,
        # the target network supplies that action's value.
        a = np.argmax(next_q_value_1[0])
        return reward + self.discount_factor * next_target[0][a]

def getAction(self, state):
    if np.random.rand() <= self.epsilon:
        self.q_value = np.zeros(self.action_size)
        return random.randrange(self.action_size)
    else:
        q_value = self.model.predict(state.reshape(1, len(state)))
        self.q_value = q_value
        return np.argmax(q_value[0])

def trainModel(self, target=False):
    mini_batch = random.sample(self.memory, self.batch_size)
    X_batch = np.empty((0, self.state_size), dtype=np.float64)
    Y_batch = np.empty((0, self.action_size), dtype=np.float64)
    for i in range(self.batch_size):
        states = mini_batch[i][0]
        actions = mini_batch[i][1]
        rewards = mini_batch[i][2]
        next_states = mini_batch[i][3]
        dones = mini_batch[i][4]
        q_value = self.model.predict(states.reshape(1, len(states)))
        self.q_value = q_value
        if target:
            next_q_value_1 = self.model.predict(next_states.reshape(1, len(next_states)))
            next_target = self.target_model.predict(next_states.reshape(1, len(next_states)))
        else:
            next_q_value_1 = self.model.predict(next_states.reshape(1, len(next_states)))
            next_target = self.model.predict(next_states.reshape(1, len(next_states)))
        next_q_value = self.getQvalue(rewards, next_target, next_q_value_1, dones)
        X_batch = np.append(X_batch, np.array([states.copy()]), axis=0)
        Y_sample = q_value.copy()
        Y_sample[0][actions] = next_q_value
        Y_batch = np.append(Y_batch, np.array([Y_sample[0]]), axis=0)
        if dones:
            X_batch = np.append(X_batch, np.array([next_states.copy()]), axis=0)
            Y_batch = np.append(Y_batch, np.array([[rewards] * self.action_size]), axis=0)
    self.model.fit(X_batch, Y_batch, batch_size=self.batch_size, epochs=1, verbose=0)

Basically, the change is in the getQvalue part: I select the action with the online network and then take that action's value from the target network. The target network (target=True) is only used after 2000 global steps, since before that (roughly the first 10 episodes) it would not make much sense yet (see the rough loop sketch below). Best regards and thanks in advance!
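For context, this is roughly how the surrounding training loop drives it at the moment (simplified; env, agent, max_episodes and updateTargetModel are placeholders for my actual classes and helpers):

global_step = 0
for episode in range(max_episodes):
    state = env.reset()
    for step in range(500):  # an episode runs up to 500 steps if nothing crashes
        action = agent.getAction(state)
        next_state, reward, done = env.step(action)
        agent.memory.append((state, action, reward, next_state, done))
        global_step += 1
        if len(agent.memory) >= agent.batch_size:
            # the target network is only bootstrapped from after 2000 global steps
            agent.trainModel(target=(global_step >= 2000))
        state = next_state
        if done:
            break
    # currently the target network is synchronized once per episode
    agent.updateTargetModel()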
Posted on 2019-11-08 00:49:59
You should not update the target network after every episode; the target network was introduced precisely to stabilize the training of the Q-values. Depending on the environment, the update interval should be somewhere around every 100, 1,000, or 10,000 steps.
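As a rough sketch (assuming a Keras-style model; agent, global_step and target_update_freq are placeholder names), a step-based hard update could look like this:

# inside the training loop, after every environment step:
global_step += 1
if global_step % target_update_freq == 0:  # e.g. every 1000 steps
    # hard update: copy the online weights into the target network
    agent.target_model.set_weights(agent.model.get_weights())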
You can check this question, where I modified the code in question: Cartpole-v0 loss increasing using DQN
https://stackoverflow.com/questions/58657282