Okay, so I'm trying to build an intrinsic-curiosity agent using Keras and TensorFlow. The agent's reward function is the difference between the autoencoder's loss between the previous state and the current state, and the autoencoder's loss between the current state and the imagined next state. However, the reward function always returns None instead of the actual difference. I tried printing the losses, and they always show the correct values.
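For concreteness, here is a minimal, self-contained sketch of that reward signal. The single-Dense-layer autoencoder and the 4-dimensional state shape are stand-ins made up for illustration; they are not taken from the question:

import numpy as np
from tensorflow import keras

# Toy stand-in autoencoder; the real architecture is not shown in the question.
autoencoder = keras.Sequential([keras.layers.Dense(4, input_shape=(4,))])
autoencoder.compile(optimizer='adam', loss='mse')

prev_state = np.random.random((1, 4))
state = np.random.random((1, 4))
imagined_next_state = np.random.random((1, 4))

# evaluate() returns the scalar loss when no extra metrics are compiled.
loss_prev_to_curr = autoencoder.evaluate(prev_state, state, verbose=0)
loss_curr_to_imag = autoencoder.evaluate(state, imagined_next_state, verbose=0)

# The curiosity reward: difference of the two reconstruction losses.
reward = loss_curr_to_imag - loss_prev_to_curr
print(reward)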
Reward function / replay code:
# Requires `import random as R` and `import numpy as np` at module level.
def replay(self, batch):
    minibatch = R.sample(self.memory, batch)
    for prev_state, actions, state, reward, imagined_next_state in minibatch:
        # Perturb the imagined next state with random noise.
        imagined_next_state = np.add(np.random.random(self.state_size), imagined_next_state)
        # Build the DQN target: overwrite each taken action's Q-value with the reward.
        target_m = self.model.predict(state)
        for i in range(len(target_m)):
            target_m[i][0][actions[i]] = reward
        history_m = self.model.fit(state, target_m, epochs=1, verbose=0)
        # Fit the autoencoder on both transitions and read the losses back.
        history_ae_ps = self.autoencoder.fit(prev_state, state, epochs=1, verbose=0)
        history_ae_ns = self.autoencoder.fit(state, imagined_next_state, epochs=1, verbose=0)
        loss_m = history_m.history['loss'][-1]
        loss_ae_ps = history_ae_ps.history['loss'][-1]
        loss_ae_ns = history_ae_ns.history['loss'][-1]
        print("LOSS AE PS:", loss_ae_ps)
        print("LOSS AE NS:", loss_ae_ns)
        loss_ae = loss_ae_ns - loss_ae_ps
        print(reward, loss_ae)
        # Returning inside the loop means only the first sampled transition
        # yields a reward; harmless here since replay(1) is always used.
        return loss_ae
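As an aside, the per-batch losses pulled out of the History objects above can also be read directly from train_on_batch, which performs one gradient step and returns the scalar loss (assuming no extra compiled metrics). A minimal sketch with the same toy setup as the earlier snippet:

import numpy as np
from tensorflow import keras

autoencoder = keras.Sequential([keras.layers.Dense(4, input_shape=(4,))])
autoencoder.compile(optimizer='adam', loss='mse')

prev_state = np.random.random((1, 4))
state = np.random.random((1, 4))
imagined_next_state = np.random.random((1, 4))

# One training step per call; the loss comes back directly.
loss_ae_ps = autoencoder.train_on_batch(prev_state, state)
loss_ae_ns = autoencoder.train_on_batch(state, imagined_next_state)
loss_ae = loss_ae_ns - loss_ae_ps
print(loss_ae)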
Agent-environment loop code:

def loop(self, times='inf'):
    # `is` tests identity, not equality; use == to compare strings.
    if times == 'inf':
        times = 2**31
    reward = 0.0001
    prev_shot = self.get_shot()
    for i in range(times):
        acts, ins, act_probs, shot = self.get_act()
        act_0 = acts[0]
        act_1 = acts[1]
        act_2 = acts[2]
        act_3 = acts[3]
        self.act_to_mouse(act_0, act_1)
        self.act_to_click(act_2)
        self.act_to_keys(act_3)
        reward = self.remember_and_replay(prev_shot, acts, shot, reward, ins)
        if reward is None:
            # RewardError is a custom exception defined elsewhere in the project.
            raise RewardError("Rewards are none.")
        prev_shot = shot

Posted on 2019-09-17 22:04:05
I solved this while typing up the question. I simply wasn't returning the reward from the remember_and_replay method...
The remember_and_replay method looked like this:
def remember_and_replay(self, prev_shot, action, shot, reward, ins):
    self.dqn.remember(prev_shot, action, shot, reward, ins)
    self.dqn.replay(1)  # return value is discarded, so the caller gets None

It should have looked like this:
def remember_and_replay(self, prev_shot, action, shot, reward, ins):
    self.dqn.remember(prev_shot, action, shot, reward, ins)
    rew = self.dqn.replay(1)
    return rew
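For anyone wondering why the broken version yielded None: a Python function that finishes without hitting a return statement implicitly returns None, which is exactly what tripped the guard in loop(). A minimal illustration:

def without_return():
    value = 1 + 1  # computed, but never returned

def with_return():
    value = 1 + 1
    return value

print(without_return())  # None
print(with_return())     # 2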
Hope this helps someone else. :)

https://stackoverflow.com/questions/57976012