
Is RLlib's `rollout.py` meant for evaluation?

Stack Overflow user
Asked on 2021-01-19 04:28:42
1 answer · 964 views · 0 followers · 0 votes

TL;DR: RLlib's rollout command appears to be training the network rather than evaluating it.

I am trying to train, save, and evaluate a neural network on a custom simulator using Ray RLlib's DQN. To prototype the workflow, I have been using OpenAI Gym's CartPole-v0 environment. When I ran the rollout command for evaluation, I got some strange results. (I used exactly the method documented under Evaluating Trained Policies in RLlib's Training APIs.)

First, I trained a vanilla DQN network until it reached an episode_reward_mean of 200. Then I tested the network in CartPole-v0 for 1000 episodes using the rllib rollout command. For the first 135 episodes the rewards were poor, ranging from 10 to 200. From episode 136 onward, however, the score was consistently 200, which is a perfect score for CartPole-v0.

So rllib rollout seems to be training the network rather than evaluating it. I know this cannot actually be the case, because there is no training code in the rollout.py module. But I have to say it really looks like training. How else could the scores gradually improve as more episodes go by? Moreover, later in the evaluation the network appears to be "adapting" to different starting positions, which looks to me like evidence of training.

Why is this happening?

The code I used is as follows:

  • Training
from ray import tune

results = tune.run(
    "DQN",
    stop={"episode_reward_mean": 200},  # stop once CartPole-v0 is solved
    config={
        "env": "CartPole-v0",
        "num_workers": 6
    },
    checkpoint_freq=0,                  # no periodic checkpoints
    keep_checkpoints_num=1,             # keep only the best checkpoint
    checkpoint_score_attr="episode_reward_mean",
    checkpoint_at_end=True,             # save a final checkpoint when training stops
    local_dir=r"/home/ray_results/CartPole_Evaluation"
)
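(Since Tune auto-generates the trial directory name, the checkpoint path passed to rollout below has to be looked up after training. Here is a minimal convenience sketch for doing that; it is not part of the RLlib API, and the glob pattern and variable names are my own, assuming the local_dir above:)

import glob
import os

# Hypothetical helper: locate the newest checkpoint file written by Tune.
# The trial directory name (e.g. DQN_CartPole-v0_13hfd) is auto-generated.
pattern = ("/home/ray_results/CartPole_Evaluation/"
           "DQN_CartPole-v0_*/checkpoint_*/checkpoint-*")
candidates = [p for p in glob.glob(pattern)
              if not p.endswith(".tune_metadata")]
checkpoint_path = max(candidates, key=os.path.getmtime)
print(checkpoint_path)  # pass this path to `rllib rollout`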
  • Evaluation
rllib rollout ~/ray_results/CartPole_Evaluation/DQN_CartPole-v0_13hfd/checkpoint_139/checkpoint-139 \
             --run DQN --env CartPole-v0 --episodes 1000
  • Results
2021-01-12 17:26:48,764 INFO trainable.py:489 -- Current state after restoring: {'_iteration': 77, '_timesteps_total': None, '_time_total': 128.41606998443604, '_episodes_total': 819}
Episode #0: reward: 21.0
Episode #1: reward: 13.0
Episode #2: reward: 13.0
Episode #3: reward: 27.0
Episode #4: reward: 26.0
Episode #5: reward: 14.0
Episode #6: reward: 16.0
Episode #7: reward: 22.0
Episode #8: reward: 25.0
Episode #9: reward: 17.0
Episode #10: reward: 16.0
Episode #11: reward: 31.0
Episode #12: reward: 10.0
Episode #13: reward: 23.0
Episode #14: reward: 17.0
Episode #15: reward: 41.0
Episode #16: reward: 46.0
Episode #17: reward: 15.0
Episode #18: reward: 17.0
Episode #19: reward: 32.0
Episode #20: reward: 25.0
...
Episode #114: reward: 134.0
Episode #115: reward: 90.0
Episode #116: reward: 38.0
Episode #117: reward: 33.0
Episode #118: reward: 36.0
Episode #119: reward: 114.0
Episode #120: reward: 183.0
Episode #121: reward: 200.0
Episode #122: reward: 166.0
Episode #123: reward: 200.0
Episode #124: reward: 155.0
Episode #125: reward: 181.0
Episode #126: reward: 72.0
Episode #127: reward: 200.0
Episode #128: reward: 54.0
Episode #129: reward: 196.0
Episode #130: reward: 200.0
Episode #131: reward: 200.0
Episode #132: reward: 188.0
Episode #133: reward: 200.0
Episode #134: reward: 200.0
Episode #135: reward: 173.0
Episode #136: reward: 200.0
Episode #137: reward: 200.0
Episode #138: reward: 200.0
Episode #139: reward: 200.0
Episode #140: reward: 200.0
...
Episode #988: reward: 200.0
Episode #989: reward: 200.0
Episode #990: reward: 200.0
Episode #991: reward: 200.0
Episode #992: reward: 200.0
Episode #993: reward: 200.0
Episode #994: reward: 200.0
Episode #995: reward: 200.0
Episode #996: reward: 200.0
Episode #997: reward: 200.0
Episode #998: reward: 200.0
Episode #999: reward: 200.0

1 Answer

Stack Overflow user

Answered on 2021-01-22 04:26:47

I posted the same question on the Ray Discuss forum and got an answer there that solved the problem.

Because I called rollout on a trained network whose EpsilonGreedy exploration module was configured with a 10k-step schedule, the agent initially picked its actions essentially at random. As it stepped through more timesteps, the random fraction was annealed down to 0.02, so the network increasingly chose only the best actions. This is why the restored agent appeared to be training when run with rollout.
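For context, this schedule matches RLlib's default EpsilonGreedy exploration settings for DQN. Roughly (the exact defaults can vary between RLlib versions, so treat the values below as illustrative):

# Approximate RLlib DQN defaults: epsilon is annealed from 1.0 to 0.02
# over the first 10,000 timesteps, then held at 0.02.
exploration_config = {
    "type": "EpsilonGreedy",
    "initial_epsilon": 1.0,      # start with fully random actions
    "final_epsilon": 0.02,       # residual randomness after annealing
    "epsilon_timesteps": 10000,  # steps over which epsilon decays
}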

As Sven Mika suggested, the solution is simply to suppress exploration behavior for evaluation:

config:
    evaluation_config:
        explore: false

With this, the agent scored 200 in every test episode!
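Exploration can also be disabled per call when evaluating a restored agent directly in Python. A minimal sketch, assuming the RLlib 1.x agents API and an illustrative checkpoint path:

import gym
import ray
from ray.rllib.agents.dqn import DQNTrainer

ray.init()

# Rebuild a trainer with the same env, then load the saved weights.
trainer = DQNTrainer(config={"env": "CartPole-v0"})
trainer.restore("/home/ray_results/CartPole_Evaluation/"
                "DQN_CartPole-v0_13hfd/checkpoint_139/checkpoint-139")

env = gym.make("CartPole-v0")
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    # explore=False forces greedy (deterministic) action selection.
    action = trainer.compute_action(obs, explore=False)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward}")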

2 votes
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/65785488
