
DQN not learning

Stack Overflow user
Asked on 2020-12-21 17:53:30
1 answer · 166 views · 0 followers · score 1

I am trying to implement a DQN for the CartPole environment using PyTorch. I don't know why, but no matter how long I train the agent, the scores just fluctuate and never stay high, even though they generally increase. The code comes from a DQN tutorial written for TensorFlow, which runs fine, but when I converted it to PyTorch it stopped learning. Here is the model:

import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim


class Net(nn.Module):
    def __init__(self, state_size, action_size):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(state_size, 24)
        self.fc2 = nn.Linear(24, 24)
        self.fc3 = nn.Linear(24, action_size)

    def forward(self, inputs):
        x = torch.from_numpy(inputs)  # expects a numpy array; fails if given a tensor
        x = F.relu(self.fc1(x.float()))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

class DQNAgent(nn.Module):
    def __init__(self, state_size, action_size):
        super(DQNAgent, self).__init__()
        self.state_size = state_size
        self.action_size = action_size
        self.memory = deque(maxlen=2000)  # replay buffer
        self.model, self.criterion, self.optimizer = self.build_model()

        self.epsilon = 1.0
        self.epsilon_decay = 0.995
        self.epsilon_min = 0.01
        self.gamma = 0.95

    def build_model(self):
        model = Net(self.state_size, self.action_size)
        model = model.float()

        criterion = nn.MSELoss()
        optimizer = optim.Adam(model.parameters(), lr=0.001)  # might need to return criterion and optimizer

        return model, criterion, optimizer

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        # epsilon-greedy: explore with probability epsilon, otherwise act greedily
        if np.random.rand() <= self.epsilon:
            return random.randrange(self.action_size)
        act_values = self.model(state)
        return np.argmax(act_values.detach().numpy())

    def replay(self, batch_size):
        minibatch = random.sample(self.memory, batch_size)
        for state, action, reward, next_state, done in minibatch:
            if done:
                target = reward
            elif not done:
                target = reward + self.gamma * torch.max(self.model(next_state))  # --> a tensor

            target_f = self.model(state)
            target_f[0][action] = target

            # self.model.fit(state, target_f, epochs=1, verbose=0)
            loss = self.criterion(self.model(state), target_f)
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()

        if self.epsilon > self.epsilon_min:
            self.epsilon = self.epsilon * self.epsilon_decay

    def load(self, name):
        pass

    def save(self, name):
        pass

...and the training loop:

for e in range(n_episodes):
    state = env.reset()
    state = np.reshape(state, [1, state_size])

    for time in range(5000):
        # env.render()
        action = agent.act(state)

        next_state, reward, done, _ = env.step(action)
        reward = reward if not done else -10  # penalize episode termination

        next_state = np.reshape(next_state, [1, state_size])

        agent.remember(state, action, reward, next_state, done)

        state = next_state

        if done:
            print("Episode {}/{}, score: {}, e: {:.2}".format(e, n_episodes, time, agent.epsilon))
            break

    if len(agent.memory) > batch_size:
        agent.replay(batch_size)
        memory = agent.memory

I would appreciate any advice! Very confused right now. Thanks!


1 Answer

Stack Overflow user

Answered on 2022-04-17 07:45:53

If the model is unstable, it is better to convert the variables "state" and "next_state" to tensors before feeding them into the network. This is your code:

elif not done:
    target = reward + self.gamma*torch.max(self.model(next_state))

Before feeding the state data into the network, you should add the following line:

next_state = torch.tensor(next_state)
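
In context, that conversion goes at the top of the replay loop, before any call to the model. Note that the asker's Net.forward calls torch.from_numpy, which only accepts numpy arrays, so feeding it tensors also means removing that call. A minimal sketch of the placement (the dtype argument is an addition here, since nn.Linear weights are float32):

for state, action, reward, next_state, done in minibatch:
    # convert once per transition, before any model call
    # (assumes Net.forward no longer calls torch.from_numpy)
    state = torch.tensor(state, dtype=torch.float32)
    next_state = torch.tensor(next_state, dtype=torch.float32)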

For the Q-values computed from the state, you can change the code to:

target_f = self.model(torch.tensor(state, requires_grad=True))
loss = self.criterion(self.model(torch.tensor(state, requires_grad=True)), target_f)
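
Putting the pieces together, a minimal corrected replay could look like the sketch below. This is a reconstruction under stated assumptions, not the answerer's code: besides the tensor conversion, it detaches the bootstrap target and uses a graph-free copy of the predicted Q-values as the regression target, so gradients flow only through the prediction for the stored state. Training against a target that itself carries gradients is a common reason this pattern fluctuates instead of converging.

def replay(self, batch_size):
    minibatch = random.sample(self.memory, batch_size)
    for state, action, reward, next_state, done in minibatch:
        # assumes Net.forward accepts tensors directly (no torch.from_numpy inside)
        state = torch.tensor(state, dtype=torch.float32)
        next_state = torch.tensor(next_state, dtype=torch.float32)

        if done:
            target = reward
        else:
            # detach(): no gradient should flow through the bootstrap estimate
            target = reward + self.gamma * torch.max(self.model(next_state)).detach()

        prediction = self.model(state)           # Q-values to train; keeps its graph
        target_f = prediction.detach().clone()   # graph-free copy used as the target
        target_f[0][action] = target

        loss = self.criterion(prediction, target_f)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    if self.epsilon > self.epsilon_min:
        self.epsilon *= self.epsilon_decay

With this change, act would need the same torch.tensor conversion before calling the model.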
Score: 0
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/65397660
