
How can I get SARSA code for a grid world model in an R program?

Stack Overflow user
Asked on 2016-12-14 19:30:53
1 answer · 460 views · 0 followers · 2 votes

I have a problem in my research case. I am interested in reinforcement learning for a grid world model. The model is a 7x7 maze that the agent moves through. Consider a maze-like field with four movement directions: up, down, left, right (or N, E, S, W). So there are at most a limited number of policies. Many of them can be excluded once a direct penalty for bumping into a wall is used, and when a no-return principle is also adopted, usually even fewer actions are admissible. Many policies differ only in the part after the goal, or are equivalent.

States: the grid cells, some of which are obstacles.
Rewards: r = 1 if s = G, else r = 0 for any admissible move, otherwise r = -100.
Initialization: Q0(a, s) ~ N(0, 0.01)
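For concreteness, here is a minimal R sketch of that initialization, assuming a Q-table with one row per cell of the 7x7 grid and one column per action, and reading N(0, 0.01) as mean 0 and variance 0.01 (so sd = 0.1); the names are illustrative and not from the original post:

# Illustrative sketch of the stated initialization Q0(a, s) ~ N(0, 0.01)
# (assumption: 0.01 is the variance, i.e. sd = 0.1); one row per grid cell, one column per action.
actions  <- c("N", "S", "E", "W")
n.states <- 7 * 7
set.seed(1)
Q <- matrix(rnorm(n.states * length(actions), mean = 0, sd = 0.1),
            nrow = n.states, ncol = length(actions),
            dimnames = list(NULL, actions))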

To solve this model I wrote the R code below, but it does not work properly.

Model: 7x7 grid. S: start state, G: terminal state, O: accessible state, X: wall

 [O,O,G,X,O,O,S]
 [O,X,O,X,O,X,X]
 [O,X,O,X,O,O,O]
 [O,X,O,X,O,X,O]
 [O,X,O,O,O,X,O]
 [O,X,O,X,O,X,O]
 [O,O,O,X,O,O,O]

So I would like to know how to correct the code for this grid world model (the R code below, not the layout above), and how to solve this model with SARSA.

actions <- c("N", "S", "E", "W")

x <- 1:7
y <- 1:7

rewards <- matrix(c(
     0,    0,    1, -100,    0,    0,    0,
     0, -100,    0, -100,    0, -100, -100,
     0, -100,    0, -100,    0,    0,    0,
     0, -100,    0, -100,    0, -100,    0,
     0, -100,    0,    0,    0, -100,    0,
     0, -100,    0, -100,    0, -100,    0,
     0,    0,    0, -100,    0,    0,    0
), nrow=7, byrow=TRUE)  # -100 = wall, 1 = goal; rows/columns mirror the maze above

 values <- rewards # initial values

 states <- expand.grid(x=x, y=y)

 # Transition probability
 transition <- list("N" = c("N" = 0.8, "S" = 0, "E" = 0.1, "W" = 0.1), 
         "S"= c("S" = 0.8, "N" = 0, "E" = 0.1, "W" = 0.1),
         "E"= c("E" = 0.8, "W" = 0, "S" = 0.1, "N" = 0.1),
         "W"= c("W" = 0.8, "E" = 0, "S" = 0.1, "N" = 0.1))

 # The value of an action (e.g. move north means y + 1)
 action.values <- list("N" = c("x" = 0, "y" = 1), 
         "S" = c("x" = 0, "y" = -1),
         "E" = c("x" = 1, "y" = 0),
         "W" = c("x" = -1, "y" = 0))

 # act() function serves to move the robot through states based on an action
 act <- function(action, state) {
     action.value <- action.values[[action]]
     new.state <- state
         if(state["x"] == 1 && state["y"] == 7 || (state["x"] == 1 && state["y"] == 3))
         return(state)
     #
     new.x = state["x"] + action.value["x"]
     new.y = state["y"] + action.value["y"]
     # Constrained by edge of grid
     new.state["x"] <- min(x[length(x)], max(x[1], new.x))
     new.state["y"] <- min(y[length(y)], max(y[1], new.y))
     #
     if(is.na(rewards[new.state["y"], new.state["x"]]))
         new.state <- state
     #
     return(new.state)
 }


 rewards

 bellman.update <- function(action, state, values, gamma=1) {
     state.transition.prob <- transition[[action]]
     q <- rep(0, length(state.transition.prob))
     for(i in 1:length(state.transition.prob)) {        
         new.state <- act(names(state.transition.prob)[i], state) 
        q[i] <- state.transition.prob[i] * (rewards[state["y"], state["x"]] + gamma * values[new.state["y"], new.state["x"]])
     }
     sum(q)
 }

 value.iteration <- function(states, actions, rewards, values, gamma, niter, n) {
     for (j in 1:niter) {
         for (i in 1:nrow(states)) {
             state <- unlist(states[i,])
             if(i %in% c(7, 15)) next # terminal states
            q.values <- as.numeric(lapply(actions, bellman.update, state=state, values=values, gamma=gamma))
             values[state["y"], state["x"]] <- max(q.values)
         }
     }
     return(values)
 }

 final.values <- value.iteration(states=states, actions=actions, rewards=rewards, values=values, gamma=0.99, niter=100, n=10)

 final.values

1 Answer

Stack Overflow user

Posted on 2017-01-09 20:45:05

The problem is that your penalty is much larger than your reward. The agent may prefer to throw itself against a wall rather than try to reach the reward, because the state-action values converge to very low numbers, even below -100, depending on the rewards of the actions.
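As a rough illustration of why that happens, here is the one-step SARSA update written out in R; the step size, discount factor, and the self-loop assumption (bumping into a wall leaves the agent where it is) are chosen for the example and are not from the answer:

# SARSA update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a))
sarsa.update <- function(q.sa, r, q.next, alpha = 0.1, gamma = 0.99) {
    q.sa + alpha * (r + gamma * q.next - q.sa)
}
q <- 0
for (k in 1:50) q <- sarsa.update(q, r = -100, q.next = q)  # keep bumping the same wall
q   # already far below -100; under repeated -100 rewards the fixed point is r / (1 - gamma) = -10000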

Below is the model I simulated with value iteration (it represents the values that SARSA should converge to):

The value table shows the state values of the model in the picture, but it is flipped (I have not fixed that yet).

In this case I used reward and penalty values very similar to yours: -15 for a neutral state (a wall), 1.0 for the ball, and -100 for a block. The agent receives 0.0 for every action, and the transition probabilities are the same.

The agent has to reach the ball, but as you can see, the state values converge to very small numbers. Here you can see that the states adjacent to the ball have lower values, so the agent prefers to never reach its goal.

To fix your problem, try reducing the penalty.
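For reference, here is a self-contained SARSA sketch in R for the 7x7 maze from the question. It is not a fix of the asker's code above; it is a fresh sketch under stated assumptions: an epsilon-greedy policy, a mild bump penalty of -1 (as suggested above) instead of -100, reward 1 for reaching G and 0 otherwise, and the small random initialization from the question. The function and variable names (step.fn, choose.action, sarsa) are illustrative.

# SARSA on the 7x7 maze from the question, with a mild wall penalty (-1).
grid <- matrix(c("O","O","G","X","O","O","S",
                 "O","X","O","X","O","X","X",
                 "O","X","O","X","O","O","O",
                 "O","X","O","X","O","X","O",
                 "O","X","O","O","O","X","O",
                 "O","X","O","X","O","X","O",
                 "O","O","O","X","O","O","O"),
               nrow = 7, byrow = TRUE)

actions <- c("N", "S", "E", "W")
moves   <- list(N = c(-1, 0), S = c(1, 0), E = c(0, 1), W = c(0, -1))  # (row, col) offsets; N = up one row

start <- which(grid == "S", arr.ind = TRUE)[1, ]   # row 1, col 7
goal  <- which(grid == "G", arr.ind = TRUE)[1, ]   # row 1, col 3

# One environment step: next state and reward for taking `action` in `state`.
step.fn <- function(state, action) {
    cand <- state + moves[[action]]
    blocked <- cand[1] < 1 || cand[1] > 7 || cand[2] < 1 || cand[2] > 7 ||
               grid[cand[1], cand[2]] == "X"
    if (blocked) return(list(state = state, reward = -1))               # bump: stay put
    if (grid[cand[1], cand[2]] == "G") return(list(state = cand, reward = 1))
    list(state = cand, reward = 0)
}

# Epsilon-greedy action selection from the Q-table.
choose.action <- function(Q, state, eps) {
    if (runif(1) < eps) return(sample(actions, 1))
    actions[which.max(Q[state[1], state[2], ])]
}

sarsa <- function(episodes = 2000, alpha = 0.1, gamma = 0.99, eps = 0.1) {
    # Q0(a, s) ~ N(0, 0.01) as in the question (assumed sd = 0.1)
    Q <- array(rnorm(7 * 7 * 4, mean = 0, sd = 0.1), dim = c(7, 7, 4),
               dimnames = list(NULL, NULL, actions))
    for (ep in 1:episodes) {
        s <- start
        a <- choose.action(Q, s, eps)
        for (t in 1:1000) {                      # cap episode length for this sketch
            out  <- step.fn(s, a)
            s2   <- out$state
            done <- all(s2 == goal)
            a2   <- choose.action(Q, s2, eps)
            target <- out$reward + if (done) 0 else gamma * Q[s2[1], s2[2], a2]
            Q[s[1], s[2], a] <- Q[s[1], s[2], a] + alpha * (target - Q[s[1], s[2], a])
            if (done) break
            s <- s2
            a <- a2
        }
    }
    Q
}

set.seed(1)
Q <- sarsa()
round(apply(Q, c(1, 2), max), 2)   # greedy value per cell; wall cells keep their random init

With the bump penalty no longer dominating, the printed greedy values should increase along the accessible corridor toward G, which is the behaviour the answer describes as the target of SARSA.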

Votes: 0
The original page content is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/41141498