I have a custom environment in keras-rl with the following configuration in the constructor:
```python
import numpy
from gym import spaces

def __init__(self, data):
    # Declare the episode as the first episode
    self.episode = 1
    # Initialize data
    self.data = data
    # Declare low and high as vectors of -inf/+inf values
    self.low = numpy.array([-numpy.inf])
    self.high = numpy.array([+numpy.inf])
    self.observation_space = spaces.Box(self.low, self.high, dtype=numpy.float32)
    # Define the action space as 3 actions (I want them to be 0, 1 and 2)
    self.action_space = spaces.Discrete(3)
    self.currentObservation = 0
    self.limit = len(data)
    # Initialize the values to be returned by the environment
    self.reward = None
```

As you can see, my agent will perform 3 actions, and depending on the action, a different reward is calculated in the step() function below:
```python
def step(self, action):
    assert self.action_space.contains(action)
    # Initialize the reward
    self.reward = 0
    # Get the possible gain for the current observation
    self.possibleGain = self.data.iloc[self.currentObservation]['delta_next_day']
    # If action is 1, calculate the reward
    if action == 1:
        self.reward = self.possibleGain - self.operationCost
    # If action is 2, calculate the reward as negative
    elif action == 2:
        self.reward = (-self.possibleGain) - self.operationCost
    # If action is 0, no reward
    elif action == 0:
        self.reward = 0
    # Finish the episode
    self.done = True
    self.episode += 1
    self.currentObservation += 1
    if self.currentObservation >= self.limit:
        self.currentObservation = 0
    # Return the state, the reward, and whether the episode is done
    return self.getObservation(), self.reward, self.done, {}
```

The problem is that if I print the actions at each episode, they are 0, 2 and 4. I want them to be 0, 1 and 2. How can I force the agent to only recognize these three actions?
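One workaround I have considered (a hypothetical guard of my own, not something from this thread) is to clip whatever the agent emits into the valid `Discrete(3)` range at the top of step():

```python
import numpy as np

# Hypothetical guard (not part of the original environment): clip any
# out-of-range action into the valid Discrete(3) range {0, 1, 2}
# before the reward is computed.
def clamp_action(action, n_actions=3):
    return int(np.clip(action, 0, n_actions - 1))

print(clamp_action(4))  # out-of-range 4 collapses to 2
```

Note that clipping collapses every out-of-range value onto the nearest valid action, so it hides rather than fixes whatever is producing 0, 2 and 4 in the first place.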
Posted on 2020-04-12 07:52:39
I don't know why self.action_space = spaces.Discrete(3) gives you the actions 0, 2 and 4 — I could not reproduce your error with the code snippet you posted — so I suggest defining your actions as follows (note: the original answer used the `np.int` alias, which has since been removed from NumPy; `np.int64` is used here instead):

```python
self.action_space = gym.spaces.Box(low=np.array([1]), high=np.array([3]), dtype=np.int64)
```

This is what I get when I sample from that action space:
```python
actions = gym.spaces.Box(low=np.array([1]), high=np.array([3]), dtype=np.int64)
for i in range(10):
    print(actions.sample())
```

```
[1]
[3]
[2]
[2]
[3]
[3]
[1]
[1]
[2]
[3]
```

https://stackoverflow.com/questions/61058333
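If you adopt the Box space above, note that step() as written expects the scalar actions 0, 1 and 2, while the Box samples integer vectors in [1, 3]. A small adapter (an assumption on my part, not shown in the answer) can shift each sample into the expected range:

```python
import numpy as np

# Hypothetical adapter (not from the original answer): the Box space
# samples 1-element integer arrays in [1, 3], so unwrap the array and
# shift the value down by one to recover the actions 0, 1 and 2.
def to_discrete(box_sample):
    return int(box_sample[0]) - 1

print(to_discrete(np.array([3])))  # prints 2
```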