I'm trying to implement Dueling DQN, but with the NN architecture built this way it doesn't seem to learn:
X_input = Input(shape=(self.state_size,))
X = X_input
X = Dense(512, input_shape= (self.state_size,), activation="relu")(X_input)
X = Dense(260, activation="relu")(X)
X = Dense(100, activation="relu")(X)
state_value = Dense(1)(X)
state_value = Lambda(lambda v: v, output_shape=(self.action_size,))(state_value)
action_advantage = Dense(self.action_size)(X)
action_advantage = Lambda(lambda a: a[:, :] - K.mean(a[:, :], keepdims=True), output_shape=(self.action_size,))(action_advantage)
X = Add()([state_value, action_advantage])
model = Model(inputs = X_input, outputs = X)
model.compile(loss="mean_squared_error", optimizer=Adam(lr=self.learning_rate))
return model

I searched online and found some code (which works much better than mine), and the only difference is this line:
state_value = Lambda(lambda s: K.expand_dims(s[:, 0], -1), output_shape=(self.action_size,))(state_value)

Link to the code: https://github.com/pythonlessons/Reinforcement_Learning/blob/master/03_CartPole-reinforcement-learning_Dueling_DDQN/Cartpole_Double_DDQN.py#L31

I don't understand why mine doesn't learn, since it does run without errors. I also don't understand why he takes only the first value of each row of the tensor.
Posted on 2020-07-18 03:26:49
Expanding the dims of the state value ensures that, when the Add() happens, it is broadcast and added to every advantage value.
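The broadcasting this relies on can be seen in a small numpy sketch (the batch size and action count here are made up for illustration): a (batch, 1) state-value column added to a (batch, action_size) advantage matrix distributes V(s) across every action.

```python
import numpy as np

# Hypothetical batch of 2 states, 3 actions (illustration only).
state_value = np.array([[10.0], [20.0]])          # shape (2, 1)
advantage = np.array([[1.0, 2.0, 3.0],
                      [0.0, -1.0, 1.0]])          # shape (2, 3)

# The dueling aggregation: Q = V + (A - mean(A)).
# (2, 1) broadcasts against (2, 3), so V is added to every advantage.
q = state_value + (advantage - advantage.mean(axis=1, keepdims=True))
print(q.shape)  # (2, 3)
print(q)        # [[ 9. 10. 11.]
                #  [20. 19. 21.]]
```

The same broadcasting rules apply to the TensorFlow tensors inside the model.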
You could also write it this way: drop the Lambda functions and spell out the actual Q-value computation directly:

X = state_value + (action_advantage - tf.math.reduce_mean(action_advantage, axis=1, keepdims=True))

The result will be the same, but the code is arguably more readable.
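One detail worth noticing in this expression: it passes axis=1 to the mean, whereas your Lambda called K.mean(a[:, :], keepdims=True) with no axis, which reduces over the batch dimension as well, so every sample's advantages get centered by one global scalar. A small numpy sketch of the difference (toy values, not from the model):

```python
import numpy as np

a = np.array([[1.0, 3.0],
              [5.0, 7.0]])  # batch of 2 samples, 2 actions

# axis omitted: one scalar mean over the entire batch (4.0 here),
# subtracted everywhere -- samples leak into each other's centering.
centered_global = a - a.mean(keepdims=True)

# axis=1: per-sample mean over actions (2.0 and 6.0),
# so each row is centered independently.
centered_per_sample = a - a.mean(axis=1, keepdims=True)

print(centered_global)      # [[-3. -1.]
                            #  [ 1.  3.]]
print(centered_per_sample)  # [[-1.  1.]
                            #  [-1.  1.]]
```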
So, in total, your code would look like this:
X_input = Input(shape=(self.state_size,))
X = X_input
X = Dense(512, input_shape= (self.state_size,), activation="relu")(X_input)
X = Dense(260, activation="relu")(X)
X = Dense(100, activation="relu")(X)
state_value = Dense(1)(X)
action_advantage = Dense(self.action_size)(X)
X = (state_value + (action_advantage - tf.math.reduce_mean(action_advantage, axis=1, keepdims=True)))
model = Model(inputs = X_input, outputs = X)
model.compile(loss="mean_squared_error", optimizer=Adam(lr=self.learning_rate))
return model

Source: https://stackoverflow.com/questions/62336594