I am trying to use tf.train.exponential_decay with a predefined estimator, and for some reason it is proving very difficult. Am I missing something?
Here is my old code with a constant learning_rate:
classifier = tf.estimator.DNNRegressor(
    feature_columns=f_columns,
    model_dir='./TF',
    hidden_units=[2, 2],
    optimizer=tf.train.ProximalAdagradOptimizer(
        learning_rate=0.50,
        l1_regularization_strength=0.001,
    ))
Now I tried to add the following:
starter_learning_rate = 0.50
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           10000, 0.96, staircase=True)
But what now?
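For reference, the schedule that exponential_decay call defines can be sketched in plain Python. The function below is only an illustration of the formula (initial_rate * decay_rate ** (step / decay_steps), with the exponent floored when staircase=True), not TensorFlow's implementation:

```python
import math

def exponential_decay(starter_lr, global_step, decay_steps, decay_rate, staircase=True):
    # Sketch of the value tf.train.exponential_decay yields at a given step.
    exponent = global_step / decay_steps
    if staircase:
        exponent = math.floor(exponent)  # hold the rate constant between decay boundaries
    return starter_lr * decay_rate ** exponent

# With the question's settings: multiply by 0.96 every 10000 steps.
exponential_decay(0.50, 0, 10000, 0.96)      # 0.5
exponential_decay(0.50, 10000, 10000, 0.96)  # 0.48 (= 0.50 * 0.96)
```

The key point is that global_step is an input: the schedule is a tensor whose value changes as training advances, which is why it must live in the same graph as the model.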
"ValueError: Tensor("ExponentialDecay:0",shape=(),dtype=float32)必须来自与张量相同的图(“dnn/hiddenlayer_0/ same /part 0:0”,shape=(62,2),dtype=float32_ref)。
Your help is greatly appreciated. I am using TF 1.6.
Answered on 2020-03-06 03:06:28
You should build the optimizer inside a custom model_fn, under mode == tf.estimator.ModeKeys.TRAIN, so that the decayed learning rate is created in the same graph as the model.
Here is sample code:
def _model_fn(features, labels, mode, config):
    # xxxxxxxxx
    # xxxxxxxxx
    assert mode == tf.estimator.ModeKeys.TRAIN
    global_step = tf.train.get_global_step()
    decay_learning_rate = tf.train.exponential_decay(learning_rate, global_step,
                                                     100, 0.98, staircase=True)
    optimizer = tf.train.AdagradOptimizer(decay_learning_rate)
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = optimizer.minimize(loss, global_step=global_step)
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op,
                                      training_chief_hooks=chief_hooks,
                                      eval_metric_ops=metrics)
https://stackoverflow.com/questions/49224141
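The reason minimize(loss, global_step=global_step) matters: each training step increments global_step, and the schedule reads it to lower the rate. A toy pure-Python loop (hypothetical names, not TensorFlow) mimicking the answer's settings illustrates the interplay:

```python
import math

def decayed_lr(step, base_lr=0.50, decay_steps=100, decay_rate=0.98):
    # Staircase schedule matching the answer's exponential_decay call.
    return base_lr * decay_rate ** math.floor(step / decay_steps)

global_step = 0
w = 5.0  # toy parameter; loss = w**2, so the gradient is 2*w

for _ in range(300):
    grad = 2.0 * w
    # minimize() both applies the update and increments global_step;
    # the 0.01 factor just keeps this toy gradient step stable.
    w -= 0.01 * decayed_lr(global_step) * grad
    global_step += 1

# decayed_lr(0) is 0.50; by step 200 it has dropped to 0.50 * 0.98**2
```

If global_step were never incremented (e.g. minimize() called without the global_step argument), decayed_lr would read 0 forever and the rate would never decay.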