I have to use learning rate warmup: training a VGG-19 CNN on CIFAR-10 starts with warmup, ramping the learning rate from 0.00001 up to 0.1 over the first 10000 iterations (roughly 13 epochs). For the rest of training the learning rate is 0.01, with learning rate decay reducing it by a factor of 10 at epochs 80 and 120. The model must be trained for 144 epochs in total.
I am using Python 3 and TensorFlow 2. The training set has 50000 examples and the batch size is 64, so one epoch is 50000 / 64 ≈ 781 training iterations. How do I use both learning rate warmup and learning rate decay in code?
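For concreteness, the target schedule described above can be written out as plain arithmetic (a sketch only; the epoch boundaries are converted to iterations with the ~781 steps/epoch figure, giving 80 × 781 = 62480 and 120 × 781 = 93720):

```python
def target_lr(step, steps_per_epoch=781):
    """Sketch of the schedule described above: linear warmup over the first
    10000 steps from 1e-5 to 0.1, then 0.01, dropped 10x at epochs 80 and 120."""
    warmup_steps = 10000
    if step < warmup_steps:
        # Linear interpolation between the starting and peak warmup rates.
        return 1e-5 + (0.1 - 1e-5) * step / warmup_steps
    if step < 80 * steps_per_epoch:
        return 0.01
    if step < 120 * steps_per_epoch:
        return 0.001
    return 0.0001

print(target_lr(0))       # 1e-05
print(target_lr(10000))   # 0.01
```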
Currently, I use learning rate decay like this:

```python
import tensorflow as tf
from tensorflow import keras

boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate_fn = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries, values)

# The optimizer was not shown in the original snippet; any optimizer works.
optimizer = keras.optimizers.SGD(learning_rate=learning_rate_fn)

print("\nCurrent step value: {0}, LR: {1:.6f}\n".format(
    optimizer.iterations.numpy(),
    optimizer.learning_rate(optimizer.iterations)))
```

However, I don't know how to use learning rate warmup together with learning rate decay.
Any help?
Posted on 2020-12-08 20:21:19
How about using the implementation from the Transformers library?
```python
from typing import Callable

import tensorflow as tf


class WarmUp(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(
        self,
        initial_learning_rate: float,
        decay_schedule_fn: Callable,
        warmup_steps: int,
        power: float = 1.0,
        name: str = None,
    ):
        super().__init__()
        self.initial_learning_rate = initial_learning_rate
        self.warmup_steps = warmup_steps
        self.power = power
        self.decay_schedule_fn = decay_schedule_fn
        self.name = name

    def __call__(self, step):
        with tf.name_scope(self.name or "WarmUp") as name:
            # Implements polynomial warmup. i.e., if global_step < warmup_steps, the
            # learning rate will be `global_step/num_warmup_steps * init_lr`.
            global_step_float = tf.cast(step, tf.float32)
            warmup_steps_float = tf.cast(self.warmup_steps, tf.float32)
            warmup_percent_done = global_step_float / warmup_steps_float
            warmup_learning_rate = self.initial_learning_rate * tf.math.pow(
                warmup_percent_done, self.power)
            return tf.cond(
                global_step_float < warmup_steps_float,
                lambda: warmup_learning_rate,
                lambda: self.decay_schedule_fn(step - self.warmup_steps),
                name=name,
            )

    def get_config(self):
        return {
            "initial_learning_rate": self.initial_learning_rate,
            "decay_schedule_fn": self.decay_schedule_fn,
            "warmup_steps": self.warmup_steps,
            "power": self.power,
            "name": self.name,
        }
```

Posted on 2020-08-04 19:43:17
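A minimal sketch of wiring a warmup wrapper of this kind to the question's schedule (a condensed copy of the class is included so the snippet runs standalone; the peak is taken as 0.01 here, since the wrapper ramps up to `initial_learning_rate` and then passes the step count, minus the warmup steps, to the decay schedule — note that means the decay boundaries count from the end of warmup):

```python
import tensorflow as tf


class WarmUp(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Condensed copy of the WarmUp class above, so this snippet is standalone."""

    def __init__(self, initial_learning_rate, decay_schedule_fn,
                 warmup_steps, power=1.0):
        super().__init__()
        self.initial_learning_rate = initial_learning_rate
        self.decay_schedule_fn = decay_schedule_fn
        self.warmup_steps = warmup_steps
        self.power = power

    def __call__(self, step):
        step_f = tf.cast(step, tf.float32)
        warmup_f = tf.cast(self.warmup_steps, tf.float32)
        warmup_lr = self.initial_learning_rate * tf.math.pow(
            step_f / warmup_f, self.power)
        return tf.cond(step_f < warmup_f,
                       lambda: warmup_lr,
                       lambda: self.decay_schedule_fn(step - self.warmup_steps))


steps_per_epoch = 50000 // 64  # ~781 iterations per epoch, as in the question
# Boundaries are measured from the end of warmup, since the wrapper subtracts
# warmup_steps before calling the decay schedule.
boundaries = [80 * steps_per_epoch, 120 * steps_per_epoch]
decay = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries, [0.01, 0.001, 0.0001])
schedule = WarmUp(initial_learning_rate=0.01, decay_schedule_fn=decay,
                  warmup_steps=10000)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)
```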
You can pass a learning rate schedule to any optimizer by setting it as the `learning_rate` argument. For example:
```python
from tensorflow.keras.optimizers import RMSprop, schedules

boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
lr_schedule = schedules.PiecewiseConstantDecay(boundaries, values)
optimizer = RMSprop(learning_rate=lr_schedule)
```

https://stackoverflow.com/questions/63213252
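As a quick standalone check of what such a schedule returns around the decay boundaries (same numbers as above; `PiecewiseConstantDecay` is inclusive on the left boundary, so the drop happens on the step after each boundary):

```python
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop, schedules

boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
lr_schedule = schedules.PiecewiseConstantDecay(boundaries, values)
optimizer = RMSprop(learning_rate=lr_schedule)

# Evaluating the schedule directly shows the piecewise-constant drops.
for step in [0, 100000, 100001, 110001]:
    print(step, float(lr_schedule(step)))
```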