tf.compat.v1.train.exponential_decay: global_step = 0

Asked by a Stack Overflow user on 2020-08-20 15:51:43
1 answer · 905 views · 0 following · 0 votes

To understand how to implement both an ANN with exponential learning-rate decay and one with a constant learning rate, I looked it up here: https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/exponential_decay

I have a few questions:

```python
...
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = tf.compat.v1.train.exponential_decay(starter_learning_rate,
                                                     global_step,
                                                     100000, 0.96, staircase=True)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
    tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
    .minimize(...my loss..., global_step=global_step)
)
```

When global_step is set to a variable with value 0, doesn't that mean there will be no decay, since

```
decayed_learning_rate = learning_rate *
                        decay_rate ^ (global_step / decay_steps)
```

So with global_step = 0 it follows that decayed_learning_rate = learning_rate. Is that correct, or am I making a mistake here?
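The arithmetic can be checked without TensorFlow. A minimal pure-Python sketch of the documented formula (the function name is illustrative, not part of any API):

```python
# Pure-Python check of the documented decay formula:
# decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
def decayed_lr(learning_rate, decay_rate, global_step, decay_steps, staircase=False):
    # staircase=True uses integer division, so the exponent changes in whole steps
    exponent = global_step // decay_steps if staircase else global_step / decay_steps
    return learning_rate * decay_rate ** exponent

starter_learning_rate = 0.1
# At global_step = 0 the exponent is 0 and decay_rate ** 0 == 1, so the rate
# is unchanged -- but only for as long as global_step actually stays at 0.
print(decayed_lr(starter_learning_rate, 0.96, 0, 100000))       # 0.1
print(decayed_lr(starter_learning_rate, 0.96, 100000, 100000))  # one full decay period
```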

Also, I am a bit confused about what exactly the 100000 steps refer to. What exactly is a step? Is it each input being fed fully through the network and backpropagated?


1 Answer

Answered by a Stack Overflow user on 2020-08-21 01:39:39 (accepted)

I hope this example clears up your doubts.

```python
import tensorflow as tf

# model, loss_fn and train_dataset are assumed to be defined elsewhere;
# each batch here contains 100 examples.
epochs = 10
global_step = tf.Variable(0, trainable=False, dtype=tf.int32)
starter_learning_rate = 1.0

for epoch in range(epochs):
    print("Starting Epoch {}/{}".format(epoch + 1, epochs))
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):

        with tf.GradientTape() as tape:
            logits = model(x_batch_train, training=True)
            loss_value = loss_fn(y_batch_train, logits)

        grads = tape.gradient(loss_value, model.trainable_weights)

        # In eager mode this returns a no-argument callable that computes
        # the decayed rate from the current value of global_step.
        learning_rate = tf.compat.v1.train.exponential_decay(
            starter_learning_rate,
            global_step,
            100000,
            0.96
        )

        # SGD is used here as the optimizer; any Keras optimizer class works.
        tf.keras.optimizers.SGD(learning_rate=learning_rate).apply_gradients(
            zip(grads, model.trainable_weights))
        print("Global Step: {}  Learning Rate: {}  Examples Processed: {}".format(
            global_step.numpy(), learning_rate(), (step + 1) * 100))
        global_step.assign_add(1)
```

Output:

```
Starting Epoch 1/10
Global Step: 0  Learning Rate: 1.0  Examples Processed: 100
Global Step: 1  Learning Rate: 0.9999996423721313  Examples Processed: 200
Global Step: 2  Learning Rate: 0.9999992251396179  Examples Processed: 300
Global Step: 3  Learning Rate: 0.9999988079071045  Examples Processed: 400
Global Step: 4  Learning Rate: 0.9999983906745911  Examples Processed: 500
Global Step: 5  Learning Rate: 0.9999979734420776  Examples Processed: 600
Global Step: 6  Learning Rate: 0.9999975562095642  Examples Processed: 700
Global Step: 7  Learning Rate: 0.9999971389770508  Examples Processed: 800
Global Step: 8  Learning Rate: 0.9999967217445374  Examples Processed: 900
Global Step: 9  Learning Rate: 0.9999963045120239  Examples Processed: 1000
Global Step: 10  Learning Rate: 0.9999958872795105  Examples Processed: 1100
Global Step: 11  Learning Rate: 0.9999954700469971  Examples Processed: 1200
Starting Epoch 2/10
Global Step: 12  Learning Rate: 0.9999950528144836  Examples Processed: 100
Global Step: 13  Learning Rate: 0.9999946355819702  Examples Processed: 200
Global Step: 14  Learning Rate: 0.9999942183494568  Examples Processed: 300
Global Step: 15  Learning Rate: 0.9999938607215881  Examples Processed: 400
Global Step: 16  Learning Rate: 0.9999934434890747  Examples Processed: 500
Global Step: 17  Learning Rate: 0.999993085861206  Examples Processed: 600
Global Step: 18  Learning Rate: 0.9999926686286926  Examples Processed: 700
Global Step: 19  Learning Rate: 0.9999922513961792  Examples Processed: 800
Global Step: 20  Learning Rate: 0.9999918341636658  Examples Processed: 900
Global Step: 21  Learning Rate: 0.9999914169311523  Examples Processed: 1000
Global Step: 22  Learning Rate: 0.9999909996986389  Examples Processed: 1100
Global Step: 23  Learning Rate: 0.9999905824661255  Examples Processed: 1200
```
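The tiny per-step decrease in this output is exactly decay_rate ** (1 / decay_steps) per step, since staircase is off here. A quick pure-Python check (float64, so the digits differ slightly from TensorFlow's float32 output):

```python
# Without staircase, each step multiplies the rate by 0.96 ** (1 / 100000).
starter_learning_rate = 1.0
for step in range(3):
    lr = starter_learning_rate * 0.96 ** (step / 100000)
    print(step, lr)
# The step-1 value, ~0.9999995918, matches the float32 0.9999996423721313
# printed above to roughly six decimal places.
```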

Now, if you keep your global step at 0, i.e. remove the increment operation from the code above, the output is:

```
Starting Epoch 1/10
Global Step: 0  Learning Rate: 1.0  Examples Processed: 100
Global Step: 0  Learning Rate: 1.0  Examples Processed: 200
Global Step: 0  Learning Rate: 1.0  Examples Processed: 300
Global Step: 0  Learning Rate: 1.0  Examples Processed: 400
Global Step: 0  Learning Rate: 1.0  Examples Processed: 500
Global Step: 0  Learning Rate: 1.0  Examples Processed: 600
Global Step: 0  Learning Rate: 1.0  Examples Processed: 700
Global Step: 0  Learning Rate: 1.0  Examples Processed: 800
Global Step: 0  Learning Rate: 1.0  Examples Processed: 900
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1000
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1100
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1200
Starting Epoch 2/10
Global Step: 0  Learning Rate: 1.0  Examples Processed: 100
Global Step: 0  Learning Rate: 1.0  Examples Processed: 200
Global Step: 0  Learning Rate: 1.0  Examples Processed: 300
Global Step: 0  Learning Rate: 1.0  Examples Processed: 400
Global Step: 0  Learning Rate: 1.0  Examples Processed: 500
Global Step: 0  Learning Rate: 1.0  Examples Processed: 600
Global Step: 0  Learning Rate: 1.0  Examples Processed: 700
Global Step: 0  Learning Rate: 1.0  Examples Processed: 800
Global Step: 0  Learning Rate: 1.0  Examples Processed: 900
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1000
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1100
Global Step: 0  Learning Rate: 1.0  Examples Processed: 1200
```

Suggestion: instead of tf.compat.v1.train.exponential_decay, use tf.keras.optimizers.schedules.ExponentialDecay. Here is a minimal example.

```python
import tensorflow as tf

def create_model1():
    initial_learning_rate = 0.01
    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate,
        decay_steps=100000,
        decay_rate=0.96,
        staircase=True)
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(5,)))
    model.add(tf.keras.layers.Dense(units=6,
                                    activation='relu',
                                    name='d1'))
    model.add(tf.keras.layers.Dense(units=2, activation='softmax', name='O2'))

    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    return model


# x and y are assumed to be defined elsewhere.
model = create_model1()
model.fit(x, y, batch_size=100, epochs=100)
```
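With staircase=True the exponent uses integer division, so the rate drops in discrete jumps every decay_steps steps rather than continuously. A pure-Python sketch of that behaviour (the schedule arithmetic only, not the Keras class itself):

```python
def staircase_decay(initial_lr, decay_rate, step, decay_steps):
    # Floor division: the rate only changes when a full decay period completes.
    return initial_lr * decay_rate ** (step // decay_steps)

for step in [0, 99999, 100000, 250000]:
    print(step, staircase_decay(0.01, 0.96, step, 100000))
# The rate stays at 0.01 through step 99999, then jumps to 0.0096 at step 100000.
```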

You can also implement decay with a callback such as tf.keras.callbacks.LearningRateScheduler.
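For completeness, LearningRateScheduler takes a plain Python function mapping (epoch, current_lr) to a new rate, called once per epoch. A minimal sketch under that assumption (the schedule function itself is pure Python; the Keras wiring is shown in comments since model and data are not defined here):

```python
def exponential_schedule(epoch, lr, decay_rate=0.96):
    # LearningRateScheduler calls schedule(epoch, current_lr) at each epoch start;
    # here we simply multiply the current rate by decay_rate each time.
    return lr * decay_rate

# Keras usage would look like this (model, x, y assumed to exist):
# callback = tf.keras.callbacks.LearningRateScheduler(exponential_schedule)
# model.fit(x, y, epochs=10, callbacks=[callback])

# Simulating three epochs by hand:
lr = 0.01
for epoch in range(3):
    lr = exponential_schedule(epoch, lr)
print(lr)  # 0.01 * 0.96 ** 3
```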

Votes: 0
Original content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link: https://stackoverflow.com/questions/63508796
