I have a set of fairly complicated models that I am training, and I am looking for a way to save and load the model optimizer states. The "trainer models" consist of different combinations of several other "weight models", some of which share weights and some of which have frozen weights depending on the trainer. It is a bit too complicated an example to share, but in short, I am not able to use model.save('model_file.h5') and keras.models.load_model('model_file.h5') when stopping and starting my training.
Using model.load_weights('weight_file.h5') works fine for testing my model if the training has finished, but if I attempt to continue training the model using this method, the loss does not even come close to returning to its last location. I have read that this is because the optimizer state is not saved using this method, which makes sense. However, I need a way to save and load the states of my trainer models' optimizers. It seems as though Keras once had model.optimizer.get_state() and model.optimizer.set_state() methods that would accomplish what I am after, but that no longer seems to be the case (at least for the Adam optimizer). Are there any other solutions with the current Keras?
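For context, here is a minimal sketch of the resume pattern described above; the model-building helper and file names are hypothetical placeholders:
# Hypothetical illustration of the failure mode: the layer weights survive
# the round trip, but the optimizer state does not.
model.save_weights('weight_file.h5')        # saves layer weights only

# ... later, in a fresh session ...
model = build_trainer_model()               # hypothetical model-building helper
model.compile(optimizer='adam', loss='binary_crossentropy')
model.load_weights('weight_file.h5')        # weights restored correctly
model.fit(X, y, epochs=5)                   # Adam's internal state starts from scratch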
Posted on 2018-03-27 04:29:56
You can extract the important lines from the load_model and save_model functions.
For saving the optimizer state (in save_model):
# Save optimizer weights.
symbolic_weights = getattr(model.optimizer, 'weights')
if symbolic_weights:
    optimizer_weights_group = f.create_group('optimizer_weights')
    weight_values = K.batch_get_value(symbolic_weights)

For loading the optimizer state (in load_model):
# Set optimizer weights.
if 'optimizer_weights' in f:
    # Build train function (to get weight updates).
    if isinstance(model, Sequential):
        model.model._make_train_function()
    else:
        model._make_train_function()
    # ...
    try:
        model.optimizer.set_weights(optimizer_weight_values)

Combining the lines above, here is an example:
import numpy as np
import pickle
from keras.layers import Input, Dense
from keras.models import Model
from keras import backend as K

X, y = np.random.rand(100, 50), np.random.randint(2, size=100)
x = Input((50,))
out = Dense(1, activation='sigmoid')(x)
model = Model(x, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=5)
Epoch 1/5
100/100 [==============================] - 0s 4ms/step - loss: 0.7716
Epoch 2/5
100/100 [==============================] - 0s 64us/step - loss: 0.7678
Epoch 3/5
100/100 [==============================] - 0s 82us/step - loss: 0.7665
Epoch 4/5
100/100 [==============================] - 0s 56us/step - loss: 0.7647
Epoch 5/5
100/100 [==============================] - 0s 76us/step - loss: 0.7638

model.save_weights('weights.h5')
symbolic_weights = getattr(model.optimizer, 'weights')
weight_values = K.batch_get_value(symbolic_weights)
with open('optimizer.pkl', 'wb') as f:
    pickle.dump(weight_values, f)

x = Input((50,))
out = Dense(1, activation='sigmoid')(x)
model = Model(x, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.load_weights('weights.h5')
model._make_train_function()
with open('optimizer.pkl', 'rb') as f:
    weight_values = pickle.load(f)
model.optimizer.set_weights(weight_values)

model.fit(X, y, epochs=5)
Epoch 1/5
100/100 [==============================] - 0s 674us/step - loss: 0.7629
Epoch 2/5
100/100 [==============================] - 0s 49us/step - loss: 0.7617
Epoch 3/5
100/100 [==============================] - 0s 49us/step - loss: 0.7611
Epoch 4/5
100/100 [==============================] - 0s 55us/step - loss: 0.7601
Epoch 5/5
100/100 [==============================] - 0s 49us/step - loss: 0.7594

Note that the loss picks up where the first run left off (0.7638 at the end of the first run, 0.7629 at the start of the second), confirming that the optimizer state was restored.

Posted on 2020-07-25 13:57:28
For those who are not using model.compile and instead perform automatic differentiation to apply gradients manually with optimizer.apply_gradients, I think I have a solution.
First, save the optimizer weights: np.save(path, optimizer.get_weights())
Then, when you are ready to reload the optimizer, show the newly instantiated optimizer the size of the weights it will update by calling optimizer.apply_gradients on a list of tensors the size of the variables for which you calculate gradients. It is extremely important to set the weights of the model after you set the weights of the optimizer, because momentum-based optimizers like Adam will update the weights of the model even if we give them gradients that are zero.
import tensorflow as tf
import numpy as np
model = ...  # instantiate model (functional or subclass of tf.keras.Model)
# Get saved weights
opt_weights = np.load('/path/to/saved/opt/weights.npy', allow_pickle=True)
grad_vars = model.trainable_weights
# This need not be model.trainable_weights; it must be a correctly-ordered list of
# grad_vars corresponding to how you usually call the optimizer.
optimizer = tf.keras.optimizers.Adam(lrate)  # lrate: the learning rate you trained with
zero_grads = [tf.zeros_like(w) for w in grad_vars]
# Apply the dummy zero gradients (with Adam, even these are not a no-op)
optimizer.apply_gradients(zip(zero_grads, grad_vars))
# Set the weights of the optimizer
optimizer.set_weights(opt_weights)
# NOW set the trainable weights of the model
model_weights = np.load('/path/to/saved/model/weights.npy', allow_pickle=True)
model.set_weights(model_weights)

Note that if we attempt to set the weights before calling apply_gradients for the first time, an error is thrown complaining that the optimizer expects a weight list of length zero.
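For completeness, here is a minimal sketch of the kind of manual training loop this answer targets, continuing from the snippet above; the loss function and batch variables are hypothetical placeholders:
loss_fn = tf.keras.losses.BinaryCrossentropy()  # hypothetical loss for illustration

@tf.function
def train_step(x_batch, y_batch):
    # Standard automatic-differentiation step: compute gradients with a tape
    # and apply them with the restored optimizer.
    with tf.GradientTape() as tape:
        preds = model(x_batch, training=True)
        loss = loss_fn(y_batch, preds)
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return loss
Because the optimizer weights were restored before the model weights above, the first call to train_step continues with Adam's moment estimates intact.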
Posted on 2020-11-03 21:47:24
Upgrading from Alex Trevithick's answer, it is possible to avoid re-calling model.set_weights by simply saving the state of the variables before applying the gradients and then reloading them. This is useful when loading the model from an h5 file, and it looks cleaner (imo).
The save/load functions are as follows (thanks again to Alex):
import os
import numpy as np
import tensorflow as tf

def save_optimizer_state(optimizer, save_path, save_name):
    '''
    Save keras.optimizers object state.
    Arguments:
    optimizer --- Optimizer object.
    save_path --- Path to save location.
    save_name --- Name of the .npy file to be created.
    '''
    # Create the folder if it does not exist
    if not os.path.exists(save_path):
        os.makedirs(save_path)
    # Save weights
    np.save(os.path.join(save_path, save_name), optimizer.get_weights())
    return
def load_optimizer_state(optimizer, load_path, load_name, model_train_vars):
    '''
    Loads keras.optimizers object state.
    Arguments:
    optimizer --- Optimizer object to be loaded.
    load_path --- Path to save location.
    load_name --- Name of the .npy file to be read.
    model_train_vars --- List of model variables (obtained using Model.trainable_variables)
    '''
    # Load optimizer weights
    opt_weights = np.load(os.path.join(load_path, load_name) + '.npy', allow_pickle=True)
    # Dummy zero gradients
    zero_grads = [tf.zeros_like(w) for w in model_train_vars]
    # Save the current state of the variables
    saved_vars = [tf.identity(w) for w in model_train_vars]
    # Apply the dummy zero gradients (with Adam, even these are not a no-op)
    optimizer.apply_gradients(zip(zero_grads, model_train_vars))
    # Reload the variables, undoing whatever apply_gradients changed
    [x.assign(y) for x, y in zip(model_train_vars, saved_vars)]
    # Set the weights of the optimizer
    optimizer.set_weights(opt_weights)
    return
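A minimal usage sketch of the two helpers above; the directory name, file name, and training data are hypothetical placeholders:
# After training: checkpoint both the layer weights and the optimizer state.
model.save_weights('checkpoints/model_weights.h5')
save_optimizer_state(model.optimizer, 'checkpoints', 'opt_state')

# In a fresh session: rebuild and compile the same architecture, then restore.
model.load_weights('checkpoints/model_weights.h5')
load_optimizer_state(model.optimizer, 'checkpoints', 'opt_state',
                     model.trainable_variables)
model.fit(X, y, epochs=5)  # training resumes with the restored optimizer state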
https://stackoverflow.com/questions/49503748