
How to save/load part of a model for fine-tuning/transfer learning in TF 2.1?

Stack Overflow user
Asked on 2020-03-25 18:58:44 · 1 answer · 659 views · 0 votes

I want to build my own base model and train it on a large dataset, saving the base model once training finishes. I have another custom model, and I want it to load the weights of the first two layers from the base model. How should I do this in TensorFlow 2.1.0? Thanks.

Sample code:

Code language: python
import os
os.environ["CUDA_VISIBLE_DEVICES"]="" 
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class BaseModel():
    def __init__(self):
        inputs = keras.Input(shape=(32, 32, 3))
        x = inputs
        x = layers.Conv2D(32, 3, padding='same', activation=tf.nn.relu)(x)
        x = layers.MaxPool2D()(x)
        x = layers.Conv2D(64, 3, padding='same', activation=tf.nn.relu)(x)

        x = layers.Flatten()(x)

        x = layers.Dense(500, activation=tf.nn.relu)(x)

        outputs = layers.Dense(1000, activation=tf.nn.softmax)(x)

        self.model = keras.Model(inputs=inputs, outputs=outputs)

    def __call__(self, inputs):
        return self.model(inputs)

bm = BaseModel()  # the model for pretraining
bm.model.save_weights('base_model') # save the pretrained model


class MyModel():
    def __init__(self):
        inputs = keras.Input(shape=(32, 32, 3))
        x = inputs
        x = layers.Conv2D(32, 3, padding='same', activation=tf.nn.relu)(x)
        x = layers.MaxPool2D()(x)
        x = layers.Conv2D(64, 3, padding='same', activation=tf.nn.relu)(x)

        x = layers.Conv2D(128, 3, padding='same', activation=tf.nn.relu)(x)

        x = layers.Flatten()(x)

        x = layers.Dense(1000, activation=tf.nn.relu)(x)

        outputs = layers.Dense(10, activation=tf.nn.softmax)(x)

        self.model = keras.Model(inputs=inputs, outputs=outputs)

    def __call__(self, inputs):
        return self.model(inputs)


mm = MyModel()  # the model for my customized applications
mm.model.load_weights('base_model')  # how can I load only the weights of the first two conv layers here?

# further fine-tuning or transfer learning 
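
One way to express this with the functional models above (a sketch of one possible approach, not part of the original post) is to give the two shared convolution layers identical explicit names in both classes, save the base model's weights in HDF5 format, and load them with by_name=True, which copies weights only into layers whose names match. The names conv_1 and conv_2 below are illustrative assumptions.

Code language: python
# Assumes both BaseModel and MyModel create their first two conv layers with
# explicit names, e.g. layers.Conv2D(32, 3, ..., name='conv_1') and
# layers.Conv2D(64, 3, ..., name='conv_2'); the remaining layers keep their
# auto-generated (and therefore non-matching) names and are skipped.

bm = BaseModel()
bm.model.save_weights('base_model.h5')   # HDF5 format; by_name loading requires it

mm = MyModel()
mm.model.load_weights('base_model.h5', by_name=True)  # only 'conv_1' and 'conv_2' are loaded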

1 Answer

Stack Overflow user

Answered on 2020-05-04 06:39:25

I have used the custom model from the TF website to demonstrate the idea. Working with a subclassed model is hardly different from working with Keras Sequential or functional models; the demonstration below uses a subclassed model.

Code language: python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

Base model

Code language: python
class ThreeLayerMLP(keras.Model):

  def __init__(self, name=None):
    super(ThreeLayerMLP, self).__init__(name=name)
    self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
    self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
    self.pred_layer = layers.Dense(10, name='predictions')

  def call(self, inputs):
    x = self.dense_1(inputs)
    x = self.dense_2(x)
    return self.pred_layer(x)

def get_model():
  return ThreeLayerMLP(name='3_layer_mlp')

base_model = get_model()

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

base_model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=keras.optimizers.RMSprop())
history = base_model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=1)
#saving weights
base_model.save_weights('./base_model_weights', save_format='tf')
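
As a quick check (my addition, not from the original answer), the contents of the TF-format checkpoint written above can be listed with tf.train.list_variables; the names follow the object-based checkpoint layout.

Code language: python
# Print the variables stored in the checkpoint written by save_weights above,
# e.g. entries like 'dense_1/kernel/.ATTRIBUTES/VARIABLE_VALUE' with their shapes.
for name, shape in tf.train.list_variables('./base_model_weights'):
    print(name, shape)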

Custom model

Code language: python
class MyCustomModel(keras.Model):

  def __init__(self, name=None):
    super(MyCustomModel, self).__init__(name=name)
    self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
    self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
    self.pred_layer = layers.Dense(10, name='predictions')

  def call(self, inputs):
    x = self.dense_1(inputs)
    x = self.dense_2(x)
    return self.pred_layer(x)

def get_custom_model():
  return MyCustomModel(name='my_custom_model')

my_custom_model = get_custom_model()

my_custom_model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  optimizer=keras.optimizers.RMSprop())

The following step is important when the saved model contains any custom_objects or custom layers: a subclassed model only creates its variables once it has seen data, so one batch is run through the new model before its weights can be set.

Code language: python
# This initializes the variables used by the optimizers,
# as well as any stateful metric variables
my_custom_model.train_on_batch(x_train[:1], y_train[:1])

# Load the state of the old model (to load weights for all layers)
# my_custom_model.load_weights('path_to_my_weights')

layer_dict = dict([(layer.name, layer) for layer in base_model.layers])
print(layer_dict)

# my_custom_model.trainable = True
# loading the weights from base_model
for layer in my_custom_model.layers:
  layer_name = layer.name
  #print(layer.name)
  layer.set_weights(layer_dict[layer_name].get_weights())
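
As a possible follow-up for fine-tuning (a sketch, not part of the original answer): verify the copy, freeze the transferred layers, and recompile before training on the new task.

Code language: python
import numpy as np

# Confirm that every layer now holds the same weights as the base model.
for layer in my_custom_model.layers:
    for src, dst in zip(layer_dict[layer.name].get_weights(), layer.get_weights()):
        assert np.array_equal(src, dst)

# Freeze everything except the prediction head, then recompile so the
# trainable flags take effect, and fine-tune on the target data.
for layer in my_custom_model.layers:
    if layer.name != 'predictions':
        layer.trainable = False

my_custom_model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.RMSprop())
my_custom_model.fit(x_train, y_train, batch_size=64, epochs=1)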
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/60847420
