
Keras VGG16 fine-tuning

Stack Overflow user
Asked on 2017-04-13 07:48:37
Answers: 3 · Views: 16.7K · Followers: 0 · Votes: 8

There is an example of fine-tuning VGG16 on the Keras blog, but I cannot reproduce it.

More precisely, here is the code used to instantiate VGG16 without the top layers and to freeze all blocks except the topmost one:

Code language: python
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.data_utils import get_file

WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
weights_path = get_file('vgg16_weights.h5', WEIGHTS_PATH_NO_TOP)

model = Sequential()
model.add(InputLayer(input_shape=(150, 150, 3)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2'))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block5_maxpool'))

model.load_weights(weights_path)

for layer in model.layers:
    layer.trainable = False

for layer in model.layers[-4:]:
    layer.trainable = True
    print("Layer '%s' is trainable" % layer.name)  

Next, a top model with a single hidden layer is created:

Code language: python
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.load_weights('top_model.h5')

Note that it has previously been trained on bottleneck features, as described in the blog post. Next, this top model is added to the base model and the result is compiled:

Code language: python
model.add(top_model)
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

Finally, the model is fitted to the cats/dogs data:

Code language: python
batch_size = 16

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

train_gen = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

valid_gen = test_datagen.flow_from_directory(
    VALID_DIR,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_gen,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=nb_epoch,
    validation_data=valid_gen,
    validation_steps=nb_valid_samples // batch_size)

However, when I try to fit the model, I get the following error:

ValueError: Error when checking model target: expected block5_maxpool to have 4 dimensions, but got array with shape (16, 1)

So something seems to be wrong with the last pooling layer of the base model. Or maybe I have done something wrong in trying to connect the base model with the top one.

Has anyone run into a similar issue? Or is there a better way to build such a "cascaded" model? I am using keras==2.0.0 with the theano backend.

Note: I tried the examples from the gist and from the applications.VGG16 utility, but had problems concatenating the models, and I am not too familiar with the Keras functional API. So the solution I provide here is the most "successful" one so far, i.e. it only fails at the fitting stage.

Update #1

OK, here is a small explanation of what I am trying to do. First of all, I generate bottleneck features from VGG16 as follows:

Code language: python
import numpy as np
from keras import applications

def save_bottleneck_features():
    datagen = ImageDataGenerator(rescale=1./255)
    model = applications.VGG16(include_top=False, weights='imagenet')

    generator = datagen.flow_from_directory(
        TRAIN_DIR,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)    
    print("Predicting train samples..")
    bottleneck_features_train = model.predict_generator(generator, nb_train_samples)
    np.save(open('bottleneck_features_train.npy', 'wb'), bottleneck_features_train)

    generator = datagen.flow_from_directory(
        VALID_DIR,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    print("Predicting valid samples..")
    bottleneck_features_valid = model.predict_generator(generator, nb_valid_samples)
    np.save(open('bottleneck_features_valid.npy', 'wb'), bottleneck_features_valid)

Then I create a top model and train it on these features like this:

Code language: python
def train_top_model():
    train_data = np.load(open('bottleneck_features_train.npy', 'rb'))
    train_labels = np.array([0]*(nb_train_samples // 2) + 
                            [1]*(nb_train_samples // 2))
    valid_data = np.load(open('bottleneck_features_valid.npy', 'rb'))
    valid_labels = np.array([0]*(nb_valid_samples // 2) +
                            [1]*(nb_valid_samples // 2))
    model = Sequential()
    model.add(Flatten(input_shape=train_data.shape[1:]))  
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(train_data, train_labels,
              nb_epoch=nb_epoch,
              batch_size=batch_size,
              validation_data=(valid_data, valid_labels),
              verbose=1)
    model.save_weights('top_model.h5')   

So, basically, there are two trained models: base_model with ImageNet weights and top_model with weights generated from the bottleneck features. I wonder how to connect them. Is it possible at all, or am I doing something wrong? Because as far as I can see, @thomas's answer assumes that the top model is not trained separately but is attached to the base model right away. Not sure if I am being clear, so here is a quote from the blog:

In order to perform fine-tuning, all layers should start with properly trained weights: for instance you should not slap a randomly initialized fully-connected network on top of a pre-trained convolutional base. This is because the large gradient updates triggered by the randomly initialized weights would wreck the learned weights in the convolutional base. In our case this is why we first train the top-level classifier, and only then start fine-tuning convolutional weights alongside it.


3 Answers

Stack Overflow user

Accepted answer

Posted on 2017-04-13 08:08:13

I think the weights described by the vgg net do not fit your model, and that is where the error comes from. In any case, there is a better way to do this, using the network itself as described in https://keras.io/applications/#vgg16.

You can just use:

Code language: python
base_model = keras.applications.vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None, input_shape=None)

to instantiate a pre-trained vgg net. Then you can freeze the layers and instantiate a model of your own with the Model class, like this:

Code language: python
x = base_model.output
x = Flatten()(x)
x = Dense(your_classes, activation='softmax')(x) #minor edit
new_model = Model(input=base_model.input, output=x)
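
The snippet above does not show the freezing step mentioned in the text. A minimal sketch of how that is typically done, using the standard Keras trainable attribute (this part is not in the original answer):

Code language: python
# Freeze the pre-trained convolutional base so that only the newly added
# top layers are updated during training; compile afterwards so the
# trainable flags take effect.
for layer in base_model.layers:
    layer.trainable = False

new_model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])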

To combine the bottom and the top network, you can use the following snippet. It uses the Input layer (https://keras.io/getting-started/functional-api-guide/), load_model (https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model) and the functional API of Keras:

Code language: python
final_input = Input(shape=(3, 224, 224))
base_model = vgg...
top_model = load_model(weights_file)

x = base_model(final_input)
result = top_model(x)
final_model = Model(input=final_input, output=result)
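
For completeness, a hedged sketch of how such a combined model might then be compiled and trained; the generator names are placeholders, and the images they yield must match the shape given to final_input:

Code language: python
from keras.optimizers import SGD

# A very small learning rate keeps the pre-trained weights from being
# wrecked by large gradient updates (see the blog quote in the question).
final_model.compile(optimizer=SGD(lr=1e-4, momentum=0.9),
                    loss='binary_crossentropy',
                    metrics=['accuracy'])

final_model.fit_generator(train_gen,
                          steps_per_epoch=nb_train_samples // batch_size,
                          epochs=nb_epoch,
                          validation_data=valid_gen,
                          validation_steps=nb_valid_samples // batch_size)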
Votes: 9

Stack Overflow user

Posted on 2017-05-21 08:46:24

I think you can concatenate the two by doing something like this:

Code language: python
#load vgg model
vgg_model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
print('Model loaded.')

#initialise top model
top_model = Sequential()
top_model.add(Flatten(input_shape=vgg_model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))


top_model.load_weights(top_model_weights_path)

# add the model on top of the convolutional base

model = Model(input= vgg_model.input, output= top_model(vgg_model.output))

This solution refers to the example "Fine-tune the top layers of a pre-trained network". The full code can be found here.
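
The snippet stops at building the combined model; a possible continuation (a sketch reusing the same variable names, not part of the original answer) would be to freeze the convolutional base and compile before fitting:

Code language: python
from keras.optimizers import SGD

# Freeze the VGG16 convolutional base so only the top classifier is trained;
# to fine-tune the last conv block as well, leave its layers trainable.
for layer in vgg_model.layers:
    layer.trainable = False

model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])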

Votes: 3

Stack Overflow user

Posted on 2017-06-12 03:13:33

OK, I guess Thomas and Gowtham posted correct (and more concise) answers, but I wanted to share the code I eventually managed to run successfully:

Code language: python
# build_vgg_16(), get_batches(), BATCH_SIZE and VGG16_WEIGHTS_PATH are
# helpers/constants defined elsewhere and not shown here.
import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout
from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.data_utils import get_file
from keras.utils.np_utils import to_categorical

def train_finetuned_model(lr=1e-5, verbose=True):
    file_path = get_file('vgg16.h5', VGG16_WEIGHTS_PATH, cache_subdir='models')
    if verbose:
        print('Building VGG16 (no-top) model to generate bottleneck features.')

    vgg16_notop = build_vgg_16()
    vgg16_notop.load_weights(file_path)
    for _ in range(6):
        vgg16_notop.pop()
    vgg16_notop.compile(optimizer=RMSprop(lr=lr), loss='categorical_crossentropy', metrics=['accuracy'])    

    if verbose:
        print('Bottleneck features generation.')

    train_batches = get_batches('train', shuffle=False, class_mode=None, batch_size=BATCH_SIZE)
    train_labels = np.array([0]*1000 + [1]*1000)
    train_bottleneck = vgg16_notop.predict_generator(train_batches, steps=2000 // BATCH_SIZE)
    valid_batches = get_batches('valid', shuffle=False, class_mode=None, batch_size=BATCH_SIZE)
    valid_labels = np.array([0]*400 + [1]*400)
    valid_bottleneck = vgg16_notop.predict_generator(valid_batches, steps=800 // BATCH_SIZE)

    if verbose:
        print('Training top model on bottleneck features.')

    top_model = Sequential()
    top_model.add(Flatten(input_shape=train_bottleneck.shape[1:]))
    top_model.add(Dense(4096, activation='relu'))
    top_model.add(Dropout(0.5))
    top_model.add(Dense(4096, activation='relu'))
    top_model.add(Dropout(0.5))
    top_model.add(Dense(2, activation='softmax'))
    top_model.compile(optimizer=RMSprop(lr=lr), loss='categorical_crossentropy', metrics=['accuracy'])
    top_model.fit(train_bottleneck, to_categorical(train_labels),
                  batch_size=32, epochs=10,
                  validation_data=(valid_bottleneck, to_categorical(valid_labels)))

    if verbose:
        print('Concatenate new VGG16 (without top layer) with pretrained top model.')

    vgg16_fine = build_vgg_16()
    vgg16_fine.load_weights(file_path)
    for _ in range(6):
        vgg16_fine.pop()
    vgg16_fine.add(Flatten(name='top_flatten'))    
    vgg16_fine.add(Dense(4096, activation='relu'))
    vgg16_fine.add(Dropout(0.5))
    vgg16_fine.add(Dense(4096, activation='relu'))
    vgg16_fine.add(Dropout(0.5))
    vgg16_fine.add(Dense(2, activation='softmax'))
    vgg16_fine.compile(optimizer=RMSprop(lr=lr), loss='categorical_crossentropy', metrics=['accuracy'])

    if verbose:
        print('Loading pre-trained weights into concatenated model')

    for i, layer in enumerate(reversed(top_model.layers), 1):
        pretrained_weights = layer.get_weights()
        vgg16_fine.layers[-i].set_weights(pretrained_weights)

    for layer in vgg16_fine.layers[:26]:
        layer.trainable = False

    if verbose:
        print('Layers training status:')
        for layer in vgg16_fine.layers:
            print('[%6s] %s' % ('' if layer.trainable else 'FROZEN', layer.name))        

    vgg16_fine.compile(optimizer=RMSprop(lr=1e-6), loss='binary_crossentropy', metrics=['accuracy'])

    if verbose:
        print('Train concatenated model on dogs/cats dataset sample.')

    train_datagen = ImageDataGenerator(rescale=1./255,
                                       shear_range=0.2,
                                       zoom_range=0.2,
                                       horizontal_flip=True)
    test_datagen = ImageDataGenerator(rescale=1./255)
    train_batches = get_batches('train', gen=train_datagen, class_mode='categorical', batch_size=BATCH_SIZE)
    valid_batches = get_batches('valid', gen=test_datagen, class_mode='categorical', batch_size=BATCH_SIZE)
    vgg16_fine.fit_generator(train_batches, epochs=100,
                             steps_per_epoch=2000 // BATCH_SIZE,
                             validation_data=valid_batches,
                             validation_steps=800 // BATCH_SIZE)
    return vgg16_fine 

This is a bit verbose and does everything manually (i.e. copies the weights from the pre-trained layers into the concatenated model), but it works, more or less.

Though the accuracy of the code I posted is quite low (around 70%), that is a different story.

Votes: 1
The original content of this page was provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/43386463
