I am trying to follow the fine-tuning steps described at https://www.tensorflow.org/tutorials/images/transfer_learning#create_the_base_model_from_the_pre-trained_convnets to obtain a trained binary segmentation model.
I created an encoder-decoder, where the encoder's weights are those of MobileNetV2, frozen with encoder.trainable = False. Then I defined the decoder as described in the tutorial and trained the network for 300 epochs with a learning rate of 0.005. I got the following loss values and Jaccard index over the last epochs:
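For reference, a minimal sketch of this kind of setup, assuming a 224x224 input; the decoder here is a hypothetical placeholder (the question does not show the real one), and weights=None is used to keep the sketch self-contained where the tutorial uses weights="imagenet":

```python
import tensorflow as tf

# Encoder: MobileNetV2 without its classification head.
encoder = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
encoder.trainable = False  # freeze all encoder layers

# Hypothetical minimal decoder: upsample the 7x7 encoder features
# back to a 224x224, 2-class (binary segmentation) logit map.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = encoder(inputs, training=False)
x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same")(x)  # 7 -> 14
x = tf.keras.layers.UpSampling2D(16)(x)                                   # 14 -> 224
outputs = tf.keras.layers.Conv2D(2, 1)(x)  # raw logits, hence from_logits=True
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
```

With the encoder frozen this way, only the decoder layers contribute trainable parameters during the first training phase.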
Epoch 297/300
55/55 [==============================] - 85s 2s/step - loss: 0.2443 - jaccard_sparse3D: 0.5556 - accuracy: 0.9923 - val_loss: 0.0440 - val_jaccard_sparse3D: 0.3172 - val_accuracy: 0.9768
Epoch 298/300
55/55 [==============================] - 75s 1s/step - loss: 0.2437 - jaccard_sparse3D: 0.5190 - accuracy: 0.9932 - val_loss: 0.0422 - val_jaccard_sparse3D: 0.3281 - val_accuracy: 0.9776
Epoch 299/300
55/55 [==============================] - 78s 1s/step - loss: 0.2465 - jaccard_sparse3D: 0.4557 - accuracy: 0.9936 - val_loss: 0.0431 - val_jaccard_sparse3D: 0.3327 - val_accuracy: 0.9769
Epoch 300/300
55/55 [==============================] - 85s 2s/step - loss: 0.2467 - jaccard_sparse3D: 0.5030 - accuracy: 0.9923 - val_loss: 0.0463 - val_jaccard_sparse3D: 0.3315 - val_accuracy: 0.9740
I saved all the weights of this model and then ran the fine-tuning with the following steps:
model.load_weights('my_pretrained_weights.h5')
model.trainable = True
model.compile(optimizer=Adam(learning_rate=0.00001, name='adam'),
              loss=SparseCategoricalCrossentropy(from_logits=True),
              metrics=[jaccard, "accuracy"])
model.fit(training_generator, validation_data=(val_x, val_y), epochs=5,
          validation_batch_size=2, callbacks=callbacks)
Suddenly, my model performed much worse than when I was training the decoder:
Epoch 1/5
55/55 [==============================] - 89s 2s/step - loss: 0.2417 - jaccard_sparse3D: 0.0843 - accuracy: 0.9946 - val_loss: 0.0079 - val_jaccard_sparse3D: 0.0312 - val_accuracy: 0.9992
Epoch 2/5
55/55 [==============================] - 90s 2s/step - loss: 0.1920 - jaccard_sparse3D: 0.1179 - accuracy: 0.9927 - val_loss: 0.0138 - val_jaccard_sparse3D: 7.1138e-05 - val_accuracy: 0.9998
Epoch 3/5
55/55 [==============================] - 95s 2s/step - loss: 0.2173 - jaccard_sparse3D: 0.1227 - accuracy: 0.9932 - val_loss: 0.0171 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 0.9999
Epoch 4/5
55/55 [==============================] - 94s 2s/step - loss: 0.2428 - jaccard_sparse3D: 0.1319 - accuracy: 0.9927 - val_loss: 0.0190 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 1.0000
Epoch 5/5
55/55 [==============================] - 97s 2s/step - loss: 0.1920 - jaccard_sparse3D: 0.1107 - accuracy: 0.9926 - val_loss: 0.0215 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 1.0000
Why does this happen? Is there any known cause? Is it normal? Thanks in advance!
Posted on 2021-03-07 00:46:57
OK, I found what I did differently, which makes recompiling unnecessary. I did not set encoder.trainable = False. What I did in the code below is equivalent:
for layer in encoder.layers:
    layer.trainable = False
Then train your model. Afterwards, you can unfreeze the encoder weights with:
for layer in encoder.layers:
    layer.trainable = True
You do not need to recompile the model. I tested this, and it works as expected. You can verify it by printing the model summary before and after and checking the number of trainable parameters.
As for changing the learning rate, I find it best to use the Keras ReduceLROnPlateau callback to adjust the learning rate automatically based on validation loss. I also recommend the EarlyStopping callback, which monitors validation loss and halts training if the loss fails to decrease for 'patience' consecutive epochs. Setting restore_best_weights=True loads the weights from the epoch with the lowest validation loss, so you don't have to save and then reload the weights yourself. Set epochs to a large number to make sure this callback can trigger. The code I use is shown below:
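The freeze/unfreeze-and-verify step above can be sketched as follows; a small toy model stands in for the encoder, and the parameter counts are specific to this illustrative model:

```python
import tensorflow as tf

# Toy stand-in for the encoder: two Dense layers, built immediately
# because the input shape is known.
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,)),  # 4*8 + 8 = 40 params
    tf.keras.layers.Dense(2),                    # 8*2 + 2 = 18 params
])

def count_trainable(m):
    # Sum the number of scalars across all trainable weight tensors,
    # i.e. the "Trainable params" figure from model.summary().
    return sum(int(tf.size(w)) for w in m.trainable_weights)

before = count_trainable(encoder)   # all 58 params trainable

for layer in encoder.layers:        # freeze, as in the answer
    layer.trainable = False
frozen = count_trainable(encoder)   # 0

for layer in encoder.layers:        # unfreeze
    layer.trainable = True
after = count_trainable(encoder)    # back to the original count

print(before, frozen, after)        # 58 0 58
```

Printing model.summary() at each stage shows the same numbers in its "Trainable params" line.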
es = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                      verbose=1, restore_best_weights=True)
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=1, verbose=1)
callbacks = [es, rlronp]
In model.fit, set callbacks=callbacks.
https://stackoverflow.com/questions/66460418