I want to quantize a DenseNet model. I am using TensorFlow 2.4.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.applications.DenseNet121(
    include_top=True, weights=None, input_tensor=None,
    input_shape=None, pooling=None, classes=1000)
quantize_model = tfmot.quantization.keras.quantize_model
model = quantize_model(model)

But I got the following message:
Layer conv2_block1__bn:<class 'tensorflow.python.keras.layers.normalization_v2.BatchNormalization'> is not supported. You can quantize this layer by passing a tfmot.quantization.keras.QuantizeConfig instance to the quantize_annotate_layer API.
Is there any way for me to do this? I cannot change the Keras code.
Posted on 2021-01-29 03:39:48
In your case, you need to annotate the BatchNormalization layers individually for quantization.
The example snippet below, from the Quantization TF Guide, shows how a DefaultDenseQuantizeConfig can be used to handle this. Hopefully the guide helps you solve the problem.
# `DefaultDenseQuantizeConfig` is defined earlier in the linked guide as a
# `tfmot.quantization.keras.QuantizeConfig` subclass.
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

class CustomLayer(tf.keras.layers.Dense):
  pass

model = quantize_annotate_model(tf.keras.Sequential([
  quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
  tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
  {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
   'CustomLayer': CustomLayer}):
  # Use `quantize_apply` to actually make the model quantization aware.
  quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()

https://stackoverflow.com/questions/65626935