
Quantization of a custom model with custom layers (full int8)
Stack Overflow user
Asked on 2022-05-24 11:10:40
1 answer · 225 views · 0 followers · score 3

Hi, I have a custom model that requires every layer to be quantized to int8.

I have a custom layer called CustomLayer.

Post-training quantization (this works):

converter = tf.lite.TFLiteConverter.from_keras_model(tflite_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
print("converting")
tflite_model_bytes = converter.convert()
print("converted")
model_file_name = 'tflite_model.tflite'

# Save the model.
with open(model_file_name, 'wb') as f:
    f.write(tflite_model_bytes)
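The converter code above relies on a `representative_dataset` generator that is never shown. A minimal sketch of what such a generator can look like, assuming a model with 224×224 RGB float inputs (the shape and the random data are placeholders; use your model's real input shape and real calibration samples):

```python
import numpy as np

# Hypothetical calibration set: the (224, 224, 3) input shape and the random
# data are assumptions -- substitute your model's real inputs and samples.
calibration_data = np.random.rand(100, 224, 224, 3).astype(np.float32)

def representative_dataset():
    # The converter expects a generator that yields a list of input arrays,
    # one calibration sample (with a leading batch dimension) per step.
    for sample in calibration_data:
        yield [np.expand_dims(sample, axis=0)]
```

The converter runs these samples through the float model to calibrate the int8 quantization ranges, so they should be drawn from data representative of what the model will see at inference time.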

Quantization-aware training approach:

I am using the default quantize config:

class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # Configure how to quantize weights.
    def get_weights_and_quantizers(self, layer):
      return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]

    # Configure how to quantize activations.
    def get_activations_and_quantizers(self, layer):
      return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]

    def set_quantize_weights(self, layer, quantize_weights):
      # Add this line for each item returned in `get_weights_and_quantizers`
      # , in the same order
      layer.kernel = quantize_weights[0]

    def set_quantize_activations(self, layer, quantize_activations):
      # Add this line for each item returned in `get_activations_and_quantizers`
      # , in the same order.
      layer.activation = quantize_activations[0]

    # Configure how to quantize outputs (may be equivalent to activations).
    def get_output_quantizers(self, layer):
      return []

    def get_config(self):
      return {}

and tried to use:

with quantize_scope({
    "DefaultDenseQuantizeConfig": DefaultDenseQuantizeConfig,
    "CustomLayer": CustomLayer
}):

    q_aware_model = tfmot.quantization.keras.quantize_model(tflite_model)

I get this error:

RuntimeError: Layer custom_layer_1 is not supported. You can quantize this layer by passing a tfmot.quantization.keras.QuantizeConfig instance to the quantize_annotate_layer API.

So I tried annotating my layers.

Annotating the layers:

I tried to annotate each layer in my model as follows:

with quantize_scope({
    "DefaultDenseQuantizeConfig": DefaultDenseQuantizeConfig,
    "CustomLayer": CustomLayer
}):
    def apply_quantization_to_layer(layer):
      return tfmot.quantization.keras.quantize_annotate_layer(layer, DefaultDenseQuantizeConfig())
    
    annotated_model = tf.keras.models.clone_model(
        tflite_model,
        clone_function=apply_quantization_to_layer,
    )

    tfmot.quantization.keras.quantize_apply(annotated_model)

AttributeError: 'CustomLayer' object has no attribute 'kernel'

This error occurs in:

def get_weights_and_quantizers(self, layer):
      return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]

The same problem occurs with layer.activation in get_activations_and_quantizers.


1 Answer

Stack Overflow user

Answered on 2022-05-24 12:50:12

OK, I think I found a solution: I no longer get any errors, training seems to succeed, and the model seems to save successfully as a .tflite file.

My understanding is that get_weights_and_quantizers returns a list where each item is (tensorFromLayer, quantizerYouWantToUse), and tensorFromLayer is then quantized using quantizerYouWantToUse. set_quantize_weights then receives the list of quantized tensors and sets them back on the layer.

This is the solution I came up with:

class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):

    # List all of your weights
    weights = {
        "kernel": LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False)
    }

    # List of all your activations
    activations = {
        "activation": MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False)
    }

    # Configure how to quantize weights.
    def get_weights_and_quantizers(self, layer):
        output = []
        for attribute, quantizer in self.weights.items():
            if hasattr(layer, attribute):
                output.append((getattr(layer, attribute), quantizer))

        return output

    # Configure how to quantize activations.
    def get_activations_and_quantizers(self, layer):
        output = []
        for attribute, quantizer in self.activations.items():
            if hasattr(layer, attribute):
                output.append((getattr(layer, attribute), quantizer))

        return output

    def set_quantize_weights(self, layer, quantize_weights):
        # Add this line for each item returned in `get_weights_and_quantizers`
        # , in the same order

        count = 0
        for attribute in self.weights.keys():
            if hasattr(layer, attribute):
                setattr(layer, attribute, quantize_weights[count])
                count += 1

    def set_quantize_activations(self, layer, quantize_activations):
        # Add this line for each item returned in `get_activations_and_quantizers`
        # , in the same order.
        count = 0
        for attribute in self.activations.keys():
            if hasattr(layer, attribute):
                setattr(layer, attribute, quantize_activations[count])
                count += 1

    # Configure how to quantize outputs (may be equivalent to activations).
    def get_output_quantizers(self, layer):
        return []

    def get_config(self):
        return {}
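The key change is the hasattr guard: a layer that lacks an attribute such as `kernel` simply contributes no entries to the list, instead of raising the AttributeError seen earlier. The pattern can be illustrated without TensorFlow (the class and function names below are illustrative stand-ins, not tfmot APIs):

```python
# Stand-in layer classes (illustrative only): one has a `kernel`
# attribute, the other -- like CustomLayer -- does not.
class DenseLike:
    def __init__(self):
        self.kernel = "dense_kernel"

class KernelFreeLayer:
    pass  # no `kernel` attribute

WEIGHT_ATTRS = ("kernel",)

def collect_weights(layer):
    # Mirrors get_weights_and_quantizers above: only pick up attributes
    # the layer actually has, so unsupported layers contribute nothing.
    return [getattr(layer, attr) for attr in WEIGHT_ATTRS if hasattr(layer, attr)]

print(collect_weights(DenseLike()))       # ['dense_kernel']
print(collect_weights(KernelFreeLayer())) # [] -- no AttributeError
```

Note that with this config, a layer without `kernel` or `activation` is passed through with nothing quantized, so verify that the resulting .tflite graph actually contains the quantized ops you expect.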
Score: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/72361837