【Posted at】: 2021-04-20 09:48:28
【Question】:
I have read the similar question "Tensorflow (TF2) quantization to full integer error with TFLiteConverter RuntimeError: Quantization not yet supported for op: 'CUSTOM'", but it does not solve this problem in TF 2.4.1.
I followed this TensorFlow guide to convert with integer-only quantization:
https://tensorflow.google.cn/lite/performance/post_training_integer_quant
However, the conversion fails with this error:
RuntimeError: Quantization not yet supported for op: 'CUSTOM'.
Code:
import tensorflow as tf
import numpy as np

def representative_data_gen():
    # Yield ~100 sample inputs so the converter can calibrate activation ranges
    for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
        yield [input_value]

# Note: from_saved_model() expects the path to a SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model(model)
# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
# Set the representative dataset for the converter so we can quantize the activations
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()

# Write the quantized tflite model to a file
with open('my_quant.tflite', 'wb') as f:
    f.write(tflite_model)
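If strict full-integer conversion is not required, one possible workaround is to relax the target op set so the converter falls back to float for ops it cannot quantize (the CUSTOM op then stays unquantized). This is a minimal sketch, assuming `saved_model_dir` is the path to your SavedModel and `representative_data_gen` is the generator above; note the uint8 input/output setting is dropped here, since that requires full-integer quantization of every op.

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Allow both int8 builtins and float builtins: ops that cannot be
# quantized (such as the CUSTOM op) are kept in float instead of
# raising a RuntimeError.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
    tf.lite.OpsSet.TFLITE_BUILTINS,
]
# Keep the custom op as-is rather than rejecting the model
converter.allow_custom_ops = True
tflite_model = converter.convert()
```

The resulting model is only partially quantized, so it will not run on accelerators that require pure int8 models (e.g. Edge TPU), but it avoids the conversion error.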
How can I solve this problem?
Thanks.
【Discussion】:
I found this link: tensorflow.org/lite/guide/ops_compatibility. That page says "custom" ops are not yet supported for quantization.
Tags: tensorflow-lite quantization