【Question Title】: Error: TensorFlow preprocessing layers not converting to TensorFlow Lite
【Posted】: 2021-06-10 19:56:53
【Question】:

Following the example at https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers

I built a model with my own data, and I want to save it in TensorFlow Lite format. I saved it as a SavedModel, but the conversion fails with a long series of errors. The last error I get is:

WARNING:tensorflow:AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
INFO:tensorflow:Assets written to: /tmp/test_saved_model/assets
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    212       model body, the input/output will be quantized as well.
--> 213     inference_type: Data type for the activations. The default value is int8.
    214     enable_numeric_verify: Experimental. Subject to change. Bool indicating

4 frames
Exception: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
    tf.AddV2 {device = ""}
    tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
    tf.Mul {device = ""}Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
    tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
    tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
    tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}


During handling of the above exception, another exception occurred:

ConverterError                            Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    214     enable_numeric_verify: Experimental. Subject to change. Bool indicating
    215       whether to add NumericVerify ops into the debug mode quantized model.
--> 216 
    217   Returns:
    218     Quantized model in serialized form (e.g. a TFLITE model) with floating-point

ConverterError: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
    tf.AddV2 {device = ""}
    tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
    tf.Mul {device = ""}Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
    tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
    tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
    tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}

Code:


import pathlib

import tensorflow as tf

# Save the model into a temp directory as a SavedModel
export_dir = "/tmp/test_saved_model"
tf.saved_model.save(model, export_dir)

# Convert the model to TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

# Save the .tflite file
tflite_model_file = pathlib.Path('/tmp/save_model_tflite.tflite')
tflite_model_file.write_bytes(tflite_model)

What is causing these errors? My goal is to embed this model in a React Native app. Thanks.

【Question Discussion】:

    Tags: tensorflow tensorflow-lite keras-layer converters data-preprocessing


    【Solution 1】:

    Looking at your trace, your model contains some HashTable ops. You need to set `converter.allow_custom_ops = True` to convert this model.

    export_dir = "/content/test_saved_model"
    tf.saved_model.save(model, export_dir)

    # Convert the model to TF Lite, allowing custom ops for the
    # MutableHashTableV2 / LookupTableFindV2 lookup-table ops.
    converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
    converter.allow_custom_ops = True
    tflite_model = converter.convert()

    # Save the .tflite file
    tflite_model_files = pathlib.Path('/content/save_model_tflite.tflite')
    tflite_model_files.write_bytes(tflite_model)

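    The error message itself also points at a second knob: ops such as `tf.AddV2`, `tf.DenseBincount`, and `tf.Mul` are listed as "ops that can be supported by the flex runtime". If `allow_custom_ops` alone is not enough, enabling Select TF ops alongside it is the usual approach. A minimal sketch (the tiny `Sequential` model here is only a stand-in for your real model):

    ```python
    import tensorflow as tf

    # Toy stand-in model; replace with your real model.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    # Use builtin TFLite kernels where available, and fall back to the
    # TensorFlow "flex" runtime for ops with no builtin TFLite kernel
    # (e.g. tf.DenseBincount from the CategoryEncoding layer).
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    # Still needed for the hash-table lookup ops from the lookup layers.
    converter.allow_custom_ops = True

    tflite_model = converter.convert()  # serialized .tflite byte buffer
    ```

    Note that Select TF ops require shipping the flex delegate with your app (e.g. the `tensorflow-lite-select-tf-ops` AAR on Android), which increases binary size.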

    【Discussion】:

    • Thanks! The tflite file saved successfully. I'll run one more check on it. Is it possible the tflite file is less accurate? I don't know about this.
    • Yes, there will be some drop in accuracy. In other words, it will not be as accurate as the original model.
    • It converted, but I got this warning. Is it a problem? `WARNING:absl:Please change your code to save with tf.keras.models.save_model or model.save, and confirm that the file "keras.metadata" exists in the export directory. In the future, Keras will only load the SavedModels that have this file. In other words, tf.saved_model.save will no longer write SavedModels that can be recovered as Keras models. FOR DEVS: If you are overwriting _tracking_metadata in your class, this property has been used to save metadata in the SavedModel.`
    • It should be fine for now; alternatively, you can save your model as a TensorFlow SavedModel with tf.keras.models.save_model, which is what the warning is actually suggesting.
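    A minimal sketch of the Keras-aware save the last comment suggests, assuming TF 2.x where `tf.keras.models.save_model` writes a SavedModel directory by default (the toy model is a placeholder for your real one):

    ```python
    import tensorflow as tf

    # Toy stand-in model; replace with your real model.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])

    export_dir = "/tmp/test_keras_saved_model"
    # Unlike tf.saved_model.save, this records the Keras metadata, so the
    # SavedModel can later be reloaded as a full Keras model.
    tf.keras.models.save_model(model, export_dir)
    reloaded = tf.keras.models.load_model(export_dir)
    ```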