I'm trying to speed up YOLOv3 inference with TF2 and TensorRT, using the TrtGraphConverter function in TensorFlow 2.
My code is essentially:
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

tf.keras.backend.set_learning_phase(0)
converter = trt.TrtGraphConverter(
    input_saved_model_dir="./tmp/yolosaved/",
    precision_mode="FP16",
    is_dynamic_op=True)
converter.convert()
saved_model_dir_trt = "./tmp/yolov3.trt"
converter.save(saved_model_dir_trt)
This produces the following error:
Traceback (most recent call last):
File "/home/pierre/Programs/anaconda3/envs/Deep2/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1 of node StatefulPartitionedCall was passed float from conv2d/kernel:0 incompatible with expected resource.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pierre/Documents/GitHub/yolov3-tf2/tensorrt.py", line 23, in <module>
converter.save(saved_model_dir_trt)
File "/home/pierre/Programs/anaconda3/envs/Deep2/lib/python3.6/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 822, in save
super(TrtGraphConverter, self).save(output_saved_model_dir)
File "/home/pierre/Programs/anaconda3/envs/Deep2/lib/python3.6/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 432, in save
importer.import_graph_def(self._converted_graph_def, name="")
File "/home/pierre/Programs/anaconda3/envs/Deep2/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/pierre/Programs/anaconda3/envs/Deep2/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 431, in import_graph_def
raise ValueError(str(e))
ValueError: Input 1 of node StatefulPartitionedCall was passed float from conv2d/kernel:0 incompatible with expected resource.
Does this mean that some of my nodes couldn't be converted? If so, why does my code only fail at the .save step?
Posted on 2019-07-29 23:30:46
In the end, I solved this with the code below. I also switched from tf-2.0-beta0 to tf-nightly-gpu-2.0-preview.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode='FP16',
    is_dynamic_op=True)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=saved_model_dir,
    conversion_params=params)
converter.convert()
saved_model_dir_trt = "/tmp/model.trt"
converter.save(saved_model_dir_trt)
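Once saved, the converted model can be loaded back like any SavedModel and run through its serving signature. A minimal sketch, assuming the `/tmp/model.trt` directory produced above and a 416x416 YOLOv3 input shape (the path, signature key, and input shape are assumptions for illustration):

```python
import numpy as np
import tensorflow as tf

def load_and_infer(saved_model_dir, batch):
    """Load a SavedModel (TRT-converted or plain) and run one inference."""
    loaded = tf.saved_model.load(saved_model_dir)
    infer = loaded.signatures["serving_default"]
    # With is_dynamic_op=True the first forward pass also builds the
    # TensorRT engines, so expect it to be slow; benchmark later calls.
    return infer(tf.constant(batch))

# Example call (input shape assumed for a 416x416 YOLOv3 model):
# outputs = load_and_infer("/tmp/model.trt",
#                          np.zeros((1, 416, 416, 3), np.float32))
```

Timing the second and later calls gives a fairer picture of the TensorRT speedup, since engine building happens lazily on the first call.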
Thanks for your help.
Posted on 2019-07-29 12:10:05
When you use TensorRT, keep in mind that your model architecture may contain unsupported layers; there is a TensorRT support matrix you can consult. YOLO contains many custom layers that are not implemented, such as the "yolo layer".
So if you want to turn YOLO into a TensorRT-optimized model, you need to choose among the alternative approaches.
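One way to see how much of the graph TensorRT actually took over is to count how many nodes ended up as TRTEngineOp versus native TensorFlow fallbacks; unsupported layers stay as TF ops. A minimal sketch with a hypothetical helper (the op names below are illustrative, not taken from a real YOLOv3 graph):

```python
from collections import Counter

def count_converted_ops(op_types):
    """Split op type names into TensorRT engine nodes vs. native TF fallbacks."""
    counts = Counter(op_types)
    trt_engines = counts.get("TRTEngineOp", 0)
    native = sum(n for t, n in counts.items() if t != "TRTEngineOp")
    return trt_engines, native

# With a real converted model you would collect op types from the graph, e.g.:
#   op_types = [n.op for n in converted_graph_def.node]
sample_ops = ["TRTEngineOp", "TRTEngineOp", "NonMaxSuppressionV4", "Identity"]
print(count_converted_ops(sample_ops))  # (2, 2)
```

A graph with many native ops left over (or many small engines interleaved with TF ops) usually sees little speedup, since data bounces between TensorRT and TensorFlow at every boundary.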
Posted on 2019-07-26 13:07:28
Might be a long shot, but which GPU are you using? As far as I know, precision_mode="FP16" is only supported on certain architectures, such as Pascal (the TX2 series) and Turing (the ~2080 series). Porting from TF2 to TRT with FP16 has given good results.
https://stackoverflow.com/questions/57117397