
Modifying and combining two different frozen graphs generated by the TensorFlow Object Detection API for inference

Stack Overflow user
Asked on 2020-01-03 05:39:46
2 answers · 1.5K views · 0 followers · 4 votes

I am using the TensorFlow Object Detection API and have trained two different models for my use case (SSD-MobileNet and FRCNN-Inception v2). Currently, my workflow is as follows:

  1. Take the input image and detect the specific object with the SSD model.
  2. Crop the input image with the bounding box produced in step 1, then resize the crop to a fixed size (e.g. 200 × 300).
  3. Feed this cropped and resized image to FRCNN-Inception v2 to detect the smaller objects inside the ROI.

Currently, at inference time, when I load the two separate frozen graphs and run the steps one after the other, I get the results I want. However, because of my deployment requirements, I need a single frozen graph. I am new to TensorFlow and would like to combine the two graphs, with the crop-and-resize step wired in between them.
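The crop-and-resize glue in step 2 can be sketched in plain NumPy (a minimal illustration, not the asker's code: the normalized [ymin, xmin, ymax, xmax] box format matches what the Object Detection API emits, and nearest-neighbour sampling stands in for whatever interpolation is actually used):

```python
import numpy as np

def crop_and_resize(image, box, out_h=200, out_w=300):
    """Crop `image` to a normalized [ymin, xmin, ymax, xmax] box
    (the TF Object Detection API box format) and resize the crop
    to a fixed size with nearest-neighbour sampling."""
    h, w = image.shape[:2]
    ymin, xmin, ymax, xmax = box
    y1, y2 = int(ymin * h), int(ymax * h)
    x1, x2 = int(xmin * w), int(xmax * w)
    crop = image[y1:y2, x1:x2]
    # nearest-neighbour resize via integer index maps
    rows = np.arange(out_h) * crop.shape[0] // out_h
    cols = np.arange(out_w) * crop.shape[1] // out_w
    return crop[rows][:, cols]

img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
patch = crop_and_resize(img, [0.1, 0.2, 0.6, 0.7])
print(patch.shape)  # (200, 300, 3)
```

In the accepted answer below, the same step is done inside the graph with `tf.image.crop_and_resize`, which likewise expects normalized box coordinates.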


2 Answers

Stack Overflow user

Answer accepted

Posted on 2020-01-07 11:33:54

Thanks @Matt and @Vedanshu for the responses. Here is the updated code, which works well for my requirement. Please suggest improvements if anything needs fixing, as I am still learning.

Code language: python
# Dependencies
import tensorflow as tf
import numpy as np


# load graphs using pb file path
def load_graph(pb_file):
    graph = tf.Graph()
    with graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(pb_file, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='') 
    return graph


# returns tensor dictionaries from graph
def get_inference(graph, count=0):
    with graph.as_default():
        ops = tf.get_default_graph().get_operations()
        all_tensor_names = {output.name for op in ops for output in op.outputs}
        tensor_dict = {}
        for key in ['num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks', 'image_tensor']:
            tensor_name = key + ':0' if count == 0 else key + '_{}:0'.format(count)
            if tensor_name in all_tensor_names:
                tensor_dict[key] = tf.get_default_graph().\
                                        get_tensor_by_name(tensor_name)
        return tensor_dict


# renames while_context because there is one while function for every graph
# open issue at https://github.com/tensorflow/tensorflow/issues/22162  
def rename_frame_name(graphdef, suffix):
    for n in graphdef.node:
        if "while" in n.name:
            if "frame_name" in n.attr:
                n.attr["frame_name"].s = n.attr["frame_name"].s.replace(
                    b"while_context", b"while_context" + suffix.encode("utf-8"))


if __name__ == '__main__':

    # your pb file paths
    frozenGraphPath1 = '...replace_with_your_path/some_frozen_graph.pb'
    frozenGraphPath2 = '...replace_with_your_path/some_frozen_graph.pb'

    # new file name to save combined model
    combinedFrozenGraph = 'combined_frozen_inference_graph.pb'

    # loads both graphs
    graph1 = load_graph(frozenGraphPath1)
    graph2 = load_graph(frozenGraphPath2)

    # get tensor names from first graph
    tensor_dict1 = get_inference(graph1)

    with graph1.as_default():

        # getting tensors to add crop and resize step
        image_tensor = tensor_dict1['image_tensor']
        scores = tensor_dict1['detection_scores'][0]
        num_detections = tf.cast(tensor_dict1['num_detections'][0], tf.int32)
        detection_boxes = tensor_dict1['detection_boxes'][0]

        # Added NMS because my SSD model outputs 100 detections, which otherwise
        # runs out of memory due to the huge tensor shape
        selected_indices = tf.image.non_max_suppression(detection_boxes, scores, 5, iou_threshold=0.5)
        selected_boxes = tf.gather(detection_boxes, selected_indices)

        # intermediate crop and resize step, which will be input for second model(FRCNN)
        cropped_img = tf.image.crop_and_resize(image_tensor,
                                               selected_boxes,
                                               tf.zeros(tf.shape(selected_indices), dtype=tf.int32),
                                               [300, 60] # resize to 300 X 60
                                               )
        cropped_img = tf.cast(cropped_img, tf.uint8, name='cropped_img')


    gdef1 = graph1.as_graph_def()
    gdef2 = graph2.as_graph_def()

    g1name = "graph1"
    g2name = "graph2"

    # renaming while_context in both graphs
    rename_frame_name(gdef1, g1name)
    rename_frame_name(gdef2, g2name)

    # This combines both models and save it as one
    with tf.Graph().as_default() as g_combined:

        x, y = tf.import_graph_def(gdef1, return_elements=['image_tensor:0', 'cropped_img:0'])

        z, = tf.import_graph_def(gdef2, input_map={"image_tensor:0": y}, return_elements=['detection_boxes:0'])

        tf.train.write_graph(g_combined, "./", combinedFrozenGraph, as_text=False)
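Conceptually, the `input_map` argument in the final step above is function composition: graph 2's `image_tensor` placeholder is replaced by graph 1's `cropped_img` output. A toy model of the wiring, with plain Python functions standing in for the two graphs (hypothetical names, no TensorFlow involved):

```python
def ssd_graph(image):
    # stands in for graph 1: SSD detection + NMS + crop_and_resize
    return {"cropped_img": ("crop", image)}

def frcnn_graph(image_tensor):
    # stands in for graph 2: the FRCNN detector
    return {"detection_boxes": ("boxes", image_tensor)}

def combined_graph(image):
    y = ssd_graph(image)["cropped_img"]        # return_elements of graph 1
    # input_map={"image_tensor:0": y}: graph 2 now reads graph 1's output
    return frcnn_graph(y)["detection_boxes"]

print(combined_graph("input image"))  # ('boxes', ('crop', 'input image'))
```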
3 votes

Stack Overflow user

Posted on 2020-01-03 17:25:45

You can use `input_map` in `import_graph_def` to feed the output of one graph into another. You also have to rename the `while_context`, because there is one while function per graph. Like this:

Code language: python
import os
import shutil
import tensorflow as tf


def get_frozen_graph(graph_file):
    """Read Frozen Graph file from disk."""
    with tf.gfile.GFile(graph_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def

def rename_frame_name(graphdef, suffix):
    # Bug reported at https://github.com/tensorflow/tensorflow/issues/22162#issuecomment-428091121
    for n in graphdef.node:
        if "while" in n.name:
            if "frame_name" in n.attr:
                n.attr["frame_name"].s = n.attr["frame_name"].s.replace(
                    b"while_context", b"while_context" + suffix.encode("utf-8"))
...

l1_graph = tf.Graph()
with l1_graph.as_default():
    trt_graph1 = get_frozen_graph(pb_fname1)
    [tf_input1, tf_scores1, tf_boxes1, tf_classes1, tf_num_detections1] = tf.import_graph_def(trt_graph1, 
            return_elements=['image_tensor:0', 'detection_scores:0', 'detection_boxes:0', 'detection_classes:0','num_detections:0'])

    input1 = tf.identity(tf_input1, name="l1_input")
    boxes1 = tf.identity(tf_boxes1[0], name="l1_boxes")  # index by 0 to remove batch dimension
    scores1 = tf.identity(tf_scores1[0], name="l1_scores")
    classes1 = tf.identity(tf_classes1[0], name="l1_classes")
    num_detections1 = tf.identity(tf.dtypes.cast(tf_num_detections1[0], tf.int32), name="l1_num_detections")

...
# Make your output tensor 
tf_out = # your output tensor (here, crop the input image with the bounding box generated from step 1 and then resize it to a fixed size(e.g. 200 X 300).)
...

connected_graph = tf.Graph()

with connected_graph.as_default():
    l1_graph_def = l1_graph.as_graph_def()
    g1name = 'ved'
    rename_frame_name(l1_graph_def, g1name)
    tf.import_graph_def(l1_graph_def, name=g1name)

    ...

    trt_graph2 = get_frozen_graph(pb_fname2)
    g2name = 'level2'
    rename_frame_name(trt_graph2, g2name)
    [tf_scores, tf_boxes, tf_classes, tf_num_detections] = tf.import_graph_def(trt_graph2,
            input_map={'image_tensor': tf_out},
            return_elements=['detection_scores:0', 'detection_boxes:0', 'detection_classes:0','num_detections:0'])


#######
# Export the graph

with connected_graph.as_default():
    print('\nSaving...')
    cwd = os.getcwd()
    path = os.path.join(cwd, 'saved_model')
    shutil.rmtree(path, ignore_errors=True)
    # NOTE: tf_sess_main, tf_input and the tf_*_l1 / tf_*_l2 tensors below are
    # assumed to be defined in the elided (...) sections above
    inputs_dict = {
        "image_tensor": tf_input
    }
    outputs_dict = {
        "detection_boxes_l1": tf_boxes_l1,
        "detection_scores_l1": tf_scores_l1,
        "detection_classes_l1": tf_classes_l1,
        "max_num_detection": tf_max_num_detection,
        "detection_boxes_l2": tf_boxes_l2,
        "detection_scores_l2": tf_scores_l2,
        "detection_classes_l2": tf_classes_l2
    }
    tf.saved_model.simple_save(
        tf_sess_main, path, inputs_dict, outputs_dict
    )
    print('Ok')
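Stripped of the protobuf plumbing, the renaming that `rename_frame_name` performs in both answers is a plain string substitution on each node's `frame_name` attribute, so that the two graphs' while-loop frames no longer collide. A stand-alone illustration (the byte strings here are made up):

```python
def rename_frame(frame_name_bytes, suffix):
    # the same substitution rename_frame_name applies to n.attr["frame_name"].s
    return frame_name_bytes.decode("utf-8").replace(
        "while_context", "while_context" + suffix).encode("utf-8")

# an identical frame name coming from two different graphs...
frame = b"Preprocessor/while/while_context"
a = rename_frame(frame, "graph1")
b = rename_frame(frame, "graph2")
print(a)  # b'Preprocessor/while/while_contextgraph1'
print(b)  # b'Preprocessor/while/while_contextgraph2'
# ...is unique after renaming, so the combined graph has no frame clash
assert a != b
```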
1 vote
Original page content from Stack Overflow; translation supported by Tencent Cloud.
Original link:

https://stackoverflow.com/questions/59573686
