I am trying to use the OpenVINO Inference Engine to speed up my DL inference. It works with a single image, but I want to create a batch of two images and then run inference on it. Here is my code:
InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("path/to/model.xml");
InferenceEngine::InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
std::string input_name = network.getInputsInfo().begin()->first;
InferenceEngine::DataPtr output_info = network.getOutputsInfo().begin()->second;
std::string output_name = network.getOutputsInfo().begin()->first;
InferenceEngine::ExecutableNetwork executableNetwork = core.LoadNetwork(network, "CPU");
InferenceEngine::InferRequest inferRequest = executableNetwork.CreateInferRequest();
std::string input_image_01 = "path/to/image_01.png";
cv::Mat image_01 = cv::imread(input_image_01);
InferenceEngine::Blob::Ptr imgBlob_01 = wrapMat2Blob(image_01);
std::string input_image_02 = "path/to/image_02.png";
cv::Mat image_02 = cv::imread(input_image_02);
InferenceEngine::Blob::Ptr imgBlob_02 = wrapMat2Blob(image_02);
InferenceEngine::BlobMap imgBlobMap;
std::pair<std::string, InferenceEngine::Blob::Ptr> pair01(input_image_01, imgBlob_01);
imgBlobMap.insert(pair01);
std::pair<std::string, InferenceEngine::Blob::Ptr> pair02(input_image_02, imgBlob_02);
imgBlobMap.insert(pair02);
inferRequest.SetInput(imgBlobMap);
inferRequest.StartAsync();
inferRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
InferenceEngine::Blob::Ptr output = inferRequest.GetBlob(output_name);
std::vector<unsigned> class_results;
ClassificationResult cls(output, {"x", "y"}, 2, 3);
class_results = cls.getResults();
Unfortunately, I received the following error message from the call
inferRequest.SetInput(imgBlobMap);
C:\j\workspace\private-ci\ie\build-windows-vs2019@2\b\repos\openvino\inference-engine\src\plugin_api\cpp_interfaces/impl/ie_infer_request_internal.hpp:303 C:\Program Files (x86)\Intel\openvino_2021.3.394\inference_engine\include\details/ie_exception_conversion.hpp:66 NOT_FOUND: Failed to find input or output with name: 'path/to/image_02.png'
How can I create a batch of more than one image, run inference on it, and get the class and confidence information for each image? Are the confidences and classes contained in the variable that receives GetBlob()? Is the call ClassificationResult cls(output, {"x", "y"}, 2, 3); required for that?
Posted on 2021-06-23 18:44:19
I suggest you review the Using Shape Inference article in the OpenVINO online documentation for the limitations of using batching. It also refers to the Open Model Zoo smart_classroom_demo, where dynamic batching is used while processing multiple previously detected faces.

Basically, when you enable batching in your model, the memory buffer of your input blob is allocated with room for all of the images in the batch, and it is your responsibility to fill the input blob with the data of each image in the batch. You may take a look at the function CnnDLSDKBase::InferBatch in the smart_classroom_demo, located in the file smart_classroom_demo/cpp/src/cnn.cpp at line 51. As you can see, in the loop over num_imgs the auxiliary function matU8ToBlob fills the input blob with the data of current_batch_size images, then the batch size is set for the inference request and inference is run. Note also that the keys of the BlobMap passed to SetInput must be the network's input names (input_name in your code), not image paths; that mismatch is what produces your NOT_FOUND error.
for (size_t batch_i = 0; batch_i < num_imgs; batch_i += batch_size) {
    const size_t current_batch_size = std::min(batch_size, num_imgs - batch_i);
    for (size_t b = 0; b < current_batch_size; b++) {
        matU8ToBlob<uint8_t>(frames[batch_i + b], input, b);
    }
    if (config_.max_batch_size != 1)
        infer_request_.SetBatch(current_batch_size);
    infer_request_.Infer();
}
Posted on 2021-10-18 03:09:09
There is a similar example that uses a batched input for a model in OpenVINO. You can refer to the link below.
https://stackoverflow.com/questions/68032153