I have built a Keras model with two inputs, and I want to run predictions with it on my phone using SNPE. I have converted it successfully; it's just the C++ code I'm having trouble with now. I could run predictions with a one-input model on a 1-D array of any shape, but now I have a model whose two inputs each have size 1.
In Keras, the prediction looks like this: model.predict([np.array([.4]), np.array([.6])])
The SNPE code I have to run the prediction:
void init_model() {
    zdl::DlSystem::Runtime_t runt = checkRuntime();
    initializeSNPE(runt);
}
float run_model(float a, float b) {
    std::vector<float> inputVec;
    std::vector<float> inputVec2;
    inputVec.push_back(a);
    inputVec2.push_back(b);
    std::unique_ptr<zdl::DlSystem::ITensor> inputTensor = loadInputTensor(snpe, inputVec);
    std::unique_ptr<zdl::DlSystem::ITensor> inputTensor2 = loadInputTensor(snpe, inputVec2); // what do I do with this?
    zdl::DlSystem::ITensor* oTensor = executeNetwork(snpe, inputTensor);
    return returnOutput(oTensor);
}
The functions I use are adapted from the SNPE website. They worked for my previous predictions on a single array:
zdl::DlSystem::Runtime_t checkRuntime()
{
    static zdl::DlSystem::Version_t Version = zdl::SNPE::SNPEFactory::getLibraryVersion();
    static zdl::DlSystem::Runtime_t Runtime;
    std::cout << "SNPE Version: " << Version.asString().c_str() << std::endl; // print version number
    std::cout << "\ntest";
    if (zdl::SNPE::SNPEFactory::isRuntimeAvailable(zdl::DlSystem::Runtime_t::GPU)) {
        Runtime = zdl::DlSystem::Runtime_t::GPU;
    } else {
        Runtime = zdl::DlSystem::Runtime_t::CPU;
    }
    return Runtime;
}
void initializeSNPE(zdl::DlSystem::Runtime_t runtime) {
    std::unique_ptr<zdl::DlContainer::IDlContainer> container;
    container = zdl::DlContainer::IDlContainer::open("/path/to/model.dlc");
    //printf("loaded model\n");
    int counter = 0;
    zdl::SNPE::SNPEBuilder snpeBuilder(container.get());
    snpe = snpeBuilder.setOutputLayers({})
        .setRuntimeProcessor(runtime)
        .setUseUserSuppliedBuffers(false)
        .setPerformanceProfile(zdl::DlSystem::PerformanceProfile_t::HIGH_PERFORMANCE)
        .build();
}
std::unique_ptr<zdl::DlSystem::ITensor> loadInputTensor(std::unique_ptr<zdl::SNPE::SNPE> &snpe, std::vector<float> inputVec) {
    std::unique_ptr<zdl::DlSystem::ITensor> input;
    const auto &strList_opt = snpe->getInputTensorNames();
    if (!strList_opt) throw std::runtime_error("Error obtaining input tensor names");
    const auto &strList = *strList_opt;
    const auto &inputDims_opt = snpe->getInputDimensions(strList.at(0));
    const auto &inputShape = *inputDims_opt;
    input = zdl::SNPE::SNPEFactory::getTensorFactory().createTensor(inputShape);
    std::copy(inputVec.begin(), inputVec.end(), input->begin());
    return input;
}
float returnOutput(const zdl::DlSystem::ITensor* tensor) {
    float op = *tensor->cbegin();
    return op;
}
zdl::DlSystem::ITensor* executeNetwork(std::unique_ptr<zdl::SNPE::SNPE>& snpe,
                                       std::unique_ptr<zdl::DlSystem::ITensor>& input) {
    static zdl::DlSystem::TensorMap outputTensorMap;
    snpe->execute(input.get(), outputTensorMap);
    zdl::DlSystem::StringList tensorNames = outputTensorMap.getTensorNames();
    const char* name = tensorNames.at(0); // only read the first output
    auto tensorPtr = outputTensorMap.getTensor(name);
    return tensorPtr;
}
But I don't know how to combine two input tensors with the executeNetwork function. Any help would be appreciated.
Posted on 2020-02-21 10:24:02
You can use a zdl::DlSystem::TensorMap and pass it to the execute function.
zdl::DlSystem::TensorMap inputTensorMap;
zdl::DlSystem::TensorMap outputTensorMap;
zdl::DlSystem::ITensor *inputTensor1;
zdl::DlSystem::ITensor *inputTensor2;
inputTensorMap.add("input_1", inputTensor1);
inputTensorMap.add("input_2", inputTensor2);
model->execute(inputTensorMap, outputTensorMap);
Note that afterwards you have to iterate over the inputTensorMap and free the tensors yourself with delete.
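Putting this answer together with the question's helpers, a two-input run_model might look like the sketch below. This is an untested sketch, not a definitive implementation: it assumes the SNPE SDK headers and a snpe instance built as in the question's initializeSNPE, and it assumes the names reported by getInputTensorNames() are in the order input a, then input b ("input_1"/"input_2" in the comments are just the Keras defaults; check what your converted .dlc actually reports).

// Sketch only: assumes SNPE SDK headers are available and `snpe` was built
// as in the question's initializeSNPE().
#include <memory>
#include <vector>
#include <algorithm>

// Create a tensor for the input at `idx`, filled from `vec` (the question's
// loadInputTensor, generalized to any input index -- `makeInput` is a
// hypothetical helper name, not an SNPE API).
std::unique_ptr<zdl::DlSystem::ITensor> makeInput(
        std::unique_ptr<zdl::SNPE::SNPE> &snpe,
        size_t idx, const std::vector<float> &vec) {
    const auto &names_opt = snpe->getInputTensorNames();
    if (!names_opt) throw std::runtime_error("Error obtaining input tensor names");
    const auto &names = *names_opt;
    const auto &dims_opt = snpe->getInputDimensions(names.at(idx));
    auto tensor = zdl::SNPE::SNPEFactory::getTensorFactory().createTensor(*dims_opt);
    std::copy(vec.begin(), vec.end(), tensor->begin());
    return tensor;
}

float run_model(std::unique_ptr<zdl::SNPE::SNPE> &snpe, float a, float b) {
    auto in1 = makeInput(snpe, 0, {a});
    auto in2 = makeInput(snpe, 1, {b});

    const auto &names = *snpe->getInputTensorNames();
    zdl::DlSystem::TensorMap inputTensorMap;
    zdl::DlSystem::TensorMap outputTensorMap;
    inputTensorMap.add(names.at(0), in1.get());  // e.g. "input_1"
    inputTensorMap.add(names.at(1), in2.get());  // e.g. "input_2"

    snpe->execute(inputTensorMap, outputTensorMap);

    // Read the first value of the first output tensor, as in returnOutput().
    const char *outName = outputTensorMap.getTensorNames().at(0);
    return *outputTensorMap.getTensor(outName)->cbegin();
    // in1/in2 go out of scope here; since they are unique_ptrs returned by
    // createTensor, no manual delete is needed in this variant.
}

Keeping each input in a std::unique_ptr and only handing raw pointers to the TensorMap sidesteps the manual delete the answer warns about, since the tensors are destroyed when run_model returns.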
https://stackoverflow.com/questions/58121729