I'm trying to follow the PyTorch guide on loading models in C++.
The following example code works:
import torch
import torchvision
# An instance of your model.
model = torchvision.models.resnet18()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
However, when I try other networks, such as SqueezeNet (or AlexNet), my code fails:
sq = torchvision.models.squeezenet1_0(pretrained=True)
traced_script_module = torch.jit.trace(sq, example)
/home/fabio/.local/lib/python3.6/site-packages/torch/jit/__init__.py:642: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function.
Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 785] (3.1476082801818848 vs. 3.945478677749634) and 999 other locations (100.00%)
_check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)
Posted 2018-12-18 01:25:35
I just found out that models loaded from torchvision.models are in training mode by default. Both AlexNet and SqueezeNet contain Dropout layers, which make inference nondeterministic in training mode. Simply switching to eval mode fixes the problem:
sq = torchvision.models.squeezenet1_0(pretrained=True)
sq.eval()
traced_script_module = torch.jit.trace(sq, example)
https://stackoverflow.com/questions/53820175
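The nondeterminism itself is easy to verify in isolation: in training mode a Dropout layer randomly zeroes elements on every call, while in eval mode it is a no-op. A minimal sketch (the layer and tensor here are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

drop.train()  # training mode: dropout is active, outputs vary between calls
a = drop(x)
b = drop(x)

drop.eval()   # eval mode: dropout is a no-op, outputs are deterministic
c = drop(x)
d = drop(x)
print(torch.equal(c, d))  # eval-mode outputs are identical
```

This is exactly why `torch.jit.trace`'s consistency check (`_check_trace`) reports mismatched outputs when tracing a model left in training mode.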