I have a question about using Inception V3 as a feature extractor with a binary classifier in PyTorch. I updated the primary and auxiliary nets of Inception to have binary classes (as done in https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html), but I get an error.
import torch
import torch.nn as nn
from torchvision import models

# Parameters for Inception V3
num_classes = 2
model_ft = models.inception_v3(pretrained=True)
# set_parameter_requires_grad(model_ft, feature_extract)

# Handle the auxiliary net
num_ftrs = model_ft.AuxLogits.fc.in_features
model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)

# Handle the primary net
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, num_classes)
# input_size = 299

# Simulate data input
x = torch.rand([64, 3, 299, 299])

# Create model with Inception backbone
backbone = model_ft
num_filters = backbone.fc.in_features
layers = list(backbone.children())[:-1]
feature_extractor = nn.Sequential(*layers)

# Use the pretrained model to classify damage (2 classes)
num_target_classes = 2
classifier = nn.Linear(num_filters, num_target_classes)

feature_extractor.eval()
with torch.no_grad():
    representations = feature_extractor(x).flatten(1)
x = classifier(representations)
But I get this error:
RuntimeError                              Traceback (most recent call last)
<ipython-input-54-c2be64b8a99e> in <module>()
     11 feature_extractor.eval()
     12 with torch.no_grad():
---> 13     representations = feature_extractor(x)
     14 x = classifier(representations)

9 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    442                             _pair(0), self.dilation, self.groups)
    443         return F.conv2d(input, weight, bias, self.stride,
--> 444                         self.padding, self.dilation, self.groups)
    445
    446     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [64, 2]
Before I updated the number of classes to 2, when it was still 1000, I got the same error but with [64, 1000]. This approach of creating a backbone and adding a classifier on top works for ResNet, but not here. I think it is because of the auxiliary net structure, but I don't know how to update it to handle the dual outputs. Thanks.
Posted on 2022-05-27 05:23:31
Extracting layers via the children function in the line layers = list(backbone.children())[:-1] only copies the modules from backbone into feature_extractor; it does not copy the operations performed inside the forward function.
Consider the following code:
class Example(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.avg = torch.nn.AdaptiveAvgPool2d((1, 1))
        self.linear = torch.nn.Linear(10, 1)

    def forward(self, x):
        out = self.avg(x)
        out = out.squeeze()
        out = self.linear(out)
        return out

x = torch.randn(5, 10, 12, 12)
model = Example()
y = model(x)  # works fine

new_model = torch.nn.Sequential(*list(model.children()))
y = new_model(x)  # error
The modules model and new_model contain the same blocks, but they do not work the same way. In new_model, the output of the pooling layer is never squeezed, so the shape of the input reaching the linear layer violates its assumptions, which causes the error.
In your example, the backbone-splitting steps are redundant: you already created a new final layer inside the Inception V3 module with the line model_ft.fc = nn.Linear(num_ftrs, num_classes). So replace the last block of code with the following, which should work fine:
model_ft.eval()  # eval mode, so the auxiliary output is not returned
with torch.no_grad():
    x = model_ft(x)

https://stackoverflow.com/questions/72398383