Below is some code I wrote. It takes a pretrained model as an argument (vgg, resnet, densenet, etc.) and returns the model with every ReLU's `inplace` flag set to `False`. It was written after testing many different specific architectures.
I would like to rewrite it in a more compact way, since this version does not look optimal to me. However, I am not a developer and do not have much coding experience. Could you help?
import torch.nn as nn

def ReLU_inplace_to_False(model):
    for module in model._modules.values():
        if isinstance(module, nn.ReLU):
            module.inplace = False
        try:
            for layer in module:
                if isinstance(layer, nn.ReLU):
                    layer.inplace = False
                try:
                    for sublayer in layer._modules.values():
                        if isinstance(sublayer, nn.ReLU):
                            sublayer.inplace = False
                        try:
                            for subsublayer in sublayer._modules.values():
                                if isinstance(subsublayer, nn.ReLU):
                                    subsublayer.inplace = False
                                try:
                                    for subsubsublayer in subsublayer._modules.values():
                                        if isinstance(subsubsublayer, nn.ReLU):
                                            subsubsublayer.inplace = False
                                except:
                                    pass
                        except:
                            pass
                except:
                    pass
        except:
            pass
    return model

Posted on 2022-10-19 19:55:53
This calls for a recursive solution.
def ReLU_inplace_to_False(module):
    for layer in module._modules.values():
        if isinstance(layer, nn.ReLU):
            layer.inplace = False
        ReLU_inplace_to_False(layer)

https://stackoverflow.com/questions/74124725
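As a side note, PyTorch's built-in `Module.modules()` iterator already walks the module tree recursively, so the same effect can be achieved without hand-written recursion. A minimal sketch, where the small `nn.Sequential` model is a hypothetical stand-in for vgg/resnet/densenet:

```python
import torch.nn as nn

def relu_inplace_to_false(model):
    # Module.modules() yields the module itself and every
    # submodule recursively, so no manual nesting is needed.
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            m.inplace = False
    return model

# Hypothetical nested model standing in for a real pretrained network:
model = nn.Sequential(
    nn.Linear(4, 4),
    nn.ReLU(inplace=True),
    nn.Sequential(nn.Linear(4, 4), nn.ReLU(inplace=True)),
)
relu_inplace_to_false(model)
print(all(not m.inplace
          for m in model.modules()
          if isinstance(m, nn.ReLU)))  # prints True
```

Because `modules()` handles arbitrary nesting depth, this version works for any architecture, not just the ones tested by hand.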