I am building a neural network and I don't know how to access the model weights of each layer.
I tried
model.input_size.weight
Code:
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))

I expected to get the weights, but instead I got

'Sequential' object has no attribute 'input_size'
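(For context on the error: `nn.Sequential` registers its children under numeric names, so there is no `input_size` attribute to look up. A minimal sketch, assuming PyTorch is installed, that shows the names the container actually exposes:)

```python
import torch.nn as nn

input_size = 784
hidden_sizes = [128, 64]
output_size = 10

model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))

# nn.Sequential stores submodules under numeric string names,
# which is why `model.input_size` raises AttributeError:
print([name for name, _ in model.named_children()])  # ['0', '1', '2', '3', '4', '5']
```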
Posted on 2019-06-05 12:58:21
I have tried many approaches, and it seems the only way is to name each layer by passing an OrderedDict:
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(input_size, hidden_sizes[0])),
    ('relu1', nn.ReLU()),
    ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
    ('relu2', nn.ReLU()),
    ('output', nn.Linear(hidden_sizes[1], output_size)),
    ('softmax', nn.Softmax(dim=1))]))

So, to access the weights of each layer, we call it by its unique layer name.
For example, to access the weights of the first layer: model.fc1.weight
Parameter containing:
tensor([[-7.3584e-03, -2.3753e-02, -2.2565e-02, ..., 2.1965e-02,
1.0699e-02, -2.8968e-02],
[ 2.2930e-02, -2.4317e-02, 2.9939e-02, ..., 1.1536e-02,
1.9830e-02, -1.4294e-02],
[ 3.0891e-02, 2.5781e-02, -2.5248e-02, ..., -1.5813e-02,
6.1708e-03, -1.8673e-02],
...,
[-1.2596e-03, -1.2320e-05, 1.9106e-02, ..., 2.1987e-02,
-3.3817e-02, -9.4880e-03],
[ 1.4234e-02, 2.1246e-02, -1.0369e-02, ..., -1.2366e-02,
-4.7024e-04, -2.5259e-02],
[ 7.5356e-03, 3.4400e-02, -1.0673e-02, ..., 2.8880e-02,
-1.0365e-02, -1.2916e-02]], requires_grad=True)

Posted on 2020-02-01 20:27:08
If you print out the model with print(model), you will get
Sequential(
(0): Linear(in_features=784, out_features=128, bias=True)
(1): ReLU()
(2): Linear(in_features=128, out_features=64, bias=True)
(3): ReLU()
(4): Linear(in_features=64, out_features=10, bias=True)
  (5): Softmax(dim=1)
)

Now you have access to all of the layer indices, so you can get, for example, the weights of the last Linear layer with model[4].weight.
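Index-based access can be sketched like this (a minimal example, assuming PyTorch is installed; note that only the Linear modules carry a `weight`, the ReLU and Softmax modules do not):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 10), nn.Softmax(dim=1))

# Indexing the Sequential returns the submodule at that position,
# so model[4] is the last Linear layer:
print(model[4].weight.shape)  # torch.Size([10, 64])

# List every layer that actually has weights:
for i, layer in enumerate(model):
    if hasattr(layer, 'weight'):
        print(i, tuple(layer.weight.shape))
```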
Posted on 2019-06-04 14:43:31
According to the official discussion forum here, you can access the weights of a specific module inside nn.Sequential():

model.layer[0].weight  # for accessing weights of first layer wrapped in nn.Sequential()

https://stackoverflow.com/questions/56435961
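(The `model.layer[0]` form above assumes the Sequential was assigned to an attribute named `layer` inside a larger module. A more general approach, a minimal sketch assuming PyTorch is installed, is `named_parameters()` or `state_dict()`, which list every parameter regardless of how the layers are named or nested:)

```python
import torch.nn as nn
from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(784, 128)),
    ('relu1', nn.ReLU()),
    ('fc2', nn.Linear(128, 64)),
    ('relu2', nn.ReLU()),
    ('output', nn.Linear(64, 10))]))

# named_parameters() yields ('fc1.weight', tensor), ('fc1.bias', tensor), ...
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# state_dict() exposes the same tensors keyed by the same names:
w = model.state_dict()['fc1.weight']
print(w.shape)  # torch.Size([128, 784])
```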