I have implemented a tiny CNN sample in both Keras and PyTorch. When I print the summaries of the two networks, the total number of trainable parameters is the same, but the total parameter counts and the number of parameters used for batch normalization don't match.
Here is the CNN implemented in Keras:
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, Flatten, Dense
from tensorflow.keras.models import Model

IMG_SIZE = 64

inputs = Input(shape=(64, 64, 1))  # Channels last: (NHWC)
model = Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu',
               input_shape=(IMG_SIZE, IMG_SIZE, 1))(inputs)
model = BatchNormalization(momentum=0.15, axis=-1)(model)
model = Flatten()(model)
dense = Dense(100, activation="relu")(model)
head_root = Dense(10, activation='softmax')(dense)

model = Model(inputs=inputs, outputs=head_root)
model.summary()

The summary printed for the above model is as follows:
Model: "model_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_9 (InputLayer) (None, 64, 64, 1) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 64, 64, 32) 320
_________________________________________________________________
batch_normalization_2 (Batch (None, 64, 64, 32) 128
_________________________________________________________________
flatten_3 (Flatten) (None, 131072) 0
_________________________________________________________________
dense_11 (Dense) (None, 100) 13107300
_________________________________________________________________
dense_12 (Dense) (None, 10) 1010
=================================================================
Total params: 13,108,758
Trainable params: 13,108,694
Non-trainable params: 64
_________________________________________________________________

Below is the implementation of the same model architecture in PyTorch:
# Image format: channels first (NCHW) in PyTorch
import torch.nn as nn

class CustomModel(nn.Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(3, 3), padding=1),
            nn.ReLU(True),
            nn.BatchNorm2d(num_features=32),
        )
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(in_features=131072, out_features=100)
        self.fc2 = nn.Linear(in_features=100, out_features=10)

    def forward(self, x):
        output = self.layer1(x)
        output = self.flatten(output)
        output = self.fc1(output)
        output = self.fc2(output)
        return output

Here is the summary printed for the above model:
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 32, 64, 64]             320
              ReLU-2           [-1, 32, 64, 64]               0
       BatchNorm2d-3           [-1, 32, 64, 64]              64
           Flatten-4               [-1, 131072]               0
            Linear-5                  [-1, 100]      13,107,300
            Linear-6                   [-1, 10]           1,010
================================================================
Total params: 13,108,694
Trainable params: 13,108,694
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.02
Forward/backward pass size (MB): 4.00
Params size (MB): 50.01
Estimated Total Size (MB): 54.02
----------------------------------------------------------------

As you can see in the results above, batch normalization in Keras has more parameters than in PyTorch (2x, to be exact). So what is different in the CNN architectures above? If they are equivalent, what am I missing here?
Posted on 2020-02-05 16:21:59
Keras treats many things that will be "saved/loaded" with a layer as parameters (weights).
While both implementations naturally accumulate the "mean" and "variance" of the batches, these values are not trainable with backpropagation.
Nevertheless, they are updated every batch: Keras treats them as non-trainable weights, while PyTorch simply hides them. The term "non-trainable" here means "not trainable by backpropagation", but it does not mean the values are frozen.
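A quick way to see this on the PyTorch side is to inspect a BatchNorm2d layer directly (a minimal sketch, assuming a standard torch install): the learnable scale and offset appear under named_parameters(), while the running statistics are registered as buffers, which summary tools like torchsummary do not count:

import torch.nn as nn

bn = nn.BatchNorm2d(num_features=32)

# Learnable parameters: scale (weight) and offset (bias)
print([name for name, _ in bn.named_parameters()])
# ['weight', 'bias']  -> 2 * 32 = 64 counted parameters

# "Hidden" running statistics: registered as buffers, not parameters
print([name for name, _ in bn.named_buffers()])
# ['running_mean', 'running_var', 'num_batches_tracked']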
In total, these are the 4 groups of "weights" of a BatchNormalization layer, each sized along the selected axis (default axis=-1, size 32 for this layer):

- scale (32) - trainable
- offset (32) - trainable
- accumulated means (32) - non-trainable, but updated every batch
- accumulated std (32) - non-trainable, but updated every batch

The advantage of doing it this way in Keras is that when you save the layer, you also save the mean and variance values, just as you automatically save all the other weights in the layer. When you load the layer, these weights are loaded along with it.
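The same split is easy to verify on the Keras side (a minimal sketch, assuming tf.keras; the layer must be called once so that its weights are built):

from tensorflow.keras.layers import BatchNormalization, Input

inp = Input(shape=(64, 64, 32))
bn = BatchNormalization(axis=-1)
out = bn(inp)  # calling the layer builds its weights

# Trainable: gamma (scale) and beta (offset) -> 2 * 32 = 64
print([w.name for w in bn.trainable_weights])

# Non-trainable, but updated every batch:
# moving_mean and moving_variance -> 2 * 32 = 64
print([w.name for w in bn.non_trainable_weights])

Keras's model.summary() counts all four groups, 4 * 32 = 128, which is exactly twice PyTorch's reported 64; the two moving statistics are what show up as "Non-trainable params: 64" in the Keras summary.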
https://stackoverflow.com/questions/60079783