Keras vs. PyTorch: PyTorch model overfits heavily

Asked by a Stack Overflow user on 2018-04-29 02:21:28
2 answers · 3K views · Score: 32

For several days now I have been trying to replicate my Keras training results with PyTorch. Whatever I do, the PyTorch model overfits the validation set earlier and more strongly than the Keras one. For PyTorch I use the same Xception code from https://github.com/Cadene/pretrained-models.pytorch.

Data loading, augmentation, validation, and the training schedule are equivalent. Am I missing something obvious? There must be a general problem somewhere. I have tried thousands of different module constellations, but nothing seems to come close to the Keras training. Can somebody help?

Keras model: val accuracy > 90%

# imports assumed by this snippet
from keras import applications
from keras.models import Model
from keras.layers import Dense, Dropout, GlobalMaxPooling2D

# base model
base_model = applications.Xception(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))

# top model
x = base_model.output
x = GlobalMaxPooling2D()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(4, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# Compile model
from keras import optimizers
adam = optimizers.Adam(lr=0.0001)
model.compile(loss='categorical_crossentropy',
              optimizer=adam, metrics=['accuracy'])

# LROnPlateau etc. with equivalent settings as pytorch

PyTorch model: val accuracy ~81%

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim import lr_scheduler
from xception import xception

# modified from https://github.com/Cadene/pretrained-models.pytorch
class XCeption(nn.Module):
    def __init__(self, num_classes):
        super(XCeption, self).__init__()

        original_model = xception(pretrained="imagenet")

        self.features=nn.Sequential(*list(original_model.children())[:-1])
        self.last_linear = nn.Sequential(
             nn.Linear(original_model.last_linear.in_features, 512),
             nn.ReLU(),
             nn.Dropout(p=0.5),
             nn.Linear(512, num_classes)
        )

    def logits(self, features):
        x = F.relu(features)
        x = F.adaptive_max_pool2d(x, (1, 1))
        x = x.view(x.size(0), -1)
        x = self.last_linear(x)
        return x

    def forward(self, input):
        x = self.features(input)
        x = self.logits(x)
        return x 

device = torch.device("cuda")
model=XCeption(len(class_names))
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)

# size_average=False is the deprecated spelling of reduction='sum':
# the loss is summed over the batch instead of averaged
criterion = nn.CrossEntropyLoss(size_average=False)
optimizer = optim.Adam(model.parameters(), lr=0.0001)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)
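One detail worth flagging in the setup above: `size_average=False` is the deprecated spelling of `reduction='sum'`, so the loss (and therefore the gradients Adam sees) is batch-size times larger than Keras's default mean reduction. A minimal check (the batch size of 8 is an arbitrary choice for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(8, 4)             # 8 samples, 4 classes
targets = torch.randint(0, 4, (8,))

loss_sum = nn.CrossEntropyLoss(reduction='sum')(logits, targets)    # what size_average=False computes
loss_mean = nn.CrossEntropyLoss(reduction='mean')(logits, targets)  # the default, like Keras

# the summed loss is exactly batch_size times the mean loss
print(loss_sum.item() / loss_mean.item())  # ≈ 8.0
```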

Thanks a lot!

Update: settings used:

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)

model = train_model(model, train_loader, val_loader, 
                        criterion, optimizer, scheduler, 
                        batch_size, trainmult=8, valmult=10, 
                        num_epochs=200, epochs_top=0)

Cleaned-up training function:

def train_model(model, train_loader, val_loader, criterion, optimizer, scheduler, batch_size, trainmult=1, valmult=1, num_epochs=None, epochs_top=0):
    for epoch in range(num_epochs):
        for phase in ['train', 'val']:
            running_loss = 0.0
            running_acc = 0
            total = 0
            # Iterate over data.
            if phase == "train":
                model.train(True)  # Set model to training mode
                for i in range(trainmult):
                    for data in train_loader:
                        # get the inputs
                        inputs, labels = data
                        inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
                        # zero the parameter gradients
                        optimizer.zero_grad()
                        # forward
                        outputs = model(inputs)
                        _, preds = torch.max(outputs, 1)
                        loss = criterion(outputs, labels)
                        # backward + optimize only in the training phase
                        loss.backward()
                        optimizer.step()
                        # statistics
                        total += labels.size(0)
                        running_loss += loss.item() * labels.size(0)
                        running_acc += torch.sum(preds == labels)
                        train_loss = running_loss / total
                        train_acc = running_acc.double() / total
            else:
                model.train(False)  # Set model to evaluate mode
                with torch.no_grad():
                    for i in range(valmult):
                        for data in val_loader:
                            # get the inputs
                            inputs, labels = data
                            inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
                            # forward
                            outputs = model(inputs)
                            _, preds = torch.max(outputs, 1)
                            loss = criterion(outputs, labels)
                            # statistics
                            total += labels.size(0)
                            running_loss += loss.item() * labels.size(0)
                            running_acc += torch.sum(preds == labels)
                            val_loss = running_loss / total
                            val_acc = running_acc.double() / total
                scheduler.step(val_loss)
    return model

2 Answers

Stack Overflow user

Answered on 2020-03-05 07:16:06

This may be due to the type of weight initialization you are using; otherwise this should not happen. Try using the same initializer in both models.
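For reference, Keras `Dense` layers default to a Glorot-uniform kernel and zero biases, while PyTorch's `nn.Linear` uses a different (Kaiming-style) scheme. A sketch of forcing the PyTorch head to the Keras defaults; the head below mirrors the question's architecture, and the input width of 2048 is an assumption based on Xception's pooled feature size:

```python
import torch.nn as nn

def init_like_keras(m):
    # Keras Dense defaults: kernel_initializer='glorot_uniform', bias_initializer='zeros'
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)  # Glorot uniform
        nn.init.zeros_(m.bias)

head = nn.Sequential(
    nn.Linear(2048, 512),  # 2048 assumed: Xception's global-pooled feature size
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, 4),
)
head.apply(init_like_keras)  # applies the function recursively to every submodule
```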

Score: 2

Stack Overflow user

Answered on 2020-07-20 03:19:15

self.features=nn.Sequential(*list(original_model.children())[:-1])

Are you sure this line re-instantiates your model in exactly the same way? You are using nn.Sequential instead of the original Xception model's forward function. If there is anything in that forward function that is not exactly reproduced by nn.Sequential, it will not reproduce the same performance.
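To see why this matters: `list(model.children())` only collects submodules, so anything the original `forward` does functionally (`F.relu`, pooling, reshapes) is silently dropped when the model is rebuilt with `nn.Sequential`. A toy illustration (the `Toy` module is made up for the demo, with weights fixed so the difference is deterministic):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        # functional ReLU: part of forward, but NOT a child module
        return F.relu(self.fc(x))

m = Toy()
with torch.no_grad():
    m.fc.weight.fill_(-1.0)  # force negative pre-activations
    m.fc.bias.zero_()

seq = nn.Sequential(*list(m.children()))  # contains only the Linear layer

x = torch.ones(2, 4)
print(m(x))    # ReLU applied: all zeros
print(seq(x))  # ReLU lost: all -4.0
```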

Instead of wrapping it in Sequential, you could simply change the following:

my_model = Xception()
# load weights before you change the architecture
my_model = load_weights(path_to_weights)
# overwrite the original's last_linear with your own
my_model.last_linear = nn.Sequential(
    nn.Linear(my_model.last_linear.in_features, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, num_classes)
)
Score: 1
Source: Stack Overflow. Original question:

https://stackoverflow.com/questions/50079735
