
Skip computing pre-trained model features at every epoch

Stack Overflow user
Asked on 2021-12-05 17:22:56
1 answer · 85 views · 0 followers · 1 vote

I'm used to working with TensorFlow/Keras, but now I'm forced to switch to PyTorch for flexibility reasons. However, I can't seem to find PyTorch code that trains only the classification layers on top of a pre-trained model. Isn't this a common practice? Right now I have to wait for the feature extraction of the same data to be recomputed at every epoch. Is there a way to avoid that?

Code language: python
# in tensorflow - keras :
import joblib
from tensorflow import keras
from tensorflow.keras.applications import MobileNetV2, mobilenet_v2
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Load a pre-trained network without its classification head
pretrained_nn = MobileNetV2(weights='imagenet', include_top=False, input_shape=(Image_size, Image_size, 3))

# Extract features of the training data only once
X = mobilenet_v2.preprocess_input(X)
features_x = pretrained_nn.predict(X)

# Save features for later use
joblib.dump(features_x, "features_x.dat")

# Create a model and add layers
model = Sequential()
model.add(Flatten(input_shape=features_x.shape[1:]))
model.add(Dense(100, activation='relu', use_bias=True))
model.add(Dense(Y.shape[1], activation='softmax', use_bias=False))

# Compile & train only the fully connected model
model.compile(loss="categorical_crossentropy", optimizer=keras.optimizers.Adam(learning_rate=0.001))
history = model.fit(features_x, Y_train, batch_size=16, epochs=Epochs)

1 Answer

Stack Overflow user

Accepted answer

Posted on 2021-12-05 22:00:33

Assuming you already have the features features_x, you can create and train the model like this:

Code language: python
# create a loader for the data
dataset = torch.utils.data.TensorDataset(features_x, Y_train)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# define the classification model
in_features = features_x.flatten(1).size(1)
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(in_features=in_features, out_features=100, bias=True),
    torch.nn.ReLU(),
    torch.nn.Linear(in_features=100, out_features=Y.shape[1], bias=False) # Softmax is handled by CrossEntropyLoss below
)
model.train()

# define the optimizer and loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_function = torch.nn.CrossEntropyLoss()

# training loop
for e in range(Epochs):
    for batch_x, batch_y in loader: # note: no enumerate() here, it would yield (index, batch) pairs
        optimizer.zero_grad() # clear gradients from previous batch
        out = model(batch_x)  # forward pass
        loss = loss_function(out, batch_y) # compute loss
        loss.backward() # backpropagate, get gradients
        optimizer.step() # update model weights
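The answer assumes features_x already exists. A minimal sketch of that missing step in PyTorch, showing how to run the frozen backbone only once under `torch.no_grad()`. The small `Sequential` backbone and the dummy tensors here are stand-ins for illustration; in practice you would load a pre-trained network (e.g. from torchvision) and your real data:

```python
import torch

# Stand-in for a pre-trained feature extractor (assumption, for illustration;
# in practice load e.g. a torchvision MobileNetV2 and take its .features)
backbone = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
)
backbone.eval()               # inference mode: fixes BatchNorm / Dropout behaviour
for p in backbone.parameters():
    p.requires_grad = False   # freeze: no gradients through the backbone

X = torch.randn(4, 3, 32, 32) # dummy images, shape (N, C, H, W)
with torch.no_grad():         # no autograd graph -> faster, less memory
    features_x = backbone(X)  # computed once, reusable every epoch

# If Y_train is one-hot (as in the Keras code), CrossEntropyLoss expects
# integer class indices instead, so convert with argmax:
Y_onehot = torch.eye(5)[torch.tensor([0, 2, 4, 1])]  # dummy one-hot labels
Y_train = Y_onehot.argmax(dim=1)                     # -> tensor([0, 2, 4, 1])

# features_x and Y_train can now be saved (torch.save) and fed to the
# TensorDataset/DataLoader from the answer above.
```

Because the backbone is run once up front, each epoch only trains the small fully connected head, which is the behaviour the question asks for.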
Votes: 0
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's engine.
Original link:

https://stackoverflow.com/questions/70236736
