
TypeError: 'Adam' object is not callable

Asked by a Stack Overflow user on 2022-10-28 16:25:00
1 answer · 133 views · 0 followers · score 2

I am trying to use this approach to fine-tune a pretrained model with a different learning rate for each group of layers. I applied the same approach, but I get this error:

TypeError: 'Adam' object is not callable
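This TypeError generally means an Adam *instance* is being used where a callable is expected, for example by calling the optimizer object itself as if it were a function. A minimal sketch (not the asker's code) that reproduces the message:

```python
import torch
from torch import nn

model = nn.Linear(2, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

try:
    # Calling the optimizer object like a function triggers the error
    opt(model.parameters())
except TypeError as e:
    msg = str(e)  # "'Adam' object is not callable"
```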

Here is the code for the training loop:

import torch
from typing import Dict, List, Tuple

def train_step(model: torch.nn.Module, 
               dataloader: torch.utils.data.DataLoader, 
               loss_fn: torch.nn.Module, 
               optimizer: torch.optim.Optimizer,
               device: torch.device):

  # Put model in train mode
  model.train()

  # Setup train loss and train accuracy values
  train_loss, train_acc = 0, 0

  # Loop through data loader data batches
  for batch, (X, y) in enumerate(dataloader):

      # Send data to target device
      X, y = X.to(device), y.to(device)

      # 1. Forward pass
      y_pred = model(X)

      # 2. Calculate and accumulate loss
      loss = loss_fn(y_pred, y)
      train_loss += loss.item() 

      # 3. Zero gradients left over from the previous batch
      optimizer.zero_grad()

      # 4. Loss backward
      loss.backward()

      # 5. Optimizer step (after backward, so fresh gradients are applied)
      optimizer.step()

      # Calculate and accumulate accuracy metric across all batches
      y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)
      train_acc += (y_pred_class == y).sum().item()/len(y_pred)

  # Adjust metrics to get average loss and accuracy per batch 
  train_loss = train_loss / len(dataloader)
  train_acc = train_acc / len(dataloader)
  return train_loss, train_acc
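For reference, the standard PyTorch update order inside the batch loop is zero_grad, then backward, then step; a self-contained sketch with a toy model and synthetic data (not from the question):

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Linear(4, 3)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(8, 4)          # toy batch of 8 samples
y = torch.randint(0, 3, (8,))  # toy class labels

optimizer.zero_grad()          # 1. clear stale gradients
loss = loss_fn(model(X), y)    # 2. forward pass + loss
loss.backward()                # 3. populate parameter .grad
optimizer.step()               # 4. apply the Adam update
```

Calling `step()` before `backward()` means the update sees no fresh gradients, so this ordering matters.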
And here is the code that sets up and runs training:
NUM_EPOCHS = 100

# Recreate an instance of TinyVGG
model_0 = model
device = device

# Setup loss function and optimizer
loss_fn = nn.CrossEntropyLoss(weight = class_weights)
# params= model_0.parameters()

optimizer = torch.optim.Adam(parameters_1) 
# optimizer = torch.optim.SGD(model_0.parameters(), lr=0.01, momentum=0.9)
# optimizer = torch.optim.SGD(model_0.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-6)
# scheduler = StepLR(optimizer, step_size=20, gamma=0.5)

SaveBestModel()

# Start the timer
from timeit import default_timer as timer 
start_time = timer()

# Train model_0 
model_0_results = train(model=model_0, 
                        train_dataloader=train_dataloader,
                        test_dataloader=test_dataloader,
                        optimizer=optimizer,
                        loss_fn=loss_fn,
                        epochs=NUM_EPOCHS,
                        device=device)

I tried passing the model's parameters and learning rate exactly as I would for a plain Adam optimizer, but that does not work either.


1 Answer

Answered by a Stack Overflow user on 2022-10-29 13:11:50

I assume parameters_1 is some set of parameters. Try unpacking it: optimizer = torch.optim.Adam(**parameters_1)
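If the underlying goal is a different learning rate per group of layers, `torch.optim.Adam` also accepts a list of parameter-group dicts passed positionally (the layer indices and rates below are hypothetical, not from the question):

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))

# One dict per group; "params" is required, other keys override Adam's defaults
param_groups = [
    {"params": model[0].parameters(), "lr": 1e-4},  # earlier layer: smaller lr
    {"params": model[2].parameters(), "lr": 1e-3},  # later layer: larger lr
]
optimizer = torch.optim.Adam(param_groups)
n_groups = len(optimizer.param_groups)  # 2
```

Note that `torch.optim.Adam(**parameters_1)` only works if `parameters_1` is a dict of keyword arguments (e.g. `{"params": ..., "lr": ...}`); if it is a list of group dicts like the one above, pass it positionally instead.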

Score 0
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain engine.
Original link: https://stackoverflow.com/questions/74238235