So I created a for loop to run over various batch sizes, where each iteration opens and closes a Neptune run. The first run works fine, but in the following runs the accuracy is not logged to Neptune, and Python does not throw an error. Can anyone see where the problem is?
for i in range(len(percentage)):
    run = neptune.init(
        project="xxx",
        api_token="xxx",
    )
    epochs = 600
    batch_perc = percentage[i]
    lr = 0.001
    sb = 64  # round((43249*batch_perc)*0.00185)
    params = {
        'lr': lr,
        'bs': sb,
        'epochs': epochs,
        'batch %': batch_perc
    }
    run['parameters'] = params
    torch.manual_seed(12345)
    td = 43249 * batch_perc
    vd = 0.1 * (43249 - td) + td
    train_dataset = dataset[:round(td)]
    val_dataset = dataset[round(td):round(vd)]
    test_dataset = dataset[round(vd):]
    print(f'Number of training graphs: {len(train_dataset)}')
    run['train'] = len(train_dataset)
    print(f'Number of validation graphs: {len(val_dataset)}')
    run['val'] = len(val_dataset)
    print(f'Number of test graphs: {len(test_dataset)}')
    run['test'] = len(test_dataset)
    train_loader = DataLoader(train_dataset, batch_size=sb, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=sb, shuffle=True)
    test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)
    model = GCN(hidden_channels=64).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(1, epochs):
        train()
        train_acc = test(train_loader)
        run['training/batch/acc'].log(train_acc)
        val_acc = test(val_loader)
        run['training/batch/val'].log(val_acc)
Posted on 2022-09-26 09:25:20
Prince here,
Try calling the stop() method to terminate the previous run. Right now you are creating new run objects without ever closing them, which can cause exactly this kind of problem.
for i in range(len(percentage)):
    run = neptune.init(
        project="xxx",
        api_token="xxx",
    )
    run['parameters'] = params
    run['train'] = len(train_dataset)
    run['val'] = len(val_dataset)
    run['test'] = len(test_dataset)
    ...
    for epoch in range(1, epochs):
        ...
        run['training/batch/acc'].log(train_acc)
        run['training/batch/val'].log(val_acc)
    run.stop()

https://stackoverflow.com/questions/73833935
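To guarantee that each run is closed even when an iteration raises an exception, the per-iteration open/log/stop lifecycle can be wrapped in try/finally. A minimal sketch of that pattern, using a hypothetical FakeRun class as a stand-in for the object neptune.init returns (only the .log/.stop shape is imitated, so the sketch runs without Neptune installed):

```python
# Sketch of the open/log/stop lifecycle, one fresh run per loop iteration.
# FakeRun is a hypothetical stand-in for a Neptune run object.
class FakeRun:
    def __init__(self):
        self.logged = []
        self.stopped = False

    def log(self, value):
        self.logged.append(value)

    def stop(self):
        # In Neptune, stop() flushes pending data and ends the run.
        self.stopped = True

def run_experiments(percentages, init_run):
    finished = []
    for pct in percentages:
        run = init_run()   # open a fresh run for this batch-size setting
        try:
            run.log(pct)   # stands in for run['training/batch/acc'].log(...)
        finally:
            run.stop()     # always close the run, even if training fails
        finished.append(run)
    return finished

runs = run_experiments([0.1, 0.5, 1.0], FakeRun)
```

With this structure no run object is left open between iterations, which is the failure mode described in the answer above.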