I'm just getting started with deep learning, and I've built a graph convolutional network. I used 5-fold cross-validation. After plotting train_loss (blue) and validate_loss (orange) together, I got this figure.
As you can see, judging from the trend of the validate_loss curve, my network doesn't seem to be learning anything. (Is it the data? The GCN architecture? The learning rate?)
Could you help me pinpoint the bug?
I'd really appreciate it! Please let me know if anything I've said is unclear.
Here are my network and parameters:

import torch as th
import torch.nn as nn
import torch.nn.functional as F

class Scorer(nn.Module):
    """
    Three conv_layers and two fc_layers with Dropout
    """
    def __init__(self):
        super(Scorer, self).__init__()
        self.conv_layer1 = GraphConvNet(5, 64)
        self.conv_layer2 = GraphConvNet(64, 128)
        self.conv_layer3 = GraphConvNet(128, 256)  # (I have tried deleting conv_layer3)
        self.fc_layer1 = nn.Linear(256, 128)
        self.drop_layer1 = nn.Dropout(0.5)
        self.fc_layer2 = nn.Linear(128, 64)
        self.drop_layer2 = nn.Dropout(0.5)
        self.out_layer = nn.Linear(64, 1)

    def forward(self, NormLap, feat):
        h = self.conv_layer1(NormLap, feat)
        h = F.leaky_relu(h)
        h = self.conv_layer2(NormLap, h)
        h = F.leaky_relu(h)
        h = self.conv_layer3(NormLap, h)
        h = F.leaky_relu(h)
        h = self.fc_layer1(h)
        h = self.drop_layer1(h)
        h = F.leaky_relu(h)
        h = self.fc_layer2(h)
        h = self.drop_layer2(h)
        h = F.leaky_relu(h)
        h = self.out_layer(h)
        h = F.leaky_relu(h)
        return h
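(For context: the `GraphConvNet` layer itself is not shown in the question. Judging from its `forward(NormLap, feat)` signature, it presumably propagates node features with the normalized Laplacian and then applies a learned linear transform. A minimal sketch of such a layer, purely an assumption about what the asker's layer might look like, would be:)

```python
import torch
import torch.nn as nn

class GraphConvNet(nn.Module):
    """Hypothetical GCN layer (not the asker's actual code): multiply node
    features by a (normalized-Laplacian) propagation matrix, then apply a
    learned linear transform."""
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, norm_lap, feat):
        # norm_lap: (N, N) propagation matrix; feat: (N, in_feats) node features
        return self.linear(norm_lap @ feat)
```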
# parameter setting
learning_rate = 0.001  # (I have tried 1e-1, 1e-2)
weight_decay = 1e-3  # (I have tried 1e-4)
epochs = 500
batch_size = 50  # (I have tried 30)

model = Scorer()
loss_func = nn.MSELoss()
optimizer = th.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay)

Posted on 2020-07-20 07:02:24
This is exactly what train and validation losses are supposed to do. The loss decreases over time; that is what the optimizer is trying to achieve. The fact that train_loss keeps dropping after valid_loss levels off or stalls indicates that the model starts overfitting after roughly epoch 100. Whether an MSE of 0.3 is good or bad for your application depends entirely on the application, but yes, the optimizer is working fine.
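Since the overfitting sets in around epoch 100, the usual remedy is to stop training (or restore a checkpoint) once validation loss stops improving. A minimal early-stopping sketch (not the asker's code; the `patience` value and callback names are assumptions for illustration):

```python
def train_with_early_stopping(train_step, validate, epochs=500, patience=20):
    """Run up to `epochs` epochs, but stop once validation loss has not
    improved for `patience` consecutive epochs. Returns the best epoch
    and its validation loss."""
    best_val, best_epoch, wait = float("inf"), 0, 0
    for epoch in range(epochs):
        train_step(epoch)            # one pass over the training set
        val_loss = validate(epoch)   # loss on the held-out fold
        if val_loss < best_val:
            best_val, best_epoch, wait = val_loss, epoch, 0
            # this is also where one would checkpoint the model,
            # e.g. th.save(model.state_dict(), "best.pt")
        else:
            wait += 1
            if wait >= patience:
                break                # no improvement for `patience` epochs
    return best_epoch, best_val
```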
Check out this resource on how to interpret loss curves: https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/
"Judging from the trend of the validate_loss curve, my network doesn't seem to be learning anything" -- it would help you get better answers if you explained (in as much detail as possible :) why you think so. What did you expect to see, other than what you are seeing? Looking at the same plot, it seems to me that your network is learning to model the data and to predict what you want it to predict.
https://stackoverflow.com/questions/62988773