I am trying to train my neural network so that every edge weight in the graph becomes 10. I start by generating random points (inp) and, using weight = 1, add an edge between each pair of adjacent points (using idx). Then, if two adjacent points already share an edge, the edge's weight is fed to the NN, which outputs an additional weight to add to it.
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
import torch
import torch.nn as nn
import networkx as nx
import matplotlib.pyplot as plt
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(1, 3)
        self.fc2 = nn.Linear(3, 1)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x

g = nx.DiGraph()
model = Net()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

def training(n_iter):
    for epoch in range(n_iter):
        print(epoch)
        inp = torch.randint(0, 10, (20,))
        idx = 0
        while idx < len(inp) - 1:
            if g.has_edge(inp[idx].item(), inp[idx+1].item()):  # edge exists
                edge_weight = g[inp[idx].item()][inp[idx+1].item()]["weight"]
                edge_weight_tensor = torch.tensor([edge_weight]).float()  # to tensor
                added_edge_weight = model(edge_weight_tensor)  # value from network
                g[inp[idx].item()][inp[idx+1].item()]["weight"] += added_edge_weight
                idx += 1
            else:
                g.add_edge(inp[idx].item(), inp[idx+1].item(), weight=1)
                idx += 1
        edges = g.edges()
        weights = [g[u][v]['weight'] for u, v in edges]
        optimizer.zero_grad()
        loss_list = [w for w in weights if not isinstance(w, int)]  # only take tensors
        try:
            loss_tensors = torch.stack(loss_list, dim=0) - 10
            loss_square = torch.square(loss_tensors)
            loss = torch.sum(loss_square)
            print(loss)
        except RuntimeError:  # no tensors - hence create a 0 loss
            loss = torch.tensor(0.0, requires_grad=True)
        loss.backward(retain_graph=True)
        optimizer.step()
    return weights

weights = training(5)

# plot
plt.figure(figsize=(6, 6))
pos = nx.spring_layout(g, k=0.5)
nx.draw(g, with_labels=True, node_color='skyblue', font_weight='bold', width=weights, pos=pos)

My problem is that I am not sure whether gradients can propagate this way, and it seems I cannot add the NN output to the edge's weight - I get the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Posted on 2021-05-13 05:18:59
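The error class above does not need the graph code to reproduce: it appears whenever a tensor that autograd's backward pass still needs is modified in place (its version counter no longer matches the one recorded when the graph was built). A minimal sketch in plain PyTorch, with illustrative variable names, showing the failing in-place update and an out-of-place alternative:

```python
import torch

# In-place update of a tensor whose value is needed for backward.
w = torch.ones(3, requires_grad=True)
y = w.exp()          # exp's backward re-uses its output y
y += 1               # in-place edit bumps y's version counter
try:
    y.sum().backward()
except RuntimeError as e:
    print("in-place error:", e)

# Out-of-place update: `z + 1` creates a new tensor, so the
# saved output of exp() is left untouched and backward succeeds.
w2 = torch.ones(3, requires_grad=True)
z = w2.exp()
z = z + 1
z.sum().backward()
print(w2.grad)
```

In the question's code the same pattern arises indirectly: `retain_graph=True` keeps old computation graphs alive across epochs while `optimizer.step()` modifies the model's parameters in place, so the retained graphs reference parameter tensors at a stale version.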
PyTorch Geometric is designed specifically for applying PyTorch methods to graph objects. In particular, its utils.from_networkx function converts a networkx graph into a PyTorch Geometric object, through which gradients can propagate.
https://stackoverflow.com/questions/67508925