I have a fairly important function that I would like to differentiate with autograd, but I'm not enough of a numpy wizard to figure out how to do it without array assignment.
I also apologize that I had to make this example incredibly contrived and meaningless so that it could run on its own. The actual code I'm working with is nonlinear finite elements, trying to compute the Jacobian of a complex nonlinear system.
import autograd.numpy as anp
from autograd import jacobian


def alpha(x):
    return anp.exp(-(x - 10) ** 2) / (x + 1)


def f(x):
    # Matrix getting constructed
    k = anp.zeros((x.shape[0], x.shape[0]))

    # loop over some random 3 dimensional vectors
    for element in anp.random.randint(0, x.shape[0], (x.shape[0], 3)):
        # select 3 values from x
        x_ijk = anp.array([[x[i] for i in element]])
        norm = anp.linalg.norm(
            x_ijk @ anp.vstack((element, element)).transpose()
        )
        # make some matrix from the element
        m = element.reshape(3, 1) @ element.reshape(1, 3)
        # alpha is an arbitrary differentiable function R -> R
        alpha_value = alpha(norm)
        # combine m matrices into k, scaling by alpha_value
        n = m.shape[0]
        for i in range(n):
            for j in range(n):
                k[element[i], element[j]] += m[i, j] * alpha_value
    return k @ x


print(jacobian(f)(anp.random.rand(10)))
And of course we get an error:

# k[element[i], element[j]] += m[i, j] * alpha_value
# ValueError: setting an array element with a sequence.

I don't really understand this message, since no type error is occurring. I assume it must come from the assignment.
After writing the code above, I simply switched to PyTorch and the code ran fine. But I would still prefer to use autograd.
# pytorch version
import torch
# note: zero_gradients was removed in later PyTorch releases;
# x.grad.zero_() is the modern equivalent
from torch.autograd.gradcheck import zero_gradients


def alpha(x):
    return torch.exp(x)


def f(x):
    # Matrix getting constructed
    k = torch.zeros((x.shape[0], x.shape[0]))

    # loop over some random 3 dimensional vectors
    for element in torch.randint(0, x.shape[0], (x.shape[0], 3)):
        # select 3 values from x
        x_ijk = torch.tensor([[1. if n == e else 0 for n in range(len(x))] for e in element]) @ x
        norm = torch.norm(
            x_ijk @ torch.stack((torch.tanh(element.float() + 4), element.float() - 4)).t()
        )
        m = torch.rand(3, 3)
        # alpha is an arbitrary differentiable function R -> R
        alpha_value = alpha(norm)
        n = m.shape[0]
        for i in range(n):
            for j in range(n):
                k[element[i], element[j]] += m[i, j] * alpha_value
    print(k)
    return k @ x


x = torch.rand(4, requires_grad=True)
print(x, '\n')
y = f(x)
print(y, '\n')
grads = []
for val in y:
    val.backward(retain_graph=True)
    grads.append(x.grad.clone())
    zero_gradients(x)

if __name__ == '__main__':
    print(torch.stack(grads))

Posted on 2020-01-24 16:31:13
Performing array index assignment is not allowed in Autograd or JAX. For a partial explanation of this, see the JAX gotchas.
PyTorch does allow this functionality. If you want to run your code in autograd, you will have to find a way to remove the offending line k[element[i], element[j]] += m[i, j] * alpha_value. If you are OK with running your code in JAX (whose syntax is essentially identical to autograd's, but with more features), then jax.ops looks helpful for performing this kind of indexed assignment.
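For illustration (not from the original answer): in current JAX the jax.ops functions have since been replaced by the `.at[]` syntax, which expresses the same `+=` as a functional update that `jax.jacobian` can trace. A minimal sketch of the pattern:

```python
import jax
import jax.numpy as jnp


def f(x):
    k = jnp.zeros((x.shape[0], x.shape[0]))
    # functional equivalent of k[0, 1] += x[0] * x[1]:
    # .at[...].add(...) returns a new array instead of mutating k
    k = k.at[0, 1].add(x[0] * x[1])
    k = k.at[1, 0].add(x[0] * x[1])
    return k @ x


print(jax.jacobian(f)(jnp.array([1.0, 2.0, 3.0])))
```

Applied to the question's code, each `k[element[i], element[j]] += m[i, j] * alpha_value` would become `k = k.at[element[i], element[j]].add(m[i, j] * alpha_value)`.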
https://stackoverflow.com/questions/59185031