
Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead

Stack Overflow user
Asked on 2021-09-16 13:07:07
1 answer · 598 views · 0 followers · 0 votes

This is my second question about this problem. Originally I ran into AttributeError: 'numpy.ndarray' object has no attribute 'log'. U12-Forward helped me fix that, but now a new problem has appeared.

import torch
import numpy as np
import matplotlib.pyplot as plt

x = torch.tensor([[5., 10.],
                  [1., 2.]], requires_grad=True)
var_history = []
fn_history = []
alpha = 0.001
optimizer = torch.optim.SGD([x], lr=alpha)

def function_parabola(variable):
    # NumPy ops on a tensor that requires grad try to convert it with
    # Tensor.numpy(), which raises the error in the title:
    return np.prod(np.log(np.log(variable + 7)))


def make_gradient_step(function, variable):
    function_result = function(variable)
    function_result.backward()
    optimizer.step()
    optimizer.zero_grad()


for i in range(500):
    var_history.append(x.data.numpy().copy())
    fn_history.append(function_parabola(x).data.cpu().detach().numpy().copy())
    make_gradient_step(function_parabola, x)
print(x)
def show_contours(objective,
                  x_lims=[-10.0, 10.0],
                  y_lims=[-10.0, 10.0],
                  x_ticks=100,
                  y_ticks=100):
    x_step = (x_lims[1] - x_lims[0]) / x_ticks
    y_step = (y_lims[1] - y_lims[0]) / y_ticks
    X, Y = np.mgrid[x_lims[0]:x_lims[1]:x_step, y_lims[0]:y_lims[1]:y_step]
    res = []
    for x_index in range(X.shape[0]):
        res.append([])
        for y_index in range(X.shape[1]):
            x_val = X[x_index, y_index]
            y_val = Y[x_index, y_index]
            res[-1].append(objective(np.array([[x_val, y_val]]).T))
    res = np.array(res)
    plt.figure(figsize=(7,7))
    plt.contour(X, Y, res, 100)
    plt.xlabel('$x_1$')
    plt.ylabel('$x_2$')
show_contours(function_parabola)
plt.scatter(np.array(var_history)[:,0], np.array(var_history)[:,1], s=10, c='r');
plt.show()
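
For context (not part of the original question or answer), a minimal sketch of what triggers this error, assuming the usual CPU-tensor behaviour where a NumPy ufunc converts the tensor through Tensor.numpy():

import torch
import numpy as np

t = torch.tensor([1.0, 2.0], requires_grad=True)

try:
    # np.log() converts the tensor to an ndarray via Tensor.numpy(),
    # which PyTorch refuses while the tensor is part of the autograd graph.
    np.log(t)
except RuntimeError as err:
    print(err)  # "Can't call numpy() on Tensor that requires grad..."

# detach() returns a view that is cut out of the graph, so the conversion
# succeeds -- but the result is a plain ndarray with no gradient tracking.
print(np.log(t.detach().numpy()))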

1 Answer

Stack Overflow user

Accepted answer

Answered on 2021-09-16 14:22:08

Modify function_parabola() to operate on PyTorch tensors, using the native PyTorch equivalents of the NumPy operations, like so:

import torch
import numpy as np
import matplotlib.pyplot as plt

x = torch.tensor([[5., 10.],
                  [1., 2.]], requires_grad=True)
var_history = []
fn_history = []
alpha = 0.001
optimizer = torch.optim.SGD([x], lr=alpha)

def function_parabola(variable):
    # native torch ops keep the computation inside the autograd graph;
    # torch.as_tensor also accepts the NumPy arrays passed in by show_contours
    return torch.prod(torch.log(torch.log(torch.as_tensor(variable + 7))))


def make_gradient_step(function, variable):
    function_result = function(variable)
    function_result.backward()
    optimizer.step()
    optimizer.zero_grad()


for i in range(500):
    var_history.append(x.data.numpy().copy())
    fn_history.append(function_parabola(x).data.cpu().detach().numpy())
    make_gradient_step(function_parabola, x)
print(x)
def show_contours(objective,
                  x_lims=[-10.0, 10.0],
                  y_lims=[-10.0, 10.0],
                  x_ticks=100,
                  y_ticks=100):
    x_step = (x_lims[1] - x_lims[0]) / x_ticks
    y_step = (y_lims[1] - y_lims[0]) / y_ticks
    X, Y = np.mgrid[x_lims[0]:x_lims[1]:x_step, y_lims[0]:y_lims[1]:y_step]
    res = []
    for x_index in range(X.shape[0]):
        res.append([])
        for y_index in range(X.shape[1]):
            x_val = X[x_index, y_index]
            y_val = Y[x_index, y_index]
            res[-1].append(objective(np.array([[x_val, y_val]]).T))
    res = np.array(res)
    plt.figure(figsize=(7,7))
    plt.contour(X, Y, res, 100)
    plt.xlabel('$x_1$')
    plt.ylabel('$x_2$')
show_contours(function_parabola)
plt.scatter(np.array(var_history)[:,0], np.array(var_history)[:,1], s=10, c='r');

plt.show()
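
As a quick illustration (my own check, not part of the accepted answer), the torch-only version of function_parabola keeps every operation inside the autograd graph, so backward() and the SGD step work as intended. The torch.as_tensor wrapper is omitted here because a tensor is passed directly; it is only needed when a NumPy array comes in, as from show_contours:

import torch

x = torch.tensor([[5., 10.],
                  [1., 2.]], requires_grad=True)

def function_parabola(variable):
    # every op is a differentiable torch op, so no numpy() conversion happens
    return torch.prod(torch.log(torch.log(variable + 7)))

y = function_parabola(x)
y.backward()
print(x.grad)  # a 2x2 gradient tensor, so optimizer.step() can update x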
Votes: 1
The original content of this page is provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/69209038
