
Pytorch RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'

Stack Overflow user
Asked on 2018-08-13 16:34:19
3 answers · 13.2K views · 0 following · 10 votes

I am using PyTorch to train a model, but I get a runtime error when it computes the cross-entropy loss:

Traceback (most recent call last):
  File "deparser.py", line 402, in <module>
    d.train()
  File "deparser.py", line 331, in train
    total, correct, avgloss = self.train_util()
  File "deparser.py", line 362, in train_util
    loss = self.step(X_train, Y_train, correct, total)
  File "deparser.py", line 214, in step
    loss = nn.CrossEntropyLoss()(out.long(), y)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 862, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 1550, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 975, in log_softmax
    return input.log_softmax(dim)
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'

I think this is caused by the .cuda() call or by the conversion between torch.Float and torch.Long. I have tried many ways to change the variables with .cpu()/.cuda() and .long()/.float(), but it still does not work, and searching Google for this error message turns up nothing. Can anyone help me? Thanks!

Here is the code that causes the error:

def step(self, x, y, correct, total):
    self.optimizer.zero_grad()
    out = self.forward(*x)
    loss = nn.CrossEntropyLoss()(out.long(), y)
    loss.backward()
    self.optimizer.step()
    _, predicted = torch.max(out.data, 1)
    total += y.size(0)
    correct += int((predicted == y).sum().data)
    return loss.data

The function step() is called as follows:

def train_util(self):
    total = 0
    correct = 0
    avgloss = 0
    for i in range(self.step_num_per_epoch):
        X_train, Y_train = self.trainloader()
        self.optimizer.zero_grad()
        if torch.cuda.is_available():
            self.cuda()
            for i in range(len(X_train)):
                X_train[i] = Variable(torch.from_numpy(X_train[i]))
                X_train[i].requires_grad = False
                X_train[i] = X_train[i].cuda()
            Y_train = torch.from_numpy(Y_train)
            Y_train.requires_grad = False
            Y_train = Y_train.cuda()
        loss = self.step(X_train, Y_train, correct, total)
        avgloss+=float(loss)*Y_train.size(0)
        self.optimizer.step()
        if i%100==99:
            print('STEP %d, Loss: %.4f, Acc: %.4f'%(i+1,loss,correct/total))

    return total, correct, avgloss/self.data_len

The input data X_train, Y_train = self.trainloader() starts out as numpy arrays.

Here is a data sample:

>>> X_train, Y_train = d.trainloader()
>>> X_train[0].dtype
dtype('int64')
>>> X_train[1].dtype
dtype('int64')
>>> X_train[2].dtype
dtype('int64')
>>> Y_train.dtype
dtype('float32')
>>> X_train[0]
array([[   0,    6,    0, ...,    0,    0,    0],
       [   0, 1944, 8168, ...,    0,    0,    0],
       [   0,  815,  317, ...,    0,    0,    0],
       ...,
       [   0,    0,    0, ...,    0,    0,    0],
       [   0,   23,    6, ...,    0,    0,    0],
       [   0,    0,  297, ...,    0,    0,    0]])
>>> X_train[1]
array([ 6,  7,  8, 21,  2, 34,  3,  4, 19, 14, 15,  2, 13,  3, 11, 22,  4,
   13, 34, 10, 13,  3, 48, 18, 16, 19, 16, 17, 48,  3,  3, 13])
>>> X_train[2]
array([ 4,  5,  8, 36,  2, 33,  5,  3, 17, 16, 11,  0,  9,  3, 10, 20,  1,
   14, 33, 25, 19,  1, 46, 17, 14, 24, 15, 15, 51,  2,  1, 14])
>>> Y_train
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       ...,
       [0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
      dtype=float32)
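(As a side note on the dtypes above: torch.from_numpy preserves the numpy dtype, so the int64 arrays become LongTensors and the float32 array becomes a FloatTensor on conversion. A quick self-contained check:)

```python
import numpy as np
import torch

x = np.array([[0, 6, 0]], dtype=np.int64)    # like X_train[0]
y = np.zeros((2, 3), dtype=np.float32)       # like Y_train

# torch.from_numpy keeps the numpy dtype exactly
assert torch.from_numpy(x).dtype == torch.int64    # LongTensor
assert torch.from_numpy(y).dtype == torch.float32  # FloatTensor
```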

I tried all the possible combinations:

Case 1:

loss = nn.CrossEntropyLoss()(out, y)

I got:

RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'

Case 2:

loss = nn.CrossEntropyLoss()(out.long(), y)

The same error as above.

Case 3:

loss = nn.CrossEntropyLoss()(out.float(), y)

I got:

RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'

Case 4:

loss = nn.CrossEntropyLoss()(out, y.long())

I got:

RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

Case 5:

loss = nn.CrossEntropyLoss()(out.long(), y.long())

I got:

RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'

Case 6:

loss = nn.CrossEntropyLoss()(out.float(), y.long())

I got:

RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

Case 7:

loss = nn.CrossEntropyLoss()(out, y.float())

I got:

RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'

Case 8:

loss = nn.CrossEntropyLoss()(out.long(), y.float())

I got:

RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'

Case 9:

loss = nn.CrossEntropyLoss()(out.float(), y.float())

I got:

RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'


3 Answers

Stack Overflow user

Accepted answer

Answered on 2018-08-21 14:49:12

I found out what the problem is.

y should be of dtype torch.int64 and must not be one-hot encoded: CrossEntropyLoss() handles the one-hot encoding internally (while out is the predicted probability distribution, which is already in a one-hot-like format).

It runs now!
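A minimal sketch of this fix (shapes and values are hypothetical, assuming Y_train is a one-hot float32 array as shown in the question): collapse the one-hot targets to class indices with argmax and pass the float logits unchanged:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()

out = torch.randn(4, 30)                 # model output: float logits, shape (batch, num_classes)
y_onehot = torch.eye(30)[[29, 3, 0, 6]]  # one-hot float32 targets, as in the question

y = y_onehot.argmax(dim=1)               # class indices, dtype torch.int64 (LongTensor)
loss = loss_fn(out, y)                   # float logits + long indices: no RuntimeError
```

The same holds on GPU: out stays a cuda.FloatTensor and y a cuda.LongTensor, with no .long()/.float() casts on the loss call.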

Votes: 6

Stack Overflow user

Answered on 2020-12-09 09:00:02

In my case it was because I had swapped targets and logits; since the logits are obviously not torch.int64, the error was raised.
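For reference, the expected argument order is loss(input, target), with the float logits first and the long class indices second; a minimal (hypothetical) example:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.randn(2, 5)       # float model output, shape (batch, num_classes)
targets = torch.tensor([1, 3])   # torch.int64 class indices

loss = loss_fn(logits, targets)  # correct order: logits first, targets second
# loss_fn(targets, logits) would try to softmax the long tensor and fail
```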

Votes: 0

Stack Overflow user

Answered on 2021-08-26 00:34:35

Well, this works for me whenever I hit an error on a similar line of code (I'll keep it here for reference):

  1. Make sure x is float when calling the model on the input,
  2. but keep the target y long,
  3. and call the loss on the model output and on y without any type casting.

import torch
from torch import nn

# hypothetical sizes and data so the snippet runs standalone
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
in_features, out_classes = 16, 10
X = torch.randn(8, in_features)            # inputs
y = torch.randint(0, out_classes, (8,))    # class-index targets

net = nn.Linear(in_features, out_classes)
loss_criterion = nn.CrossEntropyLoss()

net = net.to(device)
X = X.to(device).float()   # model input: float
y = y.to(device).long()    # targets: long, no casting afterwards

y_hat = net(X)
l = loss_criterion(y_hat, y)
Votes: 0
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/51818225
