
What issue can cause a CuDNNError with ConvolutionND?

Stack Overflow user
Asked on 2018-07-19 05:41:53
1 answer · 338 views · 0 followers · Score: 1

I am using a three-dimensional convolution link (ConvolutionND) in my chain.

The forward computation runs smoothly (I checked the shapes of the intermediate results to make sure I understood the meaning of the convolution_nd parameters), but during the backward pass a CuDNNError is raised with the message CUDNN_STATUS_NOT_SUPPORTED.

The cover_all parameter of ConvolutionND keeps its default value of False, so I do not see what could be causing the error.

Here is how I define one convolution layer:

```python
self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(self.GPU_1_ID)
```
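As a quick sanity check on the intermediate shapes mentioned above, the output size of a ConvolutionND layer along each spatial axis follows the usual convolution formula. A plain-Python sketch, assuming the layer's defaults of stride 1, pad 0, and cover_all=False:

```python
def conv_out_size(in_size, kernel, stride=1, pad=0):
    # Standard convolution output size (cover_all=False):
    # out = (in + 2*pad - kernel) // stride + 1
    return (in_size + 2 * pad - kernel) // stride + 1

# For ConvolutionND(3, 1, 4, (3, 3, 3)) on a (1, 1, 60, 11, 53248) input,
# each spatial axis shrinks by kernel - 1 = 2:
print([conv_out_size(s, 3) for s in (60, 11, 53248)])  # [58, 9, 53246]
```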

The call stack is:

```
File "chainer/function_node.py", line 548, in backward_accumulate
    gxs = self.backward(target_input_indexes, grad_outputs)
File "chainer/functions/connection/convolution_nd.py", line 118, in backward
    gy, W, stride=self.stride, pad=self.pad, outsize=x_shape)
File "chainer/functions/connection/deconvolution_nd.py", line 310, in deconvolution_nd
    y, = func.apply(args)
File "chainer/function_node.py", line 258, in apply
    outputs = self.forward(in_data)
File "chainer/functions/connection/deconvolution_nd.py", line 128, in forward
    return self._forward_cudnn(x, W, b)
File "chainer/functions/connection/deconvolution_nd.py", line 105, in _forward_cudnn
    tensor_core=tensor_core)
File "cupy/cudnn.pyx", line 881, in cupy.cudnn.convolution_backward_data
File "cupy/cuda/cudnn.pyx", line 975, in cupy.cuda.cudnn.convolutionBackwardData_v3
File "cupy/cuda/cudnn.pyx", line 461, in cupy.cuda.cudnn.check_status
cupy.cuda.cudnn.CuDNNError: CUDNN_STATUS_NOT_SUPPORTED
```

So, is there anything that requires special care when using ConvolutionND?

For example, here is a piece of code that fails:

```python
import chainer
from chainer import functions as F
from chainer import links as L
from chainer.backends import cuda

import numpy as np
import cupy as cp

chainer.global_config.cudnn_deterministic = False

NB_MASKS = 60
NB_FCN = 3
NB_CLASS = 17

class MFEChain(chainer.Chain):
    """docstring for Wavelphasenet."""
    def __init__(self,
                 FCN_Dim,
                 gpu_ids=None):
        super(MFEChain, self).__init__()

        self.GPU_0_ID, self.GPU_1_ID = (0, 1) if gpu_ids is None else gpu_ids
        with self.init_scope():
            self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(
                self.GPU_1_ID
            )

    def __call__(self, inputs):
        ### Pad input ###
        processed_sequences = []
        for convolved in inputs:
            ## Transform to sequences
            copy = convolved if self.GPU_0_ID == self.GPU_1_ID else F.copy(convolved, self.GPU_1_ID)
            processed_sequences.append(copy)

        reprocessed_sequences = []
        with cuda.get_device(self.GPU_1_ID):
            for convolved in processed_sequences:
                convolved = F.expand_dims(convolved, 0)
                convolved = F.expand_dims(convolved, 0)
                convolved = self.conv1(convolved)

                reprocessed_sequences.append(convolved)

            states = F.vstack(reprocessed_sequences)

            logits = states

            ret_logits = logits if self.GPU_0_ID == self.GPU_1_ID else F.copy(logits, self.GPU_0_ID)
        return ret_logits

def mfe_test():
    mfe = MFEChain(150)
    inputs = list(
        chainer.Variable(
            cp.random.randn(
                NB_MASKS,
                11,
                in_len,
                dtype=cp.float32
            )
        ) for in_len in [53248]
    )
    val = mfe(inputs)
    grad = cp.ones(val.shape, dtype=cp.float32)
    val.grad = grad
    val.backward()
    for i in inputs:
        print(i.grad)

if __name__ == "__main__":
    mfe_test()
```
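For reference, the two `F.expand_dims(..., 0)` calls in `__call__` turn each 3-D input into the 5-D (batch, channel, depth, height, width) layout that `ConvolutionND(3, ...)` expects. A NumPy sketch of the same reshaping, with the last axis shortened from 53248 to 64 purely for illustration:

```python
import numpy as np

# Stand-in for one (NB_MASKS, 11, in_len) input, with in_len shrunk to 64.
x = np.zeros((60, 11, 64), dtype=np.float32)
x = np.expand_dims(x, 0)  # add a channel axis -> (1, 60, 11, 64)
x = np.expand_dims(x, 0)  # add a batch axis   -> (1, 1, 60, 11, 64)
print(x.shape)  # (1, 1, 60, 11, 64)
```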

1 Answer

Stack Overflow user

Accepted answer

Answered on 2018-07-20 09:05:14

cupy.cuda.cudnn.convolutionBackwardData_v3 is incompatible with certain specific parameter combinations, as described in an issue on the official GitHub.

Unfortunately, that issue only covers deconvolution_2d.py (not deconvolution_nd.py), so my guess is that in your case the decision about whether to use cuDNN goes wrong.

You can verify this by:

  1. Checking whether a dilation parameter (!= 1) or a groups parameter (!= 1) is passed to the convolution.
  2. Printing chainer.config.cudnn_deterministic, configuration.config.autotune, and configuration.config.use_cudnn_tensor_core.

Further support can be obtained by opening an issue on the official GitHub.

The code you showed is quite complex.

To clarify the problem, the following simplified code would help:

```python
from chainer import Variable, Chain
from chainer import links as L
from chainer import functions as F

import numpy as np
from six import print_

batch_size = 1
in_channel = 1
out_channel = 1

class MyLink(Chain):
    def __init__(self):
        super(MyLink, self).__init__()
        with self.init_scope():
            # W shape for ConvolutionND is (out_channels, in_channels, k, k, k).
            self.conv = L.ConvolutionND(
                3, 1, 1, (3, 3, 3), nobias=True,
                initialW=np.ones((out_channel, in_channel, 3, 3, 3), dtype=np.float32)
            )

    def __call__(self, x):
        return F.sum(self.conv(x))

if __name__ == "__main__":
    my_link = MyLink()
    my_link.to_gpu(0)
    batch = Variable(np.ones((batch_size, in_channel, 3, 3, 3), dtype=np.float32))
    batch.to_gpu(0)
    loss = my_link(batch)
    loss.backward()
    print_(batch.grad)
```
Score: 1

Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/51415107
