I am using CUDA 7.0 with an nVidia 980 GTX for some image processing. In a given iteration, multiple tiles are processed independently via 15-20 kernel calls and multiple cuFFT FFT/IFFT API calls.
Because of this, I have placed each tile in its own CUDA stream so that each tile executes its string of operations asynchronously with respect to the host. The tiles are all the same size within an iteration, so they share a single cuFFT plan. The host thread moves through the commands quickly to try to keep the GPU loaded with work. I'm hitting a periodic race condition while these operations are processed in parallel, though, and have a question about cuFFT in particular. If I place the cuFFT plan in stream 0 using cufftSetStream() for tile 0, and tile 0's FFT has not yet actually executed on the GPU before the host sets the shared cuFFT plan's stream to stream 1 for tile 1 and then issues tile 1's work to the GPU, what is the behavior of cufftExec() for this plan?
More succinctly, does a call to cufftExec() execute in the stream that was set for the plan at the time of the cufftExec() call, even if cufftSetStream() is used to change the stream for subsequent tiles before the preceding FFTs have actually begun/completed on the GPU?
I apologize for not posting code, but I am unable to post my actual source.
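In rough outline, though, the per-tile loop looks something like the following (a stripped-down, illustrative sketch with made-up names, not my actual source):
// Illustrative sketch only (made-up names; error checking omitted).
// A single shared plan is redirected to each tile's stream just before that
// tile's FFT is issued; the host never waits for a tile to finish before
// moving on to the next one.
#include <cufft.h>
#include <cuda_runtime.h>
__global__ void touchTile(cufftComplex *d, int n){       // stand-in for one of the 15-20 kernels
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) d[i].x += 1.0f;
}
int main(){
    const int tileElems = 1 << 20;
    const int numTiles  = 4;
    size_t ws = 0;
    cufftHandle sharedPlan;
    cufftCreate(&sharedPlan);
    cufftMakePlan1d(sharedPlan, tileElems, CUFFT_C2C, 1, &ws);
    cudaStream_t tileStream[numTiles];
    cufftComplex *tileData[numTiles];
    for (int t = 0; t < numTiles; ++t){
        cudaStreamCreate(&tileStream[t]);
        cudaMalloc(&tileData[t], tileElems*sizeof(cufftComplex));
        cudaMemset(tileData[t], 0, tileElems*sizeof(cufftComplex));
    }
    for (int t = 0; t < numTiles; ++t){
        touchTile<<<(tileElems+255)/256, 256, 0, tileStream[t]>>>(tileData[t], tileElems);
        cufftSetStream(sharedPlan, tileStream[t]);        // point the shared plan at this tile's stream
        cufftExecC2C(sharedPlan, tileData[t], tileData[t], CUFFT_FORWARD);
        // ... more kernels and an inverse FFT would follow in the same stream ...
    }
    cudaDeviceSynchronize();
    return 0;
}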
Posted on 2016-02-26 07:21:06
NOTE: As pointed out in the comments, if the same plan (the same created handle) is used for simultaneous FFT execution on the same device via streams, then the user is responsible for managing a separate work area for each such concurrent use of the plan. The question seemed to be focused on the stream behavior itself, and the rest of my answer focuses on that as well, but this is an important point.
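For reference, the kind of work-area handling that note refers to might look something like the following sketch (my illustration, not code from the question; it uses cufftSetAutoAllocation() and cufftSetWorkArea() to give each stream its own scratch buffer, and assumes the plan's stream and work area are always set together immediately before each exec call):
// Sketch: shared plan, separate work area per stream (illustrative sizes;
// most error checking trimmed for brevity).
#include <cufft.h>
#include <cuda_runtime.h>
#include <assert.h>
int main(){
  const int nx = 1048576, nb = 4, nstreams = 2;   // illustrative sizes
  cufftHandle plan;
  cufftResult res = cufftCreate(&plan);
  assert(res == CUFFT_SUCCESS);
  res = cufftSetAutoAllocation(plan, 0);          // we will supply work areas ourselves;
  assert(res == CUFFT_SUCCESS);                   // must be called before cufftMakePlan*()
  size_t ws = 0;
  res = cufftMakePlan1d(plan, nx, CUFFT_C2C, nb, &ws);
  assert(res == CUFFT_SUCCESS);
  cudaStream_t s[nstreams];
  void *work[nstreams];
  cufftComplex *d[nstreams];
  for (int i = 0; i < nstreams; i++){
    cudaStreamCreate(&s[i]);
    cudaMalloc(&work[i], ws);                     // one work area per stream
    cudaMalloc(&d[i], (size_t)nx*nb*sizeof(cufftComplex));
    cudaMemset(d[i], 0, (size_t)nx*nb*sizeof(cufftComplex));
  }
  for (int i = 0; i < nstreams; i++){
    // set the stream and the matching work area, then issue the FFT;
    // the exec call uses whatever is set at issue time (see below)
    cufftSetStream(plan, s[i]);
    cufftSetWorkArea(plan, work[i]);
    cufftExecC2C(plan, d[i], d[i], CUFFT_FORWARD);
  }
  cudaDeviceSynchronize();
  return 0;
}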
If I place the cuFFT plan in stream 0 using cufftSetStream() for tile 0, and tile 0's FFT has not yet actually executed on the GPU before the host sets the shared cuFFT plan's stream to stream 1 for tile 1 and then issues tile 1's work to the GPU, what is the behavior of cufftExec() for this plan?
Let me pretend you said stream 1 and stream 2, just so we can avoid any possible confusion around the NULL stream.
cuFFT should respect the stream that was defined for the plan at the time the plan was passed to cuFFT via cufftExecXXX(). Subsequent changes to the plan via cufftSetStream() should have no effect on the stream used by previously issued cufftExecXXX() calls.
We can verify this with a fairly simple test, using the profiler. Consider the following test code:
$ cat t1089.cu
// NOTE: this code omits independent work-area handling for each plan
// which is necessary for a plan that will be shared between streams
// and executed concurrently
#include <cufft.h>
#include <assert.h>
#include <nvToolsExt.h>
#define DSIZE 1048576
#define BATCH 100
int main(){
  const int nx = DSIZE;
  const int nb = BATCH;
  size_t ws = 0;
  cufftHandle plan;
  cufftResult res = cufftCreate(&plan);
  assert(res == CUFFT_SUCCESS);
  res = cufftMakePlan1d(plan, nx, CUFFT_C2C, nb, &ws);
  assert(res == CUFFT_SUCCESS);
  cufftComplex *d;
  cudaMalloc(&d, nx*nb*sizeof(cufftComplex));
  cudaMemset(d, 0, nx*nb*sizeof(cufftComplex));
  cudaStream_t s1, s2;
  cudaStreamCreate(&s1);
  cudaStreamCreate(&s2);
  res = cufftSetStream(plan, s1);
  assert(res == CUFFT_SUCCESS);
  res = cufftExecC2C(plan, d, d, CUFFT_FORWARD);  // first FFT, issued while the plan is associated with s1
  assert(res == CUFFT_SUCCESS);
  res = cufftSetStream(plan, s2);                 // change the plan's stream association
  assert(res == CUFFT_SUCCESS);
  nvtxMarkA("plan stream change");                // mark the change point in the profiler timeline
  res = cufftExecC2C(plan, d, d, CUFFT_FORWARD);  // second FFT, issued while the plan is associated with s2
  assert(res == CUFFT_SUCCESS);
  cudaDeviceSynchronize();
  return 0;
}
$ nvcc -o t1089 t1089.cu -lcufft -lnvToolsExt
$ cuda-memcheck ./t1089
========= CUDA-MEMCHECK
========= ERROR SUMMARY: 0 errors
$
We're just doing two forward FFTs in a row, switching streams in between the two. We use an nvtx marker to clearly identify the point at which the request to change the plan's stream association is made. Now let's look at the nvprof --print-api-trace output (with the lengthy start-up preamble removed):
983.84ms 617.00us cudaMalloc
984.46ms 21.628us cudaMemset
984.48ms 37.546us cudaStreamCreate
984.52ms 121.34us cudaStreamCreate
984.65ms 995ns cudaPeekAtLastError
984.67ms 996ns cudaConfigureCall
984.67ms 517ns cudaSetupArgument
984.67ms 21.908us cudaLaunch (void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [416])
984.69ms 349ns cudaGetLastError
984.69ms 203ns cudaPeekAtLastError
984.70ms 296ns cudaConfigureCall
984.70ms 216ns cudaSetupArgument
984.70ms 8.8920us cudaLaunch (void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [421])
984.71ms 272ns cudaGetLastError
984.71ms 177ns cudaPeekAtLastError
984.72ms 314ns cudaConfigureCall
984.72ms 229ns cudaSetupArgument
984.72ms 9.9230us cudaLaunch (void spRadix0256B::kernel3Mem<unsigned int, float, fftDirection_t=-1, unsigned int=16, unsigned int=2, L1, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix3_t, unsigned int, float>) [426])
984.73ms 295ns cudaGetLastError
984.77ms - [Marker] plan stream change
984.77ms 434ns cudaPeekAtLastError
984.78ms 357ns cudaConfigureCall
984.78ms 228ns cudaSetupArgument
984.78ms 10.642us cudaLaunch (void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [431])
984.79ms 287ns cudaGetLastError
984.79ms 193ns cudaPeekAtLastError
984.80ms 293ns cudaConfigureCall
984.80ms 208ns cudaSetupArgument
984.80ms 7.7620us cudaLaunch (void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [436])
984.81ms 297ns cudaGetLastError
984.81ms 178ns cudaPeekAtLastError
984.81ms 269ns cudaConfigureCall
984.81ms 214ns cudaSetupArgument
984.81ms 7.4130us cudaLaunch (void spRadix0256B::kernel3Mem<unsigned int, float, fftDirection_t=-1, unsigned int=16, unsigned int=2, L1, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix3_t, unsigned int, float>) [441])
984.82ms 312ns cudaGetLastError
984.82ms 152.63ms cudaDeviceSynchronize
$
We see that each FFT operation requires 3 kernel calls. In between, we see our nvtx marker indicating when the request for the plan stream change was made; unsurprisingly, it falls after the first 3 kernel launches but before the last 3. Finally, note that essentially all of the execution time is absorbed by the final cudaDeviceSynchronize() call. All of the preceding calls are asynchronous, so they execute more or less "immediately" within the first millisecond of execution. The final synchronize absorbs the processing time of all 6 kernels, amounting to roughly 150 milliseconds in total.
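As an aside, you don't need the profiler to see that split: a host-side timing sketch like the one below (my addition, not part of the test above) should show the issue phase taking only a tiny fraction of that time, with nearly all of the wall-clock time spent in cudaDeviceSynchronize().
// Timing sketch (illustrative): measure the asynchronous issue phase vs. the
// final synchronize for the same two back-to-back FFTs as in t1089.cu.
// Assumed build line: nvcc -std=c++11 -o t1089_timing t1089_timing.cu -lcufft
#include <cufft.h>
#include <cuda_runtime.h>
#include <assert.h>
#include <chrono>
#include <cstdio>
#define DSIZE 1048576
#define BATCH 100
int main(){
  size_t ws = 0;
  cufftHandle plan;
  cufftResult res = cufftCreate(&plan);
  assert(res == CUFFT_SUCCESS);
  res = cufftMakePlan1d(plan, DSIZE, CUFFT_C2C, BATCH, &ws);
  assert(res == CUFFT_SUCCESS);
  cufftComplex *d;
  cudaMalloc(&d, (size_t)DSIZE*BATCH*sizeof(cufftComplex));
  cudaMemset(d, 0, (size_t)DSIZE*BATCH*sizeof(cufftComplex));
  cudaStream_t s1, s2;
  cudaStreamCreate(&s1);
  cudaStreamCreate(&s2);
  cudaDeviceSynchronize();                               // make sure setup work is finished
  auto t0 = std::chrono::high_resolution_clock::now();
  cufftSetStream(plan, s1);
  cufftExecC2C(plan, d, d, CUFFT_FORWARD);               // queued into s1, returns quickly
  cufftSetStream(plan, s2);
  cufftExecC2C(plan, d, d, CUFFT_FORWARD);               // queued into s2, returns quickly
  auto t1 = std::chrono::high_resolution_clock::now();
  cudaDeviceSynchronize();                               // waits for all 6 kernels
  auto t2 = std::chrono::high_resolution_clock::now();
  printf("issue: %lld us   sync: %lld ms\n",
    (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1-t0).count(),
    (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t2-t1).count());
  return 0;
}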
So if cufftSetStream() were to have an effect on the first cufftExecC2C() call, we would expect to see some or all of the first 3 kernels launched into the same stream as the one used for the last 3 kernels. But when we look at the nvprof --print-gpu-trace output:
$ nvprof --print-gpu-trace ./t1089
==3757== NVPROF is profiling process 3757, command: ./t1089
==3757== Profiling application: ./t1089
==3757== Profiling result:
Start Duration Grid Size Block Size Regs* SSMem* DSMem* Size Throughput Device Context Stream Name
974.74ms 7.3440ms - - - - - 800.00MB 106.38GB/s Quadro 5000 (0) 1 7 [CUDA memset]
982.09ms 23.424ms (25600 2 1) (32 8 1) 32 8.0000KB 0B - - Quadro 5000 (0) 1 13 void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [416]
1.00551s 21.172ms (25600 2 1) (32 8 1) 32 8.0000KB 0B - - Quadro 5000 (0) 1 13 void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [421]
1.02669s 27.551ms (25600 1 1) (16 16 1) 61 17.000KB 0B - - Quadro 5000 (0) 1 13 void spRadix0256B::kernel3Mem<unsigned int, float, fftDirection_t=-1, unsigned int=16, unsigned int=2, L1, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix3_t, unsigned int, float>) [426]
1.05422s 23.592ms (25600 2 1) (32 8 1) 32 8.0000KB 0B - - Quadro 5000 (0) 1 14 void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [431]
1.07781s 21.157ms (25600 2 1) (32 8 1) 32 8.0000KB 0B - - Quadro 5000 (0) 1 14 void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [436]
1.09897s 27.913ms (25600 1 1) (16 16 1) 61 17.000KB 0B - - Quadro 5000 (0) 1 14 void spRadix0256B::kernel3Mem<unsigned int, float, fftDirection_t=-1, unsigned int=16, unsigned int=2, L1, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix3_t, unsigned int, float>) [441]
Regs: Number of registers used per CUDA thread. This number includes registers used internally by the CUDA driver and/or tools and can be more than what the compiler shows.
SSMem: Static shared memory allocated per CUDA block.
DSMem: Dynamic shared memory allocated per CUDA block.
$
We see that the first 3 kernels are in fact issued into the first stream, and the last 3 kernels are issued into the second stream, just as requested. (And the total execution time of all the kernels is approximately 150 ms, just as the api trace output suggested.) Since the underlying kernel launches are asynchronous and are issued before the cufftExecC2C() call returns, if you think about this carefully you'll come to the conclusion that it has to be this way: the stream to launch a kernel into is specified at kernel launch time. (And of course, I think this is the "preferred" behavior.)
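As a closing note on the original tile-per-stream use case: instead of sharing one plan and re-pointing its stream (and work area) for every tile, another option is simply to create one plan per stream, which sidesteps the work-area concern entirely at the cost of one work area per plan. A minimal sketch of that alternative (illustrative sizes and names, my suggestion rather than anything from the question):
// Sketch: one plan per stream (illustrative; error checking omitted).
// Each plan owns its own work area, so nothing is shared between streams.
#include <cufft.h>
#include <cuda_runtime.h>
int main(){
    const int nx = 1048576, nb = 4, numTiles = 4;
    cudaStream_t s[numTiles];
    cufftHandle plan[numTiles];
    cufftComplex *d[numTiles];
    for (int t = 0; t < numTiles; ++t){
        cudaStreamCreate(&s[t]);
        cufftPlan1d(&plan[t], nx, CUFFT_C2C, nb);   // independent plan (and work area) per stream
        cufftSetStream(plan[t], s[t]);              // set once; never changed afterwards
        cudaMalloc(&d[t], (size_t)nx*nb*sizeof(cufftComplex));
        cudaMemset(d[t], 0, (size_t)nx*nb*sizeof(cufftComplex));
    }
    for (int t = 0; t < numTiles; ++t)
        cufftExecC2C(plan[t], d[t], d[t], CUFFT_FORWARD);   // each FFT runs in its own stream
    cudaDeviceSynchronize();
    return 0;
}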
https://stackoverflow.com/questions/35488348