I'm looking for a way to dynamically calculate the grid and block sizes needed for a computation. I've run into a problem where the computation I want to perform is too large, from a thread-limit standpoint, to be handled in a single pass on the GPU. Here is a sample kernel setup that runs into the error I'm having:
__global__ void populateMatrixKernel(char * outMatrix, const int pointsToPopulate)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < pointsToPopulate)
    {
        outMatrix[i] = 'A';
    }
}
cudaError_t populateMatrixCUDA(char * outMatrix, const int pointsToPopulate, cudaDeviceProp &deviceProp)
{
    //Device array to be used
    char * dev_outMatrix = 0;
    cudaError_t cudaStatus;
    //THIS IS THE CODE HERE I'M WANTING TO REPLACE
    //Calculate the block and grid parameters
    auto gridDiv = div(pointsToPopulate, deviceProp.maxThreadsPerBlock);
    auto gridX = gridDiv.quot;
    if (gridDiv.rem != 0)
        gridX++; //Round up if we have straggling points to populate
    auto blockSize = deviceProp.maxThreadsPerBlock;
    int gridSize = min(16 * deviceProp.multiProcessorCount, gridX);
    //END REPLACE CODE
    //Allocate GPU buffer
    cudaStatus = cudaMalloc((void**)&dev_outMatrix, pointsToPopulate * sizeof(char));
    if (cudaStatus != cudaSuccess)
    {
        cerr << "cudaMalloc failed!" << endl;
        goto Error;
    }
    populateMatrixKernel<<<gridSize, blockSize>>>(dev_outMatrix, pointsToPopulate);
    //Check for errors launching the kernel
    cudaStatus = cudaGetLastError();
    if (cudaStatus != cudaSuccess)
    {
        cerr << "Population launch failed: " << cudaGetErrorString(cudaStatus) << endl;
        goto Error;
    }
    //Wait for the kernel to finish
    cudaStatus = cudaDeviceSynchronize();
    if (cudaStatus != cudaSuccess) {
        cerr << "cudaDeviceSynchronize returned error code " << cudaStatus << " after launching the population kernel!" << endl;
        cerr << "Cuda failure " << __FILE__ << ":" << __LINE__ << " '" << cudaGetErrorString(cudaStatus) << "'" << endl;
        goto Error;
    }
    //Copy output to host memory
    cudaStatus = cudaMemcpy(outMatrix, dev_outMatrix, pointsToPopulate * sizeof(char), cudaMemcpyDeviceToHost);
    if (cudaStatus != cudaSuccess) {
        cerr << "cudaMemcpy failed!" << endl;
        goto Error;
    }
Error:
    cudaFree(dev_outMatrix);
    return cudaStatus;
}

Now, when I test this code with the following setup:
//Make sure we can use the graphics card (this calculation would be unreasonable otherwise)
if (cudaSetDevice(0) != cudaSuccess) {
    cerr << "cudaSetDevice failed! Do you have a CUDA-capable GPU installed?" << endl;
}
cudaDeviceProp deviceProp;
cudaError_t cudaResult;
cudaResult = cudaGetDeviceProperties(&deviceProp, 0);
if (cudaResult != cudaSuccess)
{
    cerr << "cudaGetDeviceProperties failed!" << endl;
}
int pointsToPopulate = 250000 * 300;
auto gpuMatrix = new char[pointsToPopulate];
fill(gpuMatrix, gpuMatrix + pointsToPopulate, 'B');
populateMatrixCUDA(gpuMatrix, pointsToPopulate, deviceProp);
for (int i = 0; i < pointsToPopulate; ++i)
{
    if (gpuMatrix[i] != 'A')
    {
        cout << "ERROR: " << i << endl;
        cin.get();
    }
}

I get an error at i = 81920. Moreover, if I inspect the memory before and after execution, every value past index 81920 changes from 'B' to null. The error appears to originate from this line in the kernel execution parameter code:
int gridSize = min(16 * deviceProp.multiProcessorCount, gridX);

For my graphics card (a GTX 980M), deviceProp.multiProcessorCount is 5; multiplying that by 16 and then by 1024 (the maximum threads per block) gives 81920. So it seems that, while I'm fine in terms of memory, I'm being choked by the number of threads I can run. The 16 here is just an arbitrary value (picked after looking at some sample code a friend of mine wrote), and I'd like to know whether there is a way to calculate what that "16" should be from the GPU's properties rather than setting it arbitrarily. I'd like to write iterative code that determines the maximum amount of computation that can execute at any one time and then populates the matrix piece by piece, but for that I need to know the maximum. Does anyone know a way to calculate these parameters? I'm happy to provide more information if needed. Thanks!
Posted on 2017-04-06 08:52:28
Fundamentally, there is nothing wrong with the code you have posted. It is probably close to best practice. But it is not compatible with the design idiom of your kernel.
As you can see here, your GPU is capable of running 2^31 - 1, or 2147483647, blocks. So you could change the code in question to:

    unsigned int gridSize = min(2147483647u, gridX);

and it would probably work. Better still, don't change that code at all, but change your kernel to something like this:
__global__ void populateMatrixKernel(char * outMatrix, const int pointsToPopulate)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    for(; i < pointsToPopulate; i += blockDim.x * gridDim.x)
    {
        outMatrix[i] = 'A';
    }
}

That way your kernel emits multiple outputs per thread, and everything should work as it was intended to.
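If you would rather not hard-code the "16" at all, the CUDA runtime (6.5 and later) also exposes an occupancy API that suggests launch parameters for a given kernel. A sketch of how it could pair with the grid-stride kernel above; `launchPopulate` is a hypothetical wrapper, and `dev_outMatrix` / `pointsToPopulate` are as in the question:

```cuda
#include <algorithm>

// Sketch: let the runtime suggest a block size and the minimum grid size
// that achieves full occupancy, then cap the grid at one-pass coverage.
// The grid-stride loop in the kernel mops up any remainder either way.
void launchPopulate(char *dev_outMatrix, int pointsToPopulate)
{
    int minGridSize = 0; // smallest grid achieving maximum occupancy
    int blockSize = 0;   // suggested threads per block
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize,
                                       populateMatrixKernel, 0, 0);

    // Never launch more blocks than are needed to touch each point once.
    int gridX = (pointsToPopulate + blockSize - 1) / blockSize;
    int gridSize = std::min(minGridSize, gridX);

    populateMatrixKernel<<<gridSize, blockSize>>>(dev_outMatrix, pointsToPopulate);
}
```

This replaces the hand-tuned `16 * deviceProp.multiProcessorCount` cap with a value derived from the kernel's actual resource usage on the installed GPU.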
https://stackoverflow.com/questions/43246191