While optimizing a kernel, I tried to tell the compiler the maximum number of registers each thread may use. I have a grid of 1300 points that I can split arbitrarily into blocks that work concurrently. Given that my CUDA device (a GTX 460, compute capability 2.1) offers 32,768 registers per SM, my arithmetic says that at most two resident blocks of 672 threads each leave

32,768 / 1344 ≈ 24

registers per thread.
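As a quick sanity check of that arithmetic, here is a minimal host-side sketch (all numbers are the ones quoted above; nothing is queried from the device):

#include <cstdio>

int main()
{
    // Register budget on one compute-capability-2.1 SM (GTX 460).
    const int regsPerSM       = 32768;
    const int threadsPerBlock = 672;
    const int blocksPerSM     = 2;

    // Integer division floors 32768 / 1344 down to 24.
    int regsPerThread = regsPerSM / (blocksPerSM * threadsPerBlock);
    std::printf("at most %d registers per thread\n", regsPerThread);
    return 0;
}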
Compiling my kernel as

__global__ void
__launch_bounds__(672, 2)
moduleB3(...)

results in:
ptxas : info : Compiling entry function '_Z8moduleB3PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_PiS_S_S_S_S_S_ffffiffffiiffii' for 'sm_20'
ptxas : info : Function properties for _Z8moduleB3PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_PiS_S_S_S_S_S_ffffiffffiiffii
48 bytes stack frame, 84 bytes spill stores, 44 bytes spill loads
ptxas : info : Used 20 registers, 184 bytes cmem[0], 24 bytes cmem[16]

When I do not give __launch_bounds__(), register usage is much higher. I actually have several kernels, and every one of them tops out at 20 registers, while I suspected the limit should be 24. Is there an educated guess as to where my reasoning goes wrong?
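(As an aside: resource reports like the one above come from ptxas's verbose mode, which is presumably how this output was produced; something along the lines of

nvcc -arch=sm_21 -Xptxas=-v -c mykernels.cu

where the file name and architecture flag are placeholders.)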
EDIT: The problem is that register usage drops when launch bounds are specified. Here is the compiler output without launch bounds:
ptxas : info : Compiling entry function '_Z10moduleA2_1PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_fffffffiiii' for 'sm_21'
ptxas : info : Function properties for _Z10moduleA2_1PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_fffffffiiii
0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas : info : Used 56 registers, 140 bytes cmem[0], 40 bytes cmem[16]

And here with __launch_bounds__(672, 2):
ptxas : info : Compiling entry function '_Z10moduleA2_1PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_fffffffiiii' for 'sm_21'
ptxas : info : Function properties for _Z10moduleA2_1PfS_S_S_S_S_S_S_S_S_S_S_S_S_S_S_fffffffiiii
120 bytes stack frame, 156 bytes spill stores, 124 bytes spill loads
ptxas : info : Used 20 registers, 140 bytes cmem[0], 40 bytes cmem[16]

As far as I can tell, the compiler would rather use more registers but cannot because of resource limits. Yet the registers used do not add up to the available 32,768. As computed above, the cap should be 24 registers per thread. I could understand the compiler settling on a lower count for some kernels, but none of mine (all of which use far more registers without launch bounds) requests more than 20.
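For what it's worth, the budget still seems to hold once allocation granularity is taken into account (the figures below are my assumption, taken from NVIDIA's occupancy calculator for compute capability 2.x, where registers are allocated per warp in units of 64): 24 registers × 32 threads = 768 registers per warp, which is already a multiple of 64; with 672 / 32 = 21 warps per block that is 768 × 21 = 16,128 registers per block, and 2 × 16,128 = 32,256 ≤ 32,768. So 24 registers per thread should fit, and the drop to 20 is not explained by rounding.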
I don't think posting the kernels helps much, but of course you can have a look. Here is (hopefully) the simplest one:
__global__ void
__launch_bounds__(672, 2)
moduleA2_1(float *d_t, float *d_x, float *d_p, float *d_rho, float *d_b, float *d_u,
float *d_ua, float *d_us, float *d_qa, float *d_qs, float *d_dlna,
float *d_cs, float *d_va, float *d_ma, float *d_uc2, float *d_rhs,
float k_b, float m_h, float gamma, float PI, float Gmsol, float r_sol, float fourpoint_constant, int radius, int numNodes, int numBlocks_A2_1, int numGridsPerSM)
{
int idx, idg, ids;
//input
float t, p, rho, b, u, ua, us, qa, qs, dlna;
//output
float a2, cs, va, ms, ma, vs12, vs22, uc2, dlna2, rhs;
extern volatile __shared__ float smemA21[];
float volatile *s_lna2;
s_lna2 = &smemA21[0];
ids = blockIdx.x / numBlocks_A2_1;
idx = (blockIdx.x % numBlocks_A2_1) * (blockDim.x - 2*radius) + threadIdx.x - radius;
idg = numGridsPerSM * ids;
while(idg < numGridsPerSM * (ids + 1))
{
if(idx >= 0 && idx < numNodes)
{
t = d_t[idg * numNodes + idx];
p = d_p[idg * numNodes + idx];
rho = d_rho[idg * numNodes + idx];
b = d_b[idg * numNodes + idx];
u = d_u[idg * numNodes + idx];
ua = d_ua[idg * numNodes + idx];
us = d_us[idg * numNodes + idx];
qa = d_qa[idg * numNodes + idx];
qs = d_qs[idg * numNodes + idx];
dlna = d_dlna[idg * numNodes + idx];
}
//computeA2(i); // isothermal sound speed (squared)
a2 = k_b / m_h * t;
//computeLna2(i);
s_lna2[threadIdx.x] = (float)log(a2);
//computeCs(i); // adiabatic sound speed
cs = gamma * p / rho;
d_checkInf(&cs);
cs = sqrt(cs);
//computeVa(i); // Alfven speed
va = b / (float)sqrt(4*PI*1E-7*rho);
d_checkInf(&va);
//computeMs(i); // sonic Mach number
ms = u / cs;
d_checkInf(&ms);
if(ms < FLT_MIN)
ms = FLT_MIN;
//computeMa(i); // Alfven Mach number
ma = u / va;
d_checkInf(&ma);
if(ma < FLT_MIN)
ma = FLT_MIN;
//computeUc2(i); // critical speed (squared)
uc2 = a2 + ua / (4 * rho) * (1 + 3 * ma)/(1 + ma) + 8 * us / (3 * rho) * (ms)/(1 + ms);
//computeVs12(i); // support value 1
vs12 = us / (3 * rho) * (1 - 7 * ms)/(1 + ms);
//computeVs22(i); // support value 2
vs22 = 4 * us / (3 * rho) * (ms - 1)/(ms + 1);
__syncthreads();
//fourpointLna2(i);
if((threadIdx.x > radius-1) && (threadIdx.x < blockDim.x - radius) && (idx < numNodes))
{
if (idx == 0) // FO-forward difference
dlna2 = (s_lna2[threadIdx.x+1] - s_lna2[threadIdx.x])/(d_x[idg * numNodes + idx+1] - d_x[idg * numNodes + idx]);
else if (idx == numNodes - 1) // FO-rearward difference
dlna2 = (s_lna2[threadIdx.x] - s_lna2[threadIdx.x-1])/(d_x[idg * numNodes + idx] - d_x[idg * numNodes + idx-1]);
else if (idx == 1 || idx == numNodes - 2) //SO-central difference
dlna2 = (s_lna2[threadIdx.x+1] - s_lna2[threadIdx.x-1])/(d_x[idg * numNodes + idx+1] - d_x[idg * numNodes + idx-1]);
else if(idx > 1 && idx < numNodes - 2 && threadIdx.x > 1 && threadIdx.x < blockDim.x - 2)
dlna2 = fourpoint_constant * ((s_lna2[threadIdx.x+1] - s_lna2[threadIdx.x-1])/(d_x[idg * numNodes + idx+1] - d_x[idg * numNodes + idx-1])) + (1-fourpoint_constant) * ((s_lna2[threadIdx.x+2] - s_lna2[threadIdx.x-2])/(d_x[idg * numNodes + idx+2] - d_x[idg * numNodes + idx-2]));
else
dlna2 = 0;
}
//par_computeRhs();
if(idx >= 0 && idx < numNodes)
{
if (u == 0)
rhs = - Gmsol / (float)pow(d_x[idg * numNodes + idx] + r_sol, 2) + (uc2 + vs12) * dlna - (a2 + vs22) * dlna2;
else
rhs = - Gmsol / (float)pow(d_x[idg * numNodes + idx] + r_sol, 2) + (uc2 + vs12) * dlna - (a2 + vs22) * dlna2 + 1 / rho * (qa / (2.0f*(u + va)) + 4.0f * qs / (3.0f*(u + cs)));
}
//par_calcSurfaceValues();
if(threadIdx.x > radius-1 && threadIdx.x < blockDim.x - radius && idx < numNodes)
{
d_cs[idg * numNodes + idx] = cs;
d_va[idg * numNodes + idx] = va;
d_ma[idg * numNodes + idx] = ma;
d_uc2[idg * numNodes + idx] = uc2;
d_rhs[idg * numNodes + idx] = rhs;
}
idg++;
}
}

Thank you for your time.
Posted on 2013-03-12 12:54:44
This looks like a bug in ptxas. As a workaround, you can compile the kernel to PTX and then, at the start of the kernel's code, change the lines

.maxntid 672, 1, 1
.minnctapersm 2

to

.maxnreg 24

and compile the PTX file. That gives you a kernel that really uses 24 registers.
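A minimal sketch of that round trip (file names and the architecture flag are placeholders; the PTX edit is exactly the one described above):

nvcc -arch=sm_21 -ptx moduleA2_1.cu -o moduleA2_1.ptx
(edit moduleA2_1.ptx: replace ".maxntid 672, 1, 1" and ".minnctapersm 2" with ".maxnreg 24")
ptxas -arch=sm_21 moduleA2_1.ptx -o moduleA2_1.cubin

The resulting cubin can then be loaded through the driver API (cuModuleLoad / cuModuleGetFunction) instead of launching the kernel through the runtime.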
By the way, it would be interesting to profile this kernel to see whether it really can run two blocks per SM, or whether there is some undocumented reason why that cannot be achieved.
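Short of a full profiling run, one way to double-check what was actually compiled is to query the kernel's attributes at runtime; a sketch, assuming the moduleA2_1 kernel from the question is visible in the same translation unit:

#include <cstdio>
#include <cuda_runtime.h>

void reportResources()
{
    cudaFuncAttributes attr;
    cudaFuncGetAttributes(&attr, moduleA2_1);  // runtime-API query of the compiled kernel
    std::printf("registers/thread : %d\n", attr.numRegs);
    std::printf("local mem/thread : %zu bytes\n", attr.localSizeBytes);  // spills live here
    std::printf("static shared mem: %zu bytes\n", attr.sharedSizeBytes);
    std::printf("max threads/block: %d\n", attr.maxThreadsPerBlock);     // reflects launch bounds
}

Whether two blocks are actually resident per SM would still need the profiler (e.g. the achieved-occupancy counter in the Visual Profiler).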
https://stackoverflow.com/questions/15339088