奇怪的cuBLAS gemm批处理性能
Stack Overflow user
Asked on 2018-01-30 11:04:38
1 answer · 2.1K views · 0 following · 4 votes

I noticed some strange performance with cublasSgemmStridedBatched, and I am looking for an explanation. The matrix size is fixed at 20x20. Here are the timings (multiplication only, no data transfer) for a few different batch sizes:

  • batch = 100, time = 0.2 ms
  • batch = 1,000, time = 1.9 ms
  • batch = 10,000, time = 18.3 ms
  • batch = 100,000, time = 5.3 ms
  • batch = 1,000,000, time = 52.8 ms

The first few batch sizes behave as I expected: the time grows roughly linearly as the batch size increases by a factor of 10. But at 100,000 matrices there is a sudden 3.4x speedup?

If the matrix size is instead fixed at 10x10 and the trial is run again, I find:

  • batch = 100, time = 0.2 ms
  • batch = 1,000, time = 2.0 ms
  • batch = 10,000, time = 20.0 ms
  • batch = 100,000, time = 0.9 ms
  • batch = 1,000,000, time = 8.9 ms

Again there is a surprising speedup, this time 22x, at a batch size of 100,000. I wonder why batch sizes of 1,000 and 10,000 are slower than a batch size of 100,000, given that the matrix size is still 10x10.

Are different algorithms used for different batch sizes? This performance seems very strange to me. When I run this trial with cublasSgemmBatched, similar results occur. The trials were run on a GeForce GTX 1080 Ti. Minimal working code follows:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cuda_runtime.h>
#include "cublas_v2.h"
// nvcc -lcublas cublas.c -o cublas.out

int main(int argc, char* argv[])
{
    int i, j, k, index;

    // Linear dimension of matrices
    int dim = 20;
    int batch_count = 100000;

    // Allocate host storage for batch_count A, B, C square matrices
    float* h_A = (float*)malloc(sizeof(float) * dim * dim * batch_count);
    float* h_B = (float*)malloc(sizeof(float) * dim * dim * batch_count);
    float* h_C = (float*)malloc(sizeof(float) * dim * dim * batch_count);
    for (k = 0; k < batch_count; k++) {
        for (j = 0; j < dim; j++) {
            for (i = 0; i < dim; i++) {
                index = i*dim + j + k*dim*dim;
                h_A[index] = index*index + 0.0f;
                h_B[index] = index + 1.0f;
                h_C[index] = 0.0f;
            }
        }
    }

    // Allocate device storage and copy the host data over
    // (destination first, and the direction is host-to-device)
    float *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, sizeof(float) * dim * dim * batch_count);
    cudaMalloc(&d_B, sizeof(float) * dim * dim * batch_count);
    cudaMalloc(&d_C, sizeof(float) * dim * dim * batch_count);
    cudaMemcpy(d_A, h_A, sizeof(float) * dim * dim * batch_count, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, sizeof(float) * dim * dim * batch_count, cudaMemcpyHostToDevice);
    cudaMemcpy(d_C, h_C, sizeof(float) * dim * dim * batch_count, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // Do the actual multiplication, timed with CUDA events
    float time_cuda_event;
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    float alpha = 1.0f;
    float beta = 1.0f;
    cublasSgemmStridedBatched(handle,
                              CUBLAS_OP_N,
                              CUBLAS_OP_N,
                              dim, dim, dim,
                              &alpha,
                              (const float*)d_A, dim,
                              dim*dim,
                              (const float*)d_B, dim,
                              dim*dim,
                              &beta,
                              d_C, dim,
                              dim*dim,
                              batch_count);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&time_cuda_event, start, stop);
    printf("Time :  %3.1f ms \n", time_cuda_event);

    cudaMemcpy(h_C, d_C, sizeof(float) * dim * dim * batch_count, cudaMemcpyDeviceToHost);

    // Destroy the handle and free resources
    cublasDestroy(handle);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_A);
    cudaFree(d_B);
    cudaFree(d_C);
    free(h_A);
    free(h_B);
    free(h_C);
    return 0;
}

1 Answer

Stack Overflow user

Accepted answer

Answered on 2018-01-30 12:50:54

This just appears to be the result of CUBLAS internal heuristics. If I run a modified (and working) version of your code, I get these timings for the 5x5 case:

Batch size :           10   Time :  0.019104 ms 
Batch size :          100   Time :  0.038304 ms 
Batch size :         1000   Time :  0.163520 ms 
Batch size :        10000   Time :  1.410944 ms 
Batch size :       100000   Time :  1.614144 ms 
Batch size :      1000000   Time :  16.057407 ms 

Profiling shows that up to a batch size of 10000 entries, the library runs a single kernel to service the call:

1.10759s  16.831us             (1 1 10)       (128 1 1)       120  12.250KB        0B         -           -           -           -  GeForce GTX 970         1         7  maxwell_sgemm_128x64_nn [3939]
1.10766s  19.168us            (1 1 100)       (128 1 1)       120  12.250KB        0B         -           -           -           -  GeForce GTX 970         1         7  maxwell_sgemm_128x64_nn [3971]
1.10773s  147.71us           (1 1 1000)       (128 1 1)       120  12.250KB        0B         -           -           -           -  GeForce GTX 970         1         7  maxwell_sgemm_128x64_nn [4003]
1.10791s  1.4064ms          (1 1 10000)       (128 1 1)       120  12.250KB        0B         -           -           -           -  GeForce GTX 970         1         7  maxwell_sgemm_128x64_nn [4035]

while at larger sizes it runs multiple launches of a different kernel to service the call:

1.10935s  1.1518ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4063]
1.11050s  606.54us          (1 1 34465)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4087]
1.11113s  1.1498ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4115]
1.11228s  1.1501ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4139]
1.11344s  1.1511ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4163]
1.11459s  1.1494ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4187]
1.11574s  1.1507ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4211]
1.11689s  1.1503ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4235]
1.11804s  1.1499ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4259]
1.11919s  1.1507ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4283]
1.12035s  1.1507ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4307]
1.12150s  1.1509ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4331]
1.12265s  1.1489ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4355]
1.12380s  1.1496ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4379]
1.12495s  1.1500ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4403]
1.12610s  1.1494ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4427]
1.12726s  1.1503ms          (1 1 65535)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4451]
1.12841s  299.35us          (1 1 16975)       (16 16 1)        31  2.1250KB        0B         -           -           -           -  GeForce GTX 970         1         7  void batch_gemm_kernel1x1_core<float, float, float, bool=0, bool=0, bool=0, bool=0, bool=0, bool=1, bool=1>(float* const *, float const * const *, float const * const *, float*, float const *, float const *, int, int, int, int, int, int, __int64, __int64, __int64, float const *, float const *, float, float, int, int) [4475]

The inconsistency you observe appears to be caused by the library switching from one kernel to another, presumably driven by some batch-size criterion. You can see that both kernels appear to use one block per batch entry, but the kernel used at larger sizes runs a 2D block of 256 threads, while the kernel used at smaller sizes runs a 1D block of 128 threads. Beyond that, the performance differences come down to internal implementation details. And even though it probably violates the end-user license conditions, if you want to know more you would need to disassemble the kernels and look at how they work. The toolkit contains all the tools required to do this, although I am not suggesting you actually do so.

3 votes
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/48519861
