I have two versions of the same algorithm. It was originally a convolution, but I stripped it down to find where my bottleneck is (note that there is only one access to global memory per loop iteration):
```c
__global__ void convolve(unsigned char *Md, float *Kd, unsigned char *Rd,
                         int width, int height, int kernel_size,
                         int tile_width, int channels)
{
    int row = blockIdx.y * tile_width + threadIdx.y;
    int col = blockIdx.x * tile_width + threadIdx.x;
    int sum = 0;
    int pixel;
    int local_pixel;
    int working_pixel;
    int row_offset = (kernel_size/2) * (width + kernel_size - 1);
    int col_offset = kernel_size/2;
    for (int color = 0; color < channels; color++) {
        /* ... rest of the loop body was cut off in the original post ... */
    }
}
```

This is the shared-memory version (one shared-memory access per loop iteration):
```c
__global__ void convolve(unsigned char *Md, float *Kd, unsigned char *Rd,
                         int width, int height, int kernel_size,
                         int tile_width, int channels)
{
    __shared__ unsigned char Mds[256 + 16*4 + 4];
    int row = blockIdx.y * tile_width + threadIdx.y;
    int col = blockIdx.x * tile_width + threadIdx.x;
    if (row < height && col < width) {
        int sum = 0;
        int pixel;        // the pixel to copy from Md (the input image)
        int local_pixel;  // the pixel in shared memory
        int start_pixel;  // the offset to copy the borders
        int mds_width = tile_width + kernel_size - 1;
        int md_width = width + kernel_size - 1;
        int md_height = height + kernel_size - 1;
        for (int color = 0; color < channels; color++) {
            /* ... rest of the loop body was cut off in the original post ... */
        }
    }
}
```

The launch parameters are:
```c
convolve<<<dimGrid, dimBlock>>>(Md, Kd, Rd, width, new_height,
                                kernel_size, block_size, colors);
```

dimGrid = (1376, 768)
dimBlock = (16, 16)
Md is the read-only input image
Kd is the filter (3x3)
width = 22016
height = 12288
kernel_size = 3
block_size = 16
colors = 3

I get 1249.59 ms with the first algorithm and 1178.2 ms with the second, which I find ridiculous. I don't think the number of registers should be a problem. Compiling with ptxas I get:
ptxas info: 560 bytes gmem, 52 bytes cmem[14]
ptxas info: Compiling entry function '_Z8convolvePhPfS_iiiii' for 'sm_10'
ptxas info: Used 16 registers, 384 bytes smem, 4 bytes cmem[1]

And my device info is:
Name: GeForce GTX 660 Ti
Minor Compute Capability: 0
Major Compute Capability: 3
Warp Size: 32
Max Threads per Block: 1024
Max Threads Dimension: (1024, 1024, 64)
Max Grid Size: (2147483647, 65535, 65535)
Number of SMs: 7
Max Threads per SM: 2048
Regs per Block (SM): 65536
Total Global Memory: 2146762752
Shared Memory per Block: 49152

Does anyone have a hint about this poor improvement? I don't know who else to ask...
EDIT: I'm using a different NVIDIA card today because I can't get into the lab. It also has compute capability 3.0. I moved the two if statements outside the loops. I am compiling with -arch compute_30 -code sm_30. I removed all the casts. The global matrix is now declared as const unsigned char * __restrict__ Md. This time I used a 9x9 filter, so each pixel is reused 81 times after being brought into shared memory.
I get 3138.41 ms (global version) and 3120.96 ms (shared version) from the terminal. In the Visual Profiler it takes even longer. This is what I get (screenshot): http://cl.ly/image/1X372l242S2u
Still as lost as before...
The full algorithm is here, easy to compile and run:
http://cl.ly/213l2X3S1v3a
./convolution 8000 4000 159 9 edge_detection_9.txt 0 for the global memory version
./convolution 8000 4000 159 9 edge_detection_9.txt 1 for the shared memory version