Tutorial for Julia on the HPC with GPUs
February 23 2016 in CUDA, HPC, Julia, Programming | Tags: CUDA, gpu, HPC, julia | Author: Christopher Rackauckas
This is a continuation of my previous post on using Julia on the XSEDE Comet HPC. Check that out first for an explanation of the problem. In that problem, we wished to solve for the area of a region where a polynomial was less than 1, which was calculated by code like:
@time res = @sync @parallel (+) for i = imin:dx:imax
  tmp = 0
  isq2 = i*i; isq3 = i*isq2; isq4 = isq2*isq2; isq5 = i*isq4
  isq6 = isq4*isq2; isq7 = i*isq6; isq8 = isq4*isq4
  @simd for j=jmin:dx:jmax
    jsq2 = j*j; jsq3 = j*jsq2; jsq4 = jsq2*jsq2; jsq5 = j*jsq4; jsq6 = jsq2*jsq4; jsq7 = j*jsq6; jsq8 = jsq4*jsq4
    @inbounds tmp += abs(coefs[1]*(jsq2) + coefs[2]*(jsq3) + coefs[3]*(jsq4) + coefs[4]*(jsq5) + coefs[5]*jsq6 +
      coefs[6]*jsq7 + coefs[7]*jsq8 + coefs[8]*(i) + coefs[9]*(i)*(jsq2) + coefs[10]*i*jsq3 +
      coefs[11]*(i)*(jsq4) + coefs[12]*i*jsq5 + coefs[13]*(i)*(jsq6) + coefs[14]*i*jsq7 +
      coefs[15]*(isq2) + coefs[16]*(isq2)*(jsq2) + coefs[17]*isq2*jsq3 + coefs[18]*(isq2)*(jsq4) +
      coefs[19]*isq2*jsq5 + coefs[20]*(isq2)*(jsq6) + coefs[21]*(isq3) + coefs[22]*(isq3)*(jsq2) +
      coefs[23]*isq3*jsq3 + coefs[24]*(isq3)*(jsq4) + coefs[25]*isq3*jsq5 + coefs[26]*(isq4) +
      coefs[27]*(isq4)*(jsq2) + coefs[28]*isq4*jsq3 + coefs[29]*(isq4)*(jsq4) + coefs[30]*(isq5) +
      coefs[31]*(isq5)*(jsq2) + coefs[32]*isq5*jsq3 + coefs[33]*(isq6) + coefs[34]*(isq6)*(jsq2) +
      coefs[35]*(isq7) + coefs[36]*(isq8)) < 1
  end
  tmp
end
res = res*((imax-imin)*(jmax-jmin)/(length(imin:dx:imax)*length(jmin:dx:jmax)))
println(res)
Notice that this code is massively parallel: at every point we do something and we just sum the values. This means it’s a perfect problem for the GPU. I am going to show you how to do this.
Getting CUDArt Setup
We will be using the CUDArt.jl package. I first had this working with the CUDA.jl package but was informed that going forward we should be using the CUDArt.jl package. There’s really not much of a difference in my implementation.
To get started, you have to go here and install an appropriate version of CUDA. Note that you have to have an NVIDIA GPU in order to do this. I will be using a GTX970 on one computer (Windows), and a GTX980Ti on another. Comet has a queue with 2x Tesla K80s, but that's probably overkill and you should try to develop locally first. Once you have all of that set up, CUDArt.jl should compile some .ptx files upon first use. If it does, you're good to go.
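If you want a quick sanity check that the install worked, something like the following should do it. This is my own snippet, not from the package docs; it only uses the CUDArt calls that appear later in this post:

# Sanity check: install CUDArt and make sure it can see and initialize the GPU(s)
Pkg.add("CUDArt")
using CUDArt                     # the first `using` is when the .ptx utility files get compiled
CUDArt.init(devices(dev->true))  # initialize every device that passes the filter
println(devices(dev->true))      # prints the device numbers CUDArt found
CUDArt.close(devices(dev->true))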
CUDArt.jl’s page shows you how to get started. However, I am going to add a little caveat. In reality, this is the objective function in a machine learning problem, and so we wish to be able to recompute this as quickly as possible just by swapping out the coefficient array coefs. Therefore all of the other arrays must persist on the GPU.
Here’s what we’re going to do. To maximize performance on the GPU we will use Float32s. On most GPUs (all except the Teslas and the Titan Black), double-precision floating point throughput is severely cut down in comparison (about 1/32 of single precision). To do this, we set up our calculations as follows:
## Setup CPU-side parameters
imin = -8
imax = 1
jmin = -3
jmax = 3
@time coefs,powz,poww = getCoefficients(A0,A1,B0,B1,α,β1,β2,β3,β4)
iarr = convert(Vector{Float32},collect(imin:dx:imax))
jarr = convert(Vector{Float32},collect(jmin:dx:jmax))
sizei = length(imin:dx:imax)
sizej = length(jmin:dx:jmax)
cudaCores = 1664
equalDiv = sizei*sizej÷cudaCores + 1

## GPU Code
CUDArt.init(devices(dev->true)) # Initiate the devices
dev = device(0)                 # Set the GPU to the first GPU
# Move the arrays over
g_iarr  = CudaArray(iarr)
g_jarr  = CudaArray(jarr)
g_coefs = CudaArray(coefs)
g_tmp   = CudaArray(Int32,cudaCores)
Note that if you have multiple GPUs and don’t know which one is which device, you can use the command CUDArt.name(CUDArt.device_properties(number)). Here CUDArt.name(CUDArt.device_properties(0)) returns “GeForce GTX 970” and CUDArt.name(CUDArt.device_properties(1)) returns “GeForce GTX 750 Ti”, so I use device(0) to use the 970. Note that you can change the device with this command at any time, but be careful since CudaArrays are stored on the device they were created on. If you want to use multiple GPUs, you will also need to split up your arrays.
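For example, a tiny loop like this (my own convenience snippet built from those same two calls) prints every device number with its name so you can pick the right one:

for n in devices(dev->true)      # loop over the available device numbers
    println(n, " => ", CUDArt.name(CUDArt.device_properties(n)))
end
dev = device(0)                  # e.g. select the GTX 970 found at index 0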
The CUDA Kernel
Now the last thing I need to do is make my function. To do this, we have to go to C. Here is the CUDA kernel for the calculation:
// filename: integration.cu
// Performs the inner integration loop
extern "C"
{
    __global__ void integration(const float *coefs, const float *iArr, const float *jArr,
                                const int sizei, const int sizej, const int equalDiv, int *tmp)
    {
        int index = threadIdx.x + blockIdx.x * blockDim.x;
        int loopInd;
        float i, j, isq2, isq3, isq4, isq5, isq6, isq7, isq8,
              jsq2, jsq3, jsq4, jsq5, jsq6, jsq7, jsq8;
        int ans = 0;
        for(loopInd=0;loopInd<equalDiv;loopInd=loopInd+1){
            if(index*equalDiv+loopInd >= sizei*sizej){
                break;
            }
            i = iArr[(index*equalDiv+loopInd)/sizej];
            j = jArr[(index*equalDiv+loopInd)%sizej];
            if((index*equalDiv+loopInd)%sizej==0 || loopInd==0){
                isq2 = i*i; isq3 = i*isq2; isq4 = isq2*isq2; isq5 = i*isq4;
                isq6 = isq4*isq2; isq7 = i*isq6; isq8 = isq4*isq4;
            }
            jsq2 = j*j; jsq3 = j*jsq2; jsq4 = jsq2*jsq2; jsq5 = j*jsq4;
            jsq6 = jsq2*jsq4; jsq7 = j*jsq6; jsq8 = jsq4*jsq4;
            /* coefs indices changed to zero-based indexing for C */
            ans = ans + (abs(coefs[0]*(jsq2) + coefs[1]*(jsq3) + coefs[2]*(jsq4) + coefs[3]*(jsq5) + coefs[4]*jsq6 +
                  coefs[5]*jsq7 + coefs[6]*jsq8 + coefs[7]*(i) + coefs[8]*(i)*(jsq2) + coefs[9]*i*jsq3 +
                  coefs[10]*(i)*(jsq4) + coefs[11]*i*jsq5 + coefs[12]*(i)*(jsq6) + coefs[13]*i*jsq7 +
                  coefs[14]*(isq2) + coefs[15]*(isq2)*(jsq2) + coefs[16]*isq2*jsq3 + coefs[17]*(isq2)*(jsq4) +
                  coefs[18]*isq2*jsq5 + coefs[19]*(isq2)*(jsq6) + coefs[20]*(isq3) + coefs[21]*(isq3)*(jsq2) +
                  coefs[22]*isq3*jsq3 + coefs[23]*(isq3)*(jsq4) + coefs[24]*isq3*jsq5 + coefs[25]*(isq4) +
                  coefs[26]*(isq4)*(jsq2) + coefs[27]*isq4*jsq3 + coefs[28]*(isq4)*(jsq4) + coefs[29]*(isq5) +
                  coefs[30]*(isq5)*(jsq2) + coefs[31]*isq5*jsq3 + coefs[32]*(isq6) + coefs[33]*(isq6)*(jsq2) +
                  coefs[34]*(isq7) + coefs[35]*(isq8))<1);
        }
        tmp[index] = ans;
    }
}
Okay, that’s not nice looking at all, but it’s the same idea as the Julia script. Let me explain this code a little bit. Our function is integration. It takes in all of the variables and an array tmp which we will save the output to. The first line calculates a unique index for each CUDA core. Recall that we set up equalDiv to be the number of calculations each core should do so that the work is evenly divided. Thus we loop equalDiv times (and break if we go over, since the division will not necessarily come out perfectly). This means that each CUDA core will calculate the same number of $$(i,j)$$’s. We then use the calculated index and the loop index to get our $$i$$ and $$j$$. We index down $$j$$ first, then go by $$i$$’s, so the ordering is $$(0,0),(0,1),\ldots,(0,J),(1,0),\ldots$$. We achieve this by integer division by the size of the $$j$$ array (truncation makes it an integer) to get the $$i$$ index, and by taking the remainder (mod) by the size of the $$j$$ array to get the $$j$$ index. Using these $$i$$’s and $$j$$’s, we calculate the inner part of the loop. Notice that the indices into coefs had to be shifted because C uses 0-based indexing (as opposed to Julia’s 1-based indexing). Note that we can use the abs function from the CUDA math library; all of the CUDA math library functions are available.
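If the index arithmetic is hard to picture, here is a toy Julia illustration of the same mapping (my own example, zero-based to match the C code): the global point number k = index*equalDiv + loopInd gives the $$i$$ index by integer division and the $$j$$ index by the remainder, so $$j$$ varies fastest.

# Toy illustration of the kernel's index -> (i,j) mapping (zero-based, like the C code)
sizej = 4                        # pretend the j array has 4 entries
for k in 0:2*sizej-1             # k plays the role of index*equalDiv + loopInd
    println("k = $k  ->  i index = $(k ÷ sizej), j index = $(k % sizej)")
end
# prints (0,0),(0,1),(0,2),(0,3),(1,0),(1,1),...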
[Note that when doing CUDA programming you want to allocate as little memory as possible. This is still on the small side so it worked out to be the optimal code. However, in some cases the non-unraveled version may be better due to the overhead of memory allocation.]
So there’s a template for you to play around with. When all of that is in order, we have to compile the kernel. On Windows with Visual Studio, I had to find out where cl.exe was (for me it was in Visual Studio 12’s directory) and tell the compiler its location. The total command was:
nvcc -ptx integration.cu -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin"
On Linux it was simpler:
nvcc -ptx integration.cu
From this you get .ptx files. To use the function in Julia, we then use the following code:
md = CuModule("integrationWin.ptx",false)
integrationFunc = CuFunction(md,"integration")
@time launch(integrationFunc, cudaCores, 64, (g_coefs, g_iarr, g_jarr, sizei,sizej,equalDiv,g_tmp,));
res = sum(to_host(g_tmp));
res = res*((imax-imin)*(jmax-jmin)/(length(imin:dx:imax)*length(jmin:dx:jmax)))
println(res)
What this does is take the “integration” function out of the .ptx and save it as integrationFunc. Note that I renamed the .ptx to integrationWin.ptx so that I could also have a Linux-compiled version, integrationLin.ptx. My main code has an if statement that checks the operating system and uses the appropriate .ptx; here I am just using the Windows version. I can then call the kernel at any time with the launch command. The second and third arguments are the grid and block sizes. In CUDA you always want to “overload”/overthread the cores so that there is no downtime. What I have found in my tests is that the best setting is to make the grid size the number of CUDA cores (since in our case these aren’t communicating) and to overload it by setting the block size to 64. Why 64? That worked out best in testing. Change the numbers around and see what works best for you.
The last argument to launch is the tuple of arguments for our integration function. Notice that there is a trailing comma; this is required. After the computation is finished, the results have been saved directly into g_tmp. Thus we send g_tmp to the host and sum up the results as before. We re-scale by the area of integration and that is the result.
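If you want to experiment with the grid/block sizes yourself, a crude timing loop like the one below works. This is my own sketch, not from the original post; the to_host call copies the results back on the default stream, so the timing includes waiting for the kernel to finish.

for blockSize in (32, 64, 128, 256)
    @time begin
        launch(integrationFunc, cudaCores, blockSize,
               (g_coefs, g_iarr, g_jarr, sizei, sizej, equalDiv, g_tmp,))
        to_host(g_tmp)   # the device-to-host copy waits for the kernel to finish
    end
end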
The end result is that the code solves the integration in ~.00005 seconds on a GTX970 (Core i7 4770k) and ~.00001 on a GTX980Ti, so this is a major improvement over the 2-node, 48-core CPU implementation on the Comet HPC. I noticed that when using a GTX970 on a computer with an AMD FX8530 I got ~.000015 seconds, so the host computer also affects the graphics card performance to some degree. Still, this is a massive speedup over the .22 seconds I had on the cluster. Again, this problem is essentially a perfect example problem for GPU computing, so on your problem the speedup may not be this large. If it is, great!
Now in the actual objective function, what I want to do is update the coefficients to a new coefficients array, perform this calculation, and then move on. To do this I simply use the code:
if gpuEnabled
  g_coefs = CudaArray(coefs)
  launch(integrationFunc, cudaCores, 64, (g_coefs, g_iarr, g_jarr, sizei,sizej,equalDiv,g_tmp,))
  res = sum(to_host(g_tmp))
  free(g_coefs)
else
  ...
Notice I send coefs over to the GPU, run the kernel to get the answer, and then free up the GPU memory. So in my code I loop over this many times, changing coefs every time. When the entire computation is done, I finish with the command
CUDArt.close(devices(dev->true))
This will clear the GPU.
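One pattern I’d suggest (my own habit, not something CUDArt requires) is wrapping the whole optimization loop in a try/finally so the devices get cleared even if something throws partway through:

CUDArt.init(devices(dev->true))
try
    for iter in 1:numIters                     # numIters is a hypothetical loop count
        g_coefs = CudaArray(coefs)             # push the current coefficients to the GPU
        launch(integrationFunc, cudaCores, 64,
               (g_coefs, g_iarr, g_jarr, sizei, sizej, equalDiv, g_tmp,))
        res = sum(to_host(g_tmp))
        free(g_coefs)
        # ... use res to update coefs ...
    end
finally
    CUDArt.close(devices(dev->true))           # always clear the GPU
end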
Comet?
These same steps will work on Comet. You will need to go into an interactive job to compile, but it should work fine. The job script to run Julia on the GPU node is:
#!/bin/bash
#SBATCH -A <account>
#SBATCH --job-name="jOpt"
#SBATCH --output="output/jOpt.%j.%N.out"
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --export=ALL
#SBATCH --ntasks-per-node=6
#SBATCH -t 48:00:00
module load cuda
export SLURM_NODEFILE=`generate_pbs_nodefile`
/home/crackauc/julia-a2f713dea5/bin/julia --machinefile $SLURM_NODEFILE /home/crackauc/projectCode/ImprovedSRK/Optimization/cometDriver.jl
Here I am only taking 6 tasks on the node (--ntasks-per-node=6) for the other computations, and I tell it to use the GPU queue with -p gpu and that I want only one GPU resource with --gres=gpu:1. If you want to use multiple GPUs, just change that number. However, you will then want to split up the arrays amongst the multiple GPUs, change equalDiv to account for each CUDA core doing fewer calculations, send an integer to the GPU to tell it which GPU it is, and use that integer to change the indexing so that no calculations are repeated. That sounds like a good exercise; a rough sketch follows below. Remember, the first time you run it you will need to load the cmake module and run Pkg.add("CUDArt") to install the CUDA Runtime package.
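Here is a very rough, untested sketch of that exercise for two GPUs. It is my own variant of the idea above: rather than sending a GPU index into the kernel, I simply split the i array in half so each device integrates its own strip, and I reload the .ptx module on each device since modules live per device.

halves  = [iarr[1:div(end,2)], iarr[div(end,2)+1:end]]    # split the i range between the GPUs
partial = 0
for n in 0:1
    device(n)                                      # everything below lives on device n
    md_n   = CuModule("integrationWin.ptx",false)  # the module must be loaded per device
    func_n = CuFunction(md_n,"integration")
    g_i    = CudaArray(halves[n+1])
    g_j    = CudaArray(jarr)
    g_c    = CudaArray(coefs)
    g_out  = CudaArray(Int32,cudaCores)
    eqDiv  = length(halves[n+1])*sizej÷cudaCores + 1   # fewer points per core now
    launch(func_n, cudaCores, 64,
           (g_c, g_i, g_j, length(halves[n+1]), sizej, eqDiv, g_out,))
    partial += sum(to_host(g_out))
    free(g_i); free(g_j); free(g_c); free(g_out)
end
res = partial*((imax-imin)*(jmax-jmin)/(sizei*sizej))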
Edit: Externalizing the Function Call
I am editing to show how you can externalize the function call. Notice that our CUDA kernel does two things: it splits the work evenly between cores and runs an “inner function”. Thus, for readability, let’s externalize that function call. Note that since CUDA inlines functions by default there will be no performance penalty for doing so. All we have to do is write a standard C-style function but add __device__ in front of it. This tells the compiler that this function will be compiled for use on the GPU (actually, inlined into the kernels). The full kernel code with the externalized function is then as follows:
// filename: integration.cu
// Performs the inner integration loop
extern "C"
{
    __device__ int innerFunc(const float *coefs,
                             const float i, const float isq2, const float isq3, const float isq4,
                             const float isq5, const float isq6, const float isq7, const float isq8,
                             const float j, const float jsq2, const float jsq3, const float jsq4,
                             const float jsq5, const float jsq6, const float jsq7, const float jsq8)
    {
        return abs(coefs[0]*(jsq2) + coefs[1]*(jsq3) + coefs[2]*(jsq4) + coefs[3]*(jsq5) + coefs[4]*jsq6 +
               coefs[5]*jsq7 + coefs[6]*jsq8 + coefs[7]*(i) + coefs[8]*(i)*(jsq2) + coefs[9]*i*jsq3 +
               coefs[10]*(i)*(jsq4) + coefs[11]*i*jsq5 + coefs[12]*(i)*(jsq6) + coefs[13]*i*jsq7 +
               coefs[14]*(isq2) + coefs[15]*(isq2)*(jsq2) + coefs[16]*isq2*jsq3 + coefs[17]*(isq2)*(jsq4) +
               coefs[18]*isq2*jsq5 + coefs[19]*(isq2)*(jsq6) + coefs[20]*(isq3) + coefs[21]*(isq3)*(jsq2) +
               coefs[22]*isq3*jsq3 + coefs[23]*(isq3)*(jsq4) + coefs[24]*isq3*jsq5 + coefs[25]*(isq4) +
               coefs[26]*(isq4)*(jsq2) + coefs[27]*isq4*jsq3 + coefs[28]*(isq4)*(jsq4) + coefs[29]*(isq5) +
               coefs[30]*(isq5)*(jsq2) + coefs[31]*isq5*jsq3 + coefs[32]*(isq6) + coefs[33]*(isq6)*(jsq2) +
               coefs[34]*(isq7) + coefs[35]*(isq8))<1;
    }

    __global__ void integration(const float *coefs, const float *iArr, const float *jArr,
                                const int sizei, const int sizej, const int equalDiv, int *tmp)
    {
        int index = threadIdx.x + blockIdx.x * blockDim.x;
        int loopInd;
        float i, j, isq2, isq3, isq4, isq5, isq6, isq7, isq8,
              jsq2, jsq3, jsq4, jsq5, jsq6, jsq7, jsq8;
        int ans = 0;
        for(loopInd=0;loopInd<equalDiv;loopInd=loopInd+1){
            if(index*equalDiv+loopInd >= sizei*sizej){
                break;
            }
            i = iArr[(index*equalDiv+loopInd)/sizej];
            j = jArr[(index*equalDiv+loopInd)%sizej];
            if((index*equalDiv+loopInd)%sizej==0 || loopInd==0){
                isq2 = i*i; isq3 = i*isq2; isq4 = isq2*isq2; isq5 = i*isq4;
                isq6 = isq4*isq2; isq7 = i*isq6; isq8 = isq4*isq4;
            }
            jsq2 = j*j; jsq3 = j*jsq2; jsq4 = jsq2*jsq2; jsq5 = j*jsq4;
            jsq6 = jsq2*jsq4; jsq7 = j*jsq6; jsq8 = jsq4*jsq4;
            ans = ans + innerFunc(coefs,i,isq2,isq3,isq4,isq5,isq6,isq7,isq8,j,jsq2,jsq3,jsq4,jsq5,jsq6,jsq7,jsq8);
        }
        tmp[index] = ans;
    }
}
I double-checked and indeed there was no performance difference between this code and the one before. However, now it’s clear that we are simply equally dividing up a loop and calling innerFunc at each point. Next time I will show how to divide this up between multiple GPUs.
Ismael Venegas Castelló says:
There is a still-unregistered package, Benchmarks.jl by John Myles White, that has a @benchmark macro, better than @time!
Pkg.clone("https://github.com/johnmyleswhite/Benchmarks.jl.git")
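From my reading of that package's README, usage is roughly as follows (a hedged sketch; double-check against the package docs since it was unregistered at the time):

using Benchmarks
@benchmark sum(rand(1000))   # runs the expression repeatedly and reports timing statistics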