The FASRC cluster has a number of nodes with NVIDIA Tesla general-purpose graphics processing units (GPGPUs) attached to them. CUDA tools can be used to run computational work on these GPUs, and in some use cases this yields very significant speedups.
Fifteen nodes with 4 V100 GPUs per node are available for general use through the
gpu partition; the remaining GPU nodes are owned by various research groups and may be available when idle through
gpu_requeue. FAS members have access to the
fas_gpu partition, which has 64 nodes with 2xK80s, 16 nodes with 2xK20Xm, and 8 nodes with 2xK20m. Direct access to these nodes by members of other groups is by special request; please visit the RC Portal and submit a help request for more information.
GPGPUs on SLURM
To request a single GPU in Slurm, add
#SBATCH --gres=gpu to your submission script and you will be given access to one GPU. To request multiple GPUs, add
#SBATCH --gres=gpu:n where 'n' is the number of GPUs. You can use this method to request CPUs and GPGPUs independently. So if you want 1 CPU and 2 GPUs from our general-use GPU nodes in the 'gpu' partition, you would specify:
#SBATCH -p gpu
#SBATCH -n 1
#SBATCH --gres=gpu:2
#SBATCH --gpu-freq=high
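Putting these together, a minimal sketch of a complete submission script might look like the following; the module version matches the one documented below, and my_gpu_app is a placeholder for your own executable:
#!/bin/bash
#SBATCH -p gpu                 # general-use GPU partition
#SBATCH -n 1                   # one CPU core
#SBATCH -t 0-06:00             # runtime in D-HH:MM
#SBATCH --mem=8000             # memory in MB
#SBATCH --gres=gpu:2           # two GPUs
#SBATCH --gpu-freq=high

module load cuda/9.0-fasrc02   # CUDA toolkit and runtime libraries
./my_gpu_app                   # placeholder for your CUDA-enabled program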
For an interactive session to work with the GPUs, you can use the following. Once on the GPU node, you can run
nvidia-smi to get information about the assigned GPUs.
srun --pty -p gpu -t 0-06:00 --mem 8000 --gres=gpu:1 /bin/bash
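Once the interactive shell starts on the GPU node, a quick sanity check might look like the following (Slurm sets CUDA_VISIBLE_DEVICES for the GPUs it granted to your job):
$ echo $CUDA_VISIBLE_DEVICES   # index/indices of the GPU(s) assigned to your job
$ nvidia-smi                   # driver version, GPU model, memory and utilization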
The current version of the Nvidia driver installed on all GPU-enabled nodes on the cluster is 396.26, which supports Cuda version 9.
To load the toolkit and additional runtime libraries (cublas, cufftw, ...) remember to always load the module for
cuda in your Slurm job script or interactive session.
$ module load cuda/9.0-fasrc02
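Once the module is loaded, you can confirm the toolkit is on your path and compile CUDA code with nvcc; saxpy.cu below is a hypothetical source file of your own:
$ nvcc --version           # should report the CUDA 9.0 toolkit
$ nvcc -o saxpy saxpy.cu   # compile a hypothetical CUDA source file
$ ./saxpy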
NOTE: In the past our Cuda installations were heterogeneous, and different nodes on the cluster provided different versions of the Cuda driver. For this reason you might have used the Slurm flag
--constraint=cuda-$version (for example --constraint=cuda-7.5) in your job submissions to specifically request nodes that supported that version.
This is no longer needed as our cuda modules are the same throughout the cluster, and you should remove those flags from your scripts.
Using CUDA-dependent modules
CUDA-dependent applications are accessed on the cluster in a manner that is similar to compilers and MPI libraries. For these applications, a CUDA module must first be loaded before the application module is available. For example, to use cuDNN, a CUDA-based neural network library from NVIDIA, load the cuda module first and then the cudnn module.
If you don't load the CUDA module first, the cuDNN module is not available:
$ module load cudnn/7.0_cuda9.0-fasrc01
Lmod has detected the following error:
The following module(s) are unknown: "cudnn/7.0_cuda9.0-fasrc01"
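Loading the cuda module first, using the versions shown on this page, makes the cuDNN module available:
$ module load cuda/9.0-fasrc02
$ module load cudnn/7.0_cuda9.0-fasrc01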
Use module-query or our user Portal to find available versions and how to load them.
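For example, something like the following should list the available cuDNN builds (exact output will vary):
$ module-query cudnn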
More information on software modules can be found here, and on how to run jobs here.
See an example of how to use the cuda module to install and use Tensorflow.