The process for submitting GPU jobs on the mwgpu queue is described below. Until we gather more data and some experience, we have implemented the following simplistic model:
An “elim” has been written for the Lava scheduler that reports the number of available, idle GPUs on each node. To learn how to write and set up an “elim”, read this page: eLIM.
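The production elim is described on the eLIM page; purely as an illustration of the idea, a minimal sketch might look like the one below. It assumes nvidia-smi is present on the node and treats 0% utilization as idle; the real elim may collect its value differently.

<code bash>
#!/bin/bash
# elim.gpu -- illustrative sketch only (see the eLIM page for the real setup)
# An elim loops forever, printing "<nresources> <name> <value>" for the
# external resource it reports -- here, the number of idle GPUs on this node.
while true; do
  # count GPUs currently at 0% utilization (assumes nvidia-smi is installed)
  idle=$(nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits 2>/dev/null \
         | awk '$1 == 0 {n++} END {print n+0}')
  echo "1 gpu $idle"
  sleep 60
done
</code>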
We can view what the scheduler has access to using lsload and observe the idle GPUs. After we submit a job, the scheduler reserves a GPU; that reservation can be viewed with bhosts. Note that the GPU may still be idle at that point, since it can take a bit of time for the code to spin up and actually use the GPU.
<code>
[hmeij@sharptail sharptail]$ lsload -l n33
HOST_NAME  status  r15s  r1m   r15m  ut   pg   io   ls  it    tmp  swp  mem  gpu  <---
n33        ok      25.0  26.1  26.0  80%  5.0  710   3  1464  72G  25G  30G  4.0

[hmeij@sharptail sharptail]$ bsub < run.gpu
Job <23259> is submitted to queue <mwgpu>.

[hmeij@sharptail sharptail]$ bjobs
JOBID  USER   STAT  QUEUE  FROM_HOST  EXEC_HOST  JOB_NAME  SUBMIT_TIME
23259  hmeij  RUN   mwgpu  sharptail  n33        test      Aug 15

[hmeij@sharptail sharptail]$ bhosts -l n33
HOST  n33
STATUS  CPUF    JL/U  MAX  NJOBS  RUN  SSUSP  USUSP  RSV  DISPATCH_WINDOW
ok      100.00  -     28   5      5    0      0      0    -

CURRENT LOAD USED FOR SCHEDULING:
          r15s  r1m  r15m  ut   pg   io   ls  it    tmp  swp  mem     gpu
Total      0.1  0.1  0.1   64%  2.2  188   1  3028  72G  27G  56G     3.0
Reserved   0.0  0.0  0.0    0%  0.0    0   0     0   0M   0M  6144M   1.0  <---
</code>
With gpu-info we can view our running job. Both gpu-info and gpu-free are available at http://ambermd.org/gpus/ (I had to hard-code my GPU string information, since mine came in at 02, 03, 82 & 83; you can use deviceQuery to find yours).
<code>
[hmeij@sharptail sharptail]$ ssh n33 gpu-info
unloading gcc module
====================================================
Device  Model        Temperature  Utilization
====================================================
0       Tesla K20m   25 C          0 %
1       Tesla K20m   27 C          0 %
2       Tesla K20m   32 C         99 %
3       Tesla K20m   21 C          0 %
====================================================
</code>
This is obviously our job running on GPU instance '2'. But Amber reports that GPU instance '0' is being used. Here is what mdout reports:
<code>
|   CUDA Capable Devices Detected:     1
|           CUDA Device ID in use:     0
</code>
The reason for this is that the wrapper program uses CUDA_VISIBLE_DEVICES to mask the GPU instance IDs. The job is simply told that one GPU is available and that it is the first one, hence '0'. If all the GPUs are busy, it is therefore hard to tell which physical GPU your job is using. So inside the wrapper we grab the real GPU instance ID and report it to standard out. For this job, STDOUT reports it just before MPIRUN is started. (With Lava, look for STDOUT in a file called ~/.lsbatch/[0-9]*.LSF_JOBPID.out.)
<code>
GPU allocation instance n33:2

executing: /cm/shared/apps/mvapich2/gcc/64/1.6/bin/mpirun_rsh -ssh -hostfile \
 /home/hmeij/.lsbatch/mpi_machines -np 1 pmemd.cuda.MPI \
 -O -o mdout.23259 -inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10 -ref inpcrd
</code>
The code that assigns and handles CUDA_VISIBLE_DEVICES is shown in the lava.mvapich2.wrapper section below.
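In essence, the masking step boils down to something like the sketch below. This is not the actual wrapper: the idle-GPU detection via nvidia-smi and the use of LSB_DJOB_NUMPROC for the slot count are assumptions for illustration; refer to the lava.mvapich2.wrapper listing for the real logic.

<code bash>
# simplified sketch of the CUDA_VISIBLE_DEVICES handling (not the actual wrapper)
NGPUS=${LSB_DJOB_NUMPROC:-1}    # one GPU per allocated job slot (assumption)

# pick the first $NGPUS idle GPU instance ids (0% utilization, via nvidia-smi)
ids=$(nvidia-smi --query-gpu=index,utilization.gpu --format=csv,noheader,nounits \
      | awk -F', ' '$2 == 0 {print $1}' | head -n $NGPUS | paste -sd, -)

# mask: inside the job these devices are renumbered 0,1,... hence Amber reports '0'
export CUDA_VISIBLE_DEVICES=$ids

# report the real instance id(s) to STDOUT before mpirun is started
echo "GPU allocation instance $(hostname -s):$ids"
</code>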
The script below shows examples of how to run Amber and Lammps jobs. Please note that you should always reserve a GPU (gpu=1), otherwise jobs may crash and GPUs can become overcommitted. You may wish to perform a CPU-only run for comparison (gpu=0) or for debugging purposes, but you will then be limited to a maximum of 4 job slots per node; a CPU-only sketch follows the Lammps header below. Do not launch GPU-enabled software during such a run: pmemd.cuda.MPI is GPU-enabled, so invoke pmemd.MPI instead. In the case of Lammps you can toggle between the GPU-enabled and MPI-only code paths with an environment variable. Here is the header of the Lammps input file:
<code>
# Enable GPU code if variable is set.
if "(${GPUIDX} > 0)" then &
  "suffix gpu" &
  "newton off" &
  "package gpu force 0 ${GPUIDX} 1.0"
# "package gpu force 0 ${GPUIDX} 1.0 threads_per_atom 2"
echo both
</code>
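For the CPU-only comparison or debugging run mentioned above, the relevant changes to the job script would look roughly like the following sketch (the gpu=0 resource string mirrors the gpu=1 form used in the scripts below; treat it as illustrative, not a tested recipe).

<code bash>
# CPU-only variant (sketch): no GPU reserved, no GPU-enabled binaries launched
#BSUB -n 4
#BSUB -R "rusage[gpu=0:mem=6144],span[hosts=1]"   # gpu=0: cpu run, max 4 slots per node

# AMBER: use the MPI (non-CUDA) binary
lava.mvapich2.wrapper pmemd.MPI \
 -O -o mdout.$LSB_JOBID -inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10 -ref inpcrd

# LAMMPS: GPUIDX=0 leaves the gpu package disabled (see the input header above)
export GPUIDX=0
lava.mvapich2.wrapper lmp_nVidia \
 -c off -var GPUIDX $GPUIDX -in au.inp -l auout.$LSB_JOBID
</code>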
NAMD works slightly differently again and has everything built in; its hostfile is also a bit different. The wrapper will set up the environment, invoke NAMD with several preset flags, and then add whatever arguments you provide. Again, -n will match the number of GPUs allocated on a single node. For example:
<code>
GPU allocation instance n37:1,3,0

charmrun /cm/shared/apps/namd/ibverbs-smp-cuda/2013-06-02//namd2 \
 +p3 ++nodelist /home/hmeij/.lsbatch/mpi_machines +idlepoll +devices 1,3,0 \
 apoa1/apoa1.namd
</code>
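The nodelist handed to charmrun is not an MPI machinefile; for the single-node, three-GPU example above it would presumably contain something like the lines below (an assumption based on standard charmrun nodelist syntax, not a dump of the actual file).

<code>
group main
host n37
host n37
host n37
</code>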
<code>
#!/bin/bash
# submit via 'bsub < run.gpu'
rm -f mdout.[0-9]* auout.[0-9]*

#BSUB -e err
#BSUB -o out
#BSUB -q mwgpu
#BSUB -J test

## leave sufficient time between job submissions (30-60 secs)
## the number of GPUs allocated matches the -n value automatically
## always reserve a GPU (gpu=1), setting this to 0 is a cpu job only
## reserve 6144 MB (5 GB + 20%) memory per GPU
## run all processes (1<=n<=4) on the same node (hosts=1)
#BSUB -n 1
#BSUB -R "rusage[gpu=1:mem=6144],span[hosts=1]"

# unique job scratch dirs
MYSANSCRATCH=/sanscratch/$LSB_JOBID
MYLOCALSCRATCH=/localscratch/$LSB_JOBID
export MYSANSCRATCH MYLOCALSCRATCH
cd $MYSANSCRATCH

# AMBER
# stage the data
cp ~/sharptail/* .
# feed the wrapper
lava.mvapich2.wrapper pmemd.cuda.MPI \
 -O -o mdout.$LSB_JOBID -inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10 -ref inpcrd
# save results
cp mdout.[0-9]* ~/sharptail/

# LAMMPS
# GPUIDX=1 use allocated GPU(s), GPUIDX=0 cpu run only (view header of au.inp)
export GPUIDX=1
# stage the data
cp ~/sharptail/* .
# feed the wrapper
lava.mvapich2.wrapper lmp_nVidia \
 -c off -var GPUIDX $GPUIDX -in au.inp -l auout.$LSB_JOBID
# save results
cp auout.[0-9]* ~/sharptail/
</code>
<code>
#!/bin/bash
# submit via 'bsub < run.gpu'
rm -f mdout.[0-9]* auout.[0-9]* apoa1out.[0-9]*

#BSUB -e err
#BSUB -o out
#BSUB -q mwgpu
#BSUB -J test

## leave sufficient time between job submissions (30-60 secs)
## the number of GPUs allocated matches the -n value automatically
## always reserve a GPU (gpu=1), setting this to 0 is a cpu job only
## reserve 6144 MB (5 GB + 20%) memory per GPU
## run all processes (1<=n<=4) on the same node (hosts=1)
#BSUB -n 1
#BSUB -R "rusage[gpu=1:mem=6144],span[hosts=1]"

# unique job scratch dirs
MYSANSCRATCH=/sanscratch/$LSB_JOBID
MYLOCALSCRATCH=/localscratch/$LSB_JOBID
export MYSANSCRATCH MYLOCALSCRATCH
cd $MYSANSCRATCH

# AMBER
# stage the data
cp -r ~/sharptail/* .
# feed the wrapper
lava.mvapich2.wrapper pmemd.cuda.MPI \
 -O -o mdout.$LSB_JOBID -inf mdinfo.1K10 -x mdcrd.1K10 -r restrt.1K10 -ref inpcrd
# save results
cp mdout.$LSB_JOBID ~/sharptail/

# LAMMPS
# GPUIDX=1 use allocated GPU(s), GPUIDX=0 cpu run only (view header of au.inp)
export GPUIDX=1
# stage the data
cp -r ~/sharptail/* .
# feed the wrapper
lava.mvapich2.wrapper lmp_nVidia \
 -c off -var GPUIDX $GPUIDX -in au.inp -l auout.$LSB_JOBID
# save results
cp auout.$LSB_JOBID ~/sharptail/

# NAMD
# signal that this is a charmrun/namd job
export CHARMRUN=1
# stage the data
cp -r ~/sharptail/* .
# feed the wrapper
lava.mvapich2.wrapper \
 apoa1/apoa1.namd > apoa1out.$LSB_JOBID
# save results
cp apoa1out.$LSB_JOBID ~/sharptail/
</code>
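Once a job is running, the checks shown earlier can be combined to confirm which physical GPU it landed on; for example (the STDOUT path follows the Lava ~/.lsbatch/[0-9]*.LSF_JOBPID.out convention noted above):

<code bash>
# find the real GPU instance id(s) reported by the wrapper in the Lava STDOUT file
grep "GPU allocation instance" ~/.lsbatch/*.out

# confirm utilization on the execution host (n33 in the example, per bjobs)
ssh n33 gpu-info
</code>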