Amber GPU Testing (EC)
We are interested in benchmarking the serial, MPI, CUDA, and CUDA+MPI versions of pmemd (pmemd, pmemd.MPI, pmemd.cuda, and pmemd.cuda.MPI).
First we get some CPU-based baseline data.
# serial run of pmemd
nohup $AMBERHOME/bin/pmemd -O -i mdin -o mdout -p prmtop \
  -c inpcrd -r restrt -x mdcrd </dev/null &

# parallel run; note that you will need to create the machinefile
# with -np 4 it would contain 4 lines with the node's hostname
# ('localhost' does not work here, use the actual hostname)
mpirun --machinefile=nodefile -np 4 $AMBERHOME/bin/pmemd.MPI \
  -O -i mdin -o mdout -p prmtop \
  -c inpcrd -r restrt -x mdcrd </dev/null &
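As a concrete illustration of the machinefile, a minimal sketch is shown below; the file name nodefile and the node name node2 are just examples, repeat your node's actual hostname once per MPI rank.

# hypothetical machinefile for a 4-rank run on node2
cat > nodefile <<EOF
node2
node2
node2
node2
EOF

With -np 4 and this file, mpirun starts all four pmemd.MPI ranks on that node.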
The gpu-info script used below should be in your path; it is located in ~/bin.
You need to allocate one or more GPUs for your CUDA runs.
node2$ gpu-info
====================================================
Device  Model      Temperature  Utilization
====================================================
0       Tesla K20  27 C         0 %
1       Tesla K20  28 C         0 %
2       Tesla K20  27 C         0 %
3       Tesla K20  30 C         0 %
====================================================
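If you need to recreate this helper on another node, a minimal sketch of a gpu-info-style wrapper is shown below; it assumes a driver recent enough that nvidia-smi supports the --query-gpu option, and its column layout will differ slightly from the output above.

#!/bin/bash
# gpu-info-style wrapper (sketch); assumes nvidia-smi --query-gpu is available
echo "===================================================="
echo "Device  Model        Temperature  Utilization"
echo "===================================================="
nvidia-smi --query-gpu=index,name,temperature.gpu,utilization.gpu \
           --format=csv,noheader
echo "===================================================="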
Next we need to expose these GPUs to pmemd …
# expose one GPU
export CUDA_VISIBLE_DEVICES="0"

# serial run of pmemd.cuda
nohup $AMBERHOME/bin/pmemd.cuda -O -i mdin -o mdout -p prmtop \
  -c inpcrd -r restrt -x mdcrd </dev/null &

# parallel run; note that you will need to create the machinefile
# with -np 4 it would contain 4 lines with the node's hostname
mpirun --machinefile=nodefile -np 4 $AMBERHOME/bin/pmemd.cuda.MPI \
  -O -i mdin -o mdout -p prmtop \
  -c inpcrd -r restrt -x mdcrd </dev/null &
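To compare the CPU and GPU runs, the throughput reported at the end of each output file is the easiest yardstick; a minimal sketch is below, assuming your Amber build prints a ns/day figure in its final timing section and uses the default mdinfo file name (output file names as in the runs above).

# pull the throughput line(s) out of a finished benchmark's output
grep "ns/day" mdout

# or peek at the running timing summary
tail -n 40 mdinfo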
You may want to run your pmemd problem across multiple GPUs if the problem set is large enough.
# expose multiple GPUs (for serial or parallel runs)
export CUDA_VISIBLE_DEVICES="0,2"
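Putting the pieces together, a minimal sketch of a two-GPU run could look like the following; it launches one pmemd.cuda.MPI rank per exposed device, and assumes nodefile here contains two lines with the node's hostname.

# expose two of the four K20s
export CUDA_VISIBLE_DEVICES="0,2"

# one MPI rank per visible GPU
mpirun --machinefile=nodefile -np 2 $AMBERHOME/bin/pmemd.cuda.MPI \
  -O -i mdin -o mdout -p prmtop \
  -c inpcrd -r restrt -x mdcrd </dev/null &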
