cluster:134 [2014/08/22 13:05] (current) hmeij

==== MPI ====
  
With ''sbatch'' there is no need for a wrapper script; slurm figures it all out.
  
<code>
which mpirun
  
mpirun /share/apps/amber/9+openmpi-1.2+intel-9/exe/pmemd -O \
-i inp/mini.in -p 1g6r.cd.parm -c 1g6r.cd.randions.crd.1 \
-ref 1g6r.cd.randions.crd.1
  
 #/share/apps/lsf/slurm_epilog.pl #/share/apps/lsf/slurm_epilog.pl

</code>
 +
When submitted, we see:
 +
<code>

             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
            902246      test      MPI    hmeij  R       0:05      8 v[1-8]

</code>
 +
Dumping the environment, we observe some key parameters:
 +
<code>

SLURM_NODELIST=v[1-8]
SLURM_JOB_NAME=MPI
SLURMD_NODENAME=v1
SLURM_NNODES=8
SLURM_NTASKS=8
SLURM_TASKS_PER_NODE=1(x8)
SLURM_NPROCS=8
SLURM_CPUS_ON_NODE=1
SLURM_JOB_NODELIST=v[1-8]
SLURM_JOB_CPUS_PER_NODE=1(x8)
SLURM_JOB_NUM_NODES=8

</code>
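These variables can also be consumed inside the job script itself, instead of hard-coding node or task counts. A minimal sketch; the ''${VAR:-1}'' fallback defaults and the commented ''mpirun'' line are our assumptions, not part of the original script:

```shell
# Derive counts from Slurm's environment rather than hard-coding them.
# The ${VAR:-1} defaults are an assumption so the snippet also runs
# outside a Slurm allocation (where these variables are unset).
NPROCS=${SLURM_NTASKS:-1}
NNODES=${SLURM_JOB_NUM_NODES:-1}
echo "running $NPROCS task(s) across $NNODES node(s)"
# mpirun -np "$NPROCS" ...   # hypothetical: pass the count to mpirun explicitly
```

Inside the 8-node job shown above, ''NPROCS'' and ''NNODES'' would both resolve to 8.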
 +
And in the slurmjob.log file we find:
 +
<code>

JobId=902245 UserId=hmeij(8216) GroupId=its(623) \
Name=MPI JobState=COMPLETED Partition=test TimeLimit=UNLIMITED \
StartTime=2014-08-21T15:55:06 EndTime=2014-08-21T15:57:04 \
NodeList=v[1-8] NodeCnt=8 ProcCnt=8 WorkDir=/home/hmeij/1g6r/cd

</code>
  
</code>
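Since the slurmjob.log record is a series of ''key=value'' fields, individual fields are easy to pull out with standard tools. A small sketch; the sample line is copied from the entry above, and the split-then-match parsing approach is our assumption:

```shell
# Extract one key=value field from a slurmjob.log style record.
line='JobId=902245 UserId=hmeij(8216) GroupId=its(623) Name=MPI JobState=COMPLETED Partition=test'
# Word-split the record (one field per line), then match the wanted key
# against the text before the '=' sign.
state=$(printf '%s\n' $line | awk -F= '$1 == "JobState" { print $2 }')
echo "$state"   # prints COMPLETED
```

The same pattern works for any other field, e.g. matching ''NodeCnt'' or ''Partition'' instead of ''JobState''.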
cluster/134.1408651222.txt.gz · Last modified: 2014/08/21 20:00 by hmeij