  * logs to files not mysql for now
  * change some settings in slurm.conf, particularly
    * FirstJobId(1), MaxJobId(999999)
    * MaxJobCount=120000
    * MaxTasksPerNode=65533
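For orientation, these settings live in ''slurm.conf''; a minimal sketch with just the values listed above (everything else in the file is left out):

<code>
# sketch: only the settings discussed above, values as listed
FirstJobId=1
MaxJobId=999999
MaxJobCount=120000
MaxTasksPerNode=65533
</code>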
Then I created a simple file to test Slurm
# time to process queues of different sizes (I stage them with the --begin parameter)
# jobs do have to open the output files though, just some crude testing of slurm
# scheduling
^NrJobs^1,
Slurm is installed on a PE2950 with dual quad cores and 16 GB of memory. It is part of my high priority queue and allocated to Openlava (v2.2).
My slurm compute nodes (v1-v8) are created in a virtual KVM environment on another PE2950 (2.6 GHz, 16 GB RAM) dual quad core with hyperthreading and virtualization turned on in the BIOS. Comments on how to build that KVM environment are here [[cluster:
From slurm.conf
[[https://

Vanilla out of the box with these changes
  * MaxJobCount=120000
|100,
The N=16 is 8 one core VMs and swallowtail itself (8 cores). So we'
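For illustration, a node mix like that could be declared in slurm.conf along these lines; the node names and CPU counts follow the text, the partition name matches the job log further down, and the RealMemory figures are assumptions:

<code>
# sketch only: eight single-core VMs plus the 8-core head node
NodeName=v[1-8] CPUs=1 RealMemory=1024 State=UNKNOWN
NodeName=swallowtail CPUs=8 RealMemory=16000 State=UNKNOWN
PartitionName=test Nodes=v[1-8],swallowtail Default=YES State=UP
</code>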
</code>
After fixing that. Hmmm.
^NrJobs^N^hh:mm:ss^
| 1,
|10,
|15,
|20,
Debug Level is 3 above. Falling back to proctrack/
^NrJobs^N^hh:mm:ss^
| 1,
|10,
|25,
|50,
|75,
|100,
Next I will add a prolog/
/
<code>
#!/bin/bash
/

#SBATCH --job-name="
#SBATCH --output="
#SBATCH --begin=10:

# unique job scratch dir
export MYLOCALSCRATCH=/
cd $MYLOCALSCRATCH
pwd

echo "
date >> foo
cat foo

/
</code>
+ | |||
+ | |||
^NrJobs^N^hh:mm:ss^
| 1,
| 5,
|10,
|25,
+ | |||
+ | |||
==== MPI ====

With ''
+ | |||
+ | < | ||
+ | |||
+ | #!/bin/bash | ||
+ | #/ | ||
+ | |||
+ | #SBATCH --job-name=" | ||
+ | #SBATCH --ntasks=8 | ||
+ | #SBATCH --begin=now | ||
+ | |||
+ | # unique job scratch dir | ||
+ | #export MYLOCALSCRATCH=/ | ||
+ | #cd $MYLOCALSCRATCH | ||
+ | |||
+ | echo " | ||
+ | |||
+ | rm -rf err out logfile mdout restrt mdinfo | ||
+ | |||
+ | export PATH=/ | ||
+ | export LD_LIBRARY_PATH=/ | ||
+ | which mpirun | ||
+ | |||
+ | mpirun / | ||
+ | -i inp/mini.in -p 1g6r.cd.parm -c 1g6r.cd.randions.crd.1 \ | ||
+ | -ref 1g6r.cd.randions.crd.1 | ||
+ | |||
+ | #/ | ||
+ | |||
+ | </ | ||
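Submitting is a plain sbatch call; a minimal example, assuming the script above is saved as run.mpi (the file name is an assumption), followed by a queue check:

<code>
sbatch run.mpi
squeue -u hmeij
</code>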
+ | |||
+ | When submitted we see | ||
+ | |||
+ | < | ||
+ | |||
+ | JOBID PARTITION | ||
+ | 902246 | ||
+ | |||
+ | </ | ||
+ | |||
+ | Dumping the environment we observe some key parameters | ||
+ | |||
+ | < | ||
+ | |||
+ | SLURM_NODELIST=v[1-8] | ||
+ | SLURM_JOB_NAME=MPI | ||
+ | SLURMD_NODENAME=v1 | ||
+ | SLURM_NNODES=8 | ||
+ | SLURM_NTASKS=8 | ||
+ | SLURM_TASKS_PER_NODE=1(x8) | ||
+ | SLURM_NPROCS=8 | ||
+ | SLURM_CPUS_ON_NODE=1 | ||
+ | SLURM_JOB_NODELIST=v[1-8] | ||
+ | SLURM_JOB_CPUS_PER_NODE=1(x8) | ||
+ | SLURM_JOB_NUM_NODES=8 | ||
+ | |||
+ | </ | ||
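A side note on the compact node list format: v[1-8] can be expanded into individual host names with scontrol, which helps when a tool does not understand the bracket syntax:

<code>
# expand the bracketed node list, one hostname per line (v1 ... v8 here)
scontrol show hostnames $SLURM_JOB_NODELIST
</code>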
+ | |||
+ | And in the slurmjob.log file | ||
+ | |||
+ | < | ||
+ | |||
+ | JobId=902245 UserId=hmeij(8216) GroupId=its(623) \ | ||
+ | Name=MPI JobState=COMPLETED Partition=test TimeLimit=UNLIMITED \ | ||
+ | StartTime=2014-08-21T15: | ||
+ | NodeList=v[1-8] NodeCnt=8 ProcCnt=8 WorkDir=/ | ||
+ | |||
+ | </ | ||
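The slurmjob.log records above are what the file-based job completion plugin writes ("logs to files not mysql for now", as noted at the top); in slurm.conf that is selected roughly like this, with the log location being an assumption:

<code>
# sketch: plain-text job completion log instead of the mysql plugin
JobCompType=jobcomp/filetxt
JobCompLoc=/var/log/slurm/slurmjob.log
</code>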
\\
**[[cluster: