cluster:134 [2014/08/14 15:10] hmeij [Slurm]
^v1 ^v2 ^v3 ^v4 ^v5 ^v6 ^v7 ^v8 ^
|3138|3130|3149|3133|3108|3119|3110|3113|
+ | |||
+ | # time to process queues of different sizes (I stage them with the --begin parameter) | ||
+ | # jobs do have to open the output files though, just some crude testing of slurm | ||
+ | # scheduling prowness | ||
+ | |||
+ | ^NrJobs^1, | ||
+ | |mm:ss | 0:33| 6:32| 19:37| | ||
+ | |||
+ | # 20 mins for 25,000 jobs via sbatch | ||
</code>
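Staged submissions like the ones timed above could be generated with a small loop. This is a sketch only: the job script name ''test.sh'', the partition, the job count, and the 5-minute ''--begin'' offset are assumptions, not the original setup, and the leading ''echo'' makes it a dry run that works without a Slurm install (drop it to actually submit).

```shell
#!/bin/sh
# Dry-run sketch of staging NRJOBS jobs with sbatch --begin so they all
# become eligible at the same future time. Everything here (test.sh, the
# partition name, the offset) is a hypothetical example, not the original
# test. Remove the "echo" prefix to submit for real.
NRJOBS=5
for i in $(seq 1 "$NRJOBS"); do
  echo sbatch --begin=now+300 --partition=test \
    --output="test_${i}.out" test.sh
done
```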
Slurm is installed on a PE2950 with dual quad cores and 16 GB of memory. It is part of my high priority queue and allocated to Openlava (v2.2). My Slurm compute nodes are created in a virtual KVM environment.

From slurm.conf:

<code>

# COMPUTE NODES
NodeName=v[1-8] NodeAddr=192.168.150.[1-8] CPUs=1 RealMemory=1 \
Sockets=1 CoresPerSocket=1 ThreadsPerCore=1 State=UNKNOWN
NodeName=swallowtail NodeAddr=192.168.1.136 CPUs=8 RealMemory=16 \
Sockets=2 CoresPerSocket=4 ThreadsPerCore=1 State=UNKNOWN

PartitionName=test Nodes=v[1-8] Default=YES MaxTime=INFINITE State=UP
</code>
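Once slurmctld and slurmd pick up these definitions, the standard Slurm status tools can confirm the nodes and partition are visible. The commands below are echoed so the sketch runs even on a machine without Slurm installed; drop the ''echo'' to execute them.

```shell
#!/bin/sh
# Standard Slurm status commands for the config above; "echo" makes this
# a dry run. Remove it to query a live slurmctld.
echo sinfo -p test            # state of the test partition and its nodes
echo scontrol show node v1    # detail for one of the KVM compute nodes
echo squeue -p test           # jobs pending/running in the partition
```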
\\
**[[cluster: