cluster:207 — revised 2021/10/12 14:07 and 2022/08/02 12:08 by hmeij07
Getting a head start on our new login node plus two cpu+gpu compute node project. Hardware has been purchased but there is a long delivery time. Meanwhile it makes sense to set up a standalone Slurm scheduler, do some testing, and have it as a backup. Slurm will be running on ''
This page is just intended to keep documentation sources handy. Go to the **Users** page [[cluster:
**SLURM documentation**
https://
section: node configuration
+ | |||
+ | The node range expression can contain one pair of square brackets with a sequence of comma-separated numbers and/or ranges of numbers separated by a " | ||
+ | |||
Features (hasGPU, hasRTX5000)
are intended to be used to filter nodes eligible to run jobs via the --constraint argument.
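A minimal sketch of using those features at submit time (the job script name is hypothetical; the feature names are the ones defined above):

```
# run only on nodes tagged hasGPU
sbatch --constraint=hasGPU job.sh

# require the RTX 5000 nodes specifically
sbatch --constraint=hasRTX5000 job.sh
```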
https://
setting up gres.conf
+ | |||
+ | give GPU jobs priority using the Multifactor Priority plugin: | ||
+ | https:// | ||
+ | PriorityWeightTRES=GRES/ | ||
+ | example here: https:// | ||
+ | requires faishare thus the database | ||
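A hedged sketch of what this would look like in slurm.conf (the weight values are arbitrary examples; fairshare in turn needs slurmdbd accounting set up):

```
# enable the Multifactor Priority plugin
PriorityType=priority/multifactor
# weight assumed for illustration
PriorityWeightFairshare=1000
# boost jobs requesting GPUs; weight assumed for illustration
PriorityWeightTRES=GRES/gpu=2000
# track gpu usage in the accounting database
AccountingStorageTRES=gres/gpu
```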
https://
**SLURM installation**

Configured and compiled on ''
<code>
# cuda 9.2 ...
# installer found /
# just in case
which mpirun
# /
./configure \
--prefix=/
--sysconfdir=/
| tee -a install.log
# skip # --with-nvml=/
# skip # -with-hdf5=no

# known hdf5 library problem when including --with-nvml
config.status:
====
Libraries have been installed in:
/
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the '
flag during linking and do at least one of the following:
- add LIBDIR to the '
- add LIBDIR to the '
- use the '
- have your system administrator add LIBDIR to '/
====

# for now
export PATH=/
export LD_LIBRARY_PATH=/
</code>

For **general accounting** we may rely on a simple text file

<code>
From job completions file, JOB #3, convert Start and End times to epoch seconds

StartTime=2021-10-06T14:
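The conversion can be sketched with GNU date; the seconds portion of StartTime and the whole EndTime below are assumed values for illustration:

```shell
# convert Slurm ISO-8601 timestamps to epoch seconds, then subtract
start=$(date -d "2021-10-06T14:07:00" +%s)   # StartTime (seconds assumed)
end=$(date -d "2021-10-06T14:09:30" +%s)     # EndTime (hypothetical)
echo $((end - start))                        # elapsed wallclock seconds
```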
**Full Version Slurm Config Tool**

  * let's start with this file and build up/out
<code>
#
# COMPUTE NODES
NodeName=n[110-111] CPUs=2 RealMemory=192 CoresPerSocket=12 ThreadsPerCore=2 State=UNKNOWN
#
#