</code>
+ | |||
+ | ** Pending Jobs ** | ||
+ | |||
+ | I keep having to inform users that with -n 1 and -cpu 1 your can still go in pending state because user forgot to reserve memory so silly slurm assumes your jobs needs all the node's memory. Here is my template then | ||
+ | |||
+ | < | ||
+ | |||
+ | FirstName, your jobs are pending because you did not request memory | ||
+ | and if not then slurm assumes you need all memory, silly. | ||
+ | Command " | ||
+ | |||
+ | JobId=1062052 JobName=3a_avgHbond_CPU | ||
+ | | ||
+ | | ||
+ | |||
+ | I looked (command "ssh n?? top -u username -b -n 1", look for the VIRT value) | ||
+ | and you need less than 1G per job so with --mem=1024 and n=1 and cpu=1 | ||
+ | you should be able to load 48 jobs onto n100. | ||
+ | Consult output of command "sinfo -lN" | ||
+ | |||
+ | </ | ||
+ | |||
==== MPI ====