cluster:218 [2023/09/27 12:51] hmeij07 [Resources]
cluster:218 [2023/10/14 19:24] (current) hmeij07 [Resources]
** Pending Jobs **
I keep having to inform users that with -n 1 and -cpu 1 your job can still go into pending state because the user forgot to reserve memory, so silly slurm assumes the job needs all the node's memory. Here is my template then:
<code>
FirstName, your jobs are pending because you did not request memory,
and if you do not then slurm assumes you need all memory, silly.
Command "..." shows:
JobId=1062052 JobName=3a_avgHbond_CPU
I looked (command "ssh n?? top -u username -b -n 1", look for the VIRT value)
and you need less than 1G per job, so with --mem=1024 and n=1 and cpu=1
you should be able to load 48 jobs onto n100.
Consult output of command "sinfo -lN"
</code>
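
For reference, a minimal submit script that reserves memory explicitly so a single-cpu job does not pend waiting for a whole node's memory. This is a sketch; the job name, output file, and program invocation are hypothetical placeholders, not from the message above:

<code bash>
#!/bin/bash
#SBATCH --job-name=myjob        # hypothetical job name
#SBATCH --output=myjob.%j.out   # hypothetical output file
#SBATCH -n 1                    # one task
#SBATCH --cpus-per-task=1       # one cpu per task
#SBATCH --mem=1024              # reserve 1G; without this slurm may assume the job needs all of the node's memory
./my_program                    # hypothetical program
</code>

With --mem=1024, -n 1 and one cpu per task, 48 such jobs should fit on a 48-core node like n100.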