cluster:125 [2014/02/25 14:25] hmeij [What May Also Change?]
cluster:125 [2014/02/26 15:30] hmeij
#BSUB -R "
</
+ | |||
  * How do I find out how much memory I'm using? Run: ssh node_name top -u your_name -b -n 1
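Since top's batch output lists resident memory (RES) per process, a small awk filter can total it across your processes. A sketch, assuming placeholder names node01 and jdoe (note RES is in KiB unless top appends an m/g suffix, in which case those rows would need extra handling):

```shell
# node01 and jdoe are placeholders; column 6 of top's batch output is RES.
# Sums resident memory over all of one user's processes on the node.
ssh node01 "top -b -n 1 -u jdoe" \
  | awk '$1 ~ /^[0-9]+$/ { kib += $6 } END { print kib " KiB resident" }'
```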
Gaussian:
  * if there is no 'sharing' required, the hyper-threaded node performs the same (that is, the operating system presents 16 cores but only up to 8 jobs are allowed to run, say by limiting the JL/H parameter of the queue)
  * if there is 'sharing', jobs take a 44% speed penalty; however, twice as many of them can run
So it appears that we could turn hyperthreading on and, despite the nodes presenting 16 cores, limit the number of jobs to 8 per node until the need arises to run many small jobs, and then reset the limit to 16.
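In LSF, that per-host job slot cap is what bqueues reports as JL/H; it is set with HJOB_LIMIT in lsb.queues. A hedged sketch (the queue name here is a placeholder, not from this cluster's actual configuration):

```
Begin Queue
QUEUE_NAME   = somequeue    # placeholder queue name
HJOB_LIMIT   = 8            # max job slots per host; reported as JL/H by bqueues
End Queue
```

After editing lsb.queues, run `badmin reconfig` to apply the change; raising HJOB_LIMIT to 16 later would allow the hyper-threaded cores to be used for many small jobs.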