cluster:125 [2014/02/26 20:32] (current) hmeij — What Changes?
#BSUB -R "
</code>
  * How do I find out how much memory I'm using? ''ssh node_name top -u your_name -b -n 1''
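The one-liner above runs ''top'' once, non-interactively, on a compute node. A minimal local sketch of the same command follows; ''node_name'' and ''your_name'' in the wiki line are placeholders, and here ''$USER'' stands in for your cluster login:

```shell
# Batch mode (-b) with a single iteration (-n 1) makes top print one
# snapshot and exit; -u filters to one user's processes.
# On the cluster you would prefix this with ssh, e.g.:
#   ssh node_name top -u your_name -b -n 1
top -b -n 1 -u "$USER" | head -n 20
```

The RES column in the output is the resident (physical) memory each process is actually using.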
Gaussian:
  * You can use the new queue ''
  * For parallel programs you may use OpenMPI or MVAPICH; use the appropriate wrapper scripts to set up the environment for mpirun
  * On ''
  * On ''
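The parallel-programs bullet above can be sketched as a minimal LSF job script. This is a hedged example, not the site's actual setup: the queue name ''test_queue'', the slot count, and the program name are hypothetical placeholders, and the wrapper-script sourcing is left as a comment because its path is site-specific:

```shell
#!/bin/bash
# Hypothetical LSF submission sketch -- substitute your site's real queue,
# wrapper script, and binary before use. Submit with: bsub < this_script.sh
#BSUB -q test_queue       # queue name (placeholder)
#BSUB -n 8                # number of job slots
#BSUB -J mpi_example      # job name (placeholder)
#BSUB -o out.%J           # stdout file; %J expands to the job id
#BSUB -e err.%J           # stderr file
# source /path/to/openmpi/wrapper.sh   (site-specific; sets up mpirun)
mpirun -np 8 ./my_mpi_program
```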
Scratch:
  * if there is no ‘sharing’ required, the hyper-threaded node performs the same (the operating system presents 16 cores but only up to 8 jobs are allowed to run, say by limiting the JL/H parameter of the queue)
  * if there is ‘sharing’, jobs take a 44% speed penalty; however, twice as many of them can run
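The trade-off in the bullets above can be checked with quick arithmetic, assuming the 44% per-job penalty applies uniformly whenever all 16 hardware threads are busy:

```shell
# 8 unshared jobs at full speed vs 16 shared jobs at 56% speed each.
# Aggregate throughput in "full-speed job" units:
awk 'BEGIN { printf "unshared: %.2f  shared: %.2f\n", 8 * 1.00, 16 * (1 - 0.44) }'
```

So under that assumption, 16 shared jobs deliver about 8.96 job-units of aggregate throughput versus 8.00 for the unshared case: a modest overall gain, bought at the cost of each individual job running slower.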
So it appears that we could turn hyperthreading on and, although the nodes present 16 cores, limit the number of jobs to 8 until the need arises to run many small jobs, then raise the limit to 16.
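The per-host limit mentioned above (shown as JL/H in ''bqueues'' output) is set in LSF's ''lsb.queues'' via ''HJOB_LIMIT''. A hedged fragment, with a placeholder queue name:

```
Begin Queue
QUEUE_NAME  = test_queue     # placeholder queue name
HJOB_LIMIT  = 8              # per-host job slot limit; bqueues displays this as JL/H
DESCRIPTION = hyperthreaded nodes capped at the physical core count
End Queue
```

Raising the cap later would then be a one-line change to ''HJOB_LIMIT'' followed by ''badmin reconfig''.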