**[[cluster:...]]**
Done!
 --- //hmeij 2014/02/26//

==== Dell Racks Power Off ====

Soon (Feb/2014), we'll have to power down the Dell Racks and grab one L6-30 circuit supplying power to those racks and use it to power up the new Microway servers.

  * Each node is Infiniband enabled (meaning all our nodes are, except the Blue Sky Studio nodes, queue ''...'')

==== What Changes? ====

Queues:
  * elw, emw, ehw, ehwfd and imw disappear (224 job slots)
  * mw256fd appears
  * on both mw256 (n33-n37) and mw256fd (n38-n45) exclusive use is disabled (#BSUB -x will not work)
  * the max number of job slots per node is 32 on ''...''

<code>
#BSUB -R "..."
</code>
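
Since exclusive use (''#BSUB -x'') no longer works, a job that needs a lot of memory should reserve it through a resource request instead. A minimal sketch of such a submission, assuming the new ''mw256fd'' queue and an arbitrary memory value (both are examples, adjust to your job):

<code>
#!/bin/bash
# hypothetical example: reserve memory instead of asking for exclusive use
#BSUB -q mw256fd                 # example queue name
#BSUB -n 8                       # job slots requested
#BSUB -R "rusage[mem=4096]"      # reserve ~4 GB (example value, in MB)
#BSUB -o out.%J
#BSUB -e err.%J

./my_program                     # placeholder for your executable
</code>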
+ | |||

  * How do I find out how much memory I'm using? Run ''ssh node_name top -u your_name -b -n 1'', for example:
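
For instance, to check your own processes on node ''n38'' (the node name is just an example; ''$USER'' stands in for your username):

<code>
# run top once, in batch mode, listing only your own processes on that node
ssh n38 top -u $USER -b -n 1
</code>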

Gaussian:
  * You can use the new queue ''...''
  * For parallel programs you may use OpenMPI or MVAPICH; use the appropriate wrapper scripts to set up the environment for mpirun (see the sketch after this list)
  * On ''...''
  * On ''...''
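
A sketch of what such a parallel submission could look like. The setup step below is a placeholder; substitute the OpenMPI or MVAPICH wrapper script provided on the cluster:

<code>
#!/bin/bash
#BSUB -q mw256fd                 # example queue name
#BSUB -n 16                      # MPI ranks / job slots
#BSUB -o out.%J
#BSUB -e err.%J

# placeholder: invoke or source the site-provided wrapper that sets up the
# environment (PATH, LD_LIBRARY_PATH, host list) for OpenMPI or MVAPICH
. /path/to/mpi_wrapper_setup.sh

mpirun ./my_mpi_program          # placeholder executable
</code>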

Scratch:

Workshop:
  * We'll schedule one as soon as ''...''

==== What May Also Change? ====

There is a significant need to run many, many programs that require very little memory (on the order of 1-5 MB). When such programs run they consume a job slot. When many such programs consume many job slots, like on the large servers in the ''...''
  * if there is no ‘sharing’ required, the hyper-threaded node performs the same (that is, the operating system presents 16 cores but only up to 8 jobs are allowed to run, say by limiting the JL/H parameter of the queue)
  * if there is ‘sharing’, jobs take a 44% speed penalty, however twice as many of them can run

So it appears that we could turn hyperthreading on and, despite the nodes presenting 16 cores, limit the number of jobs to 8 until the need arises to run many small jobs and then reset the limit to 16.
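
A minimal sketch of how such a per-host slot limit could be expressed in the scheduler's queue configuration (LSF ''lsb.queues'' syntax; the queue name and limit shown are assumptions, not the actual configuration):

<code>
Begin Queue
QUEUE_NAME   = mw256
DESCRIPTION  = hyperthreading on, 16 cores presented, job slots capped per host
HJOB_LIMIT   = 8      # per-host job slot limit (reported as JL/H by bqueues); reset to 16 when many small jobs need to run
End Queue
</code>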