**[[cluster:0|Back]]**
  
Done!
 --- //[[hmeij@wesleyan.edu|Meij, Henk]] 2014/02/21 09:54//

==== Dell Racks Power Off ====
  
Soon (Feb/2014), we'll have to power down the Dell Racks, grab one of the L6-30 circuits supplying power to those racks, and use it to power up the new Microway servers.

  * Each node is Infiniband enabled (meaning all our nodes are, except the Blue Sky Studio nodes, queue ''bss24''). ''/home'' and ''/sanscratch'' are served via IPoIB.
  
==== What Changes? ====
  
Queues:
  
   * elw, emw, ehw, ehwfd and imw disappear (224 job slots)   * elw, emw, ehw, ehwfd and imw disappear (224 job slots)
  * mw256fd appears (256 job slots)
  * on both mw256 (n33-n37) and mw256fd (n38-n45) exclusive use is disabled (''#BSUB -x'' will not work)
  * the max number of job slots per node is 32 on ''mw256fd'' but 28 on ''mw256'' because the GPUs also need access to cores (4 per node for now). The max may be lowered to 8 if too many jobs grab too many job slots. You should benchmark your job to understand what is optimal; see the example submission script after this list.
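
A minimal sketch of a submission script for the new queue. The program name (''myprog'') and the slot count are illustrative assumptions, not cluster defaults; benchmark to find what works for your job:

<code bash>
#!/bin/bash
# illustrative only: request 8 job slots on the mw256fd queue
#BSUB -q mw256fd
#BSUB -n 8
#BSUB -J bench
#BSUB -o bench.%J.out
#BSUB -e bench.%J.err
# note: exclusive use (#BSUB -x) will not work on mw256/mw256fd

./myprog
</code>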
  
Workshop:
  
  * We'll schedule one as soon as ''mw256fd'' has been deployed: Feb 26th, ST 509a, 4-5 PM.
  
==== What May Also Change? ====
  
There is a significant need to run many, many programs that require very little memory (on the order of 1-5 MB). When such a program runs it consumes a job slot. When many of them consume many job slots, as on the large servers in the ''mw256'' or ''mw256fd'' queues, lots of memory remains idle and inaccessible to other programs.
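
One way to make such a tiny footprint explicit to the scheduler is an LSF per-job memory reservation. A minimal sketch, assuming a hypothetical program ''tinyprog'' and that LSF's memory unit is MB (the default, configurable via ''LSF_UNIT_FOR_LIMITS''):

<code bash>
#!/bin/bash
# illustrative only: a single-slot job reserving ~5 MB of memory
#BSUB -q mw256
#BSUB -n 1
#BSUB -J tiny
#BSUB -o tiny.%J.out
# rusage tells the scheduler how much memory to set aside for this job
#BSUB -R "rusage[mem=5]"

./tinyprog
</code>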