cluster:125 [2014/02/17 18:13] hmeij [What changes?]
cluster:125 [2014/02/19 14:14] hmeij
  
  * We'll schedule one as soon as ''mw256fd'' has been deployed.

==== What May Change? ====

There is a significant need to run many, many programs that each require very little memory (on the order of 1-5 MB). When such a program runs it consumes a job slot. When many such programs consume many job slots, as on the large servers in the ''mw256'' or ''mw256fd'' queues, lots of memory remains idle and inaccessible to other programs.

So we could enable hyperthreading on the nodes of the ''hp12'' queue and double the job slots (from 256 to 512). Testing reveals that when hyperthreading is on:

  * if no 'sharing' is required, the hyper-threaded node performs the same (the operating system presents 16 cores but only up to 8 jobs are allowed to run, say by limiting the JL/H parameter of the queue)
  * if there is 'sharing', jobs take a 44% speed penalty; however, twice as many of them can run
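To see why sharing can still be a net win, here is a back-of-the-envelope estimate (a sketch, not a benchmark), under the assumption that the 44% penalty means each shared job's runtime grows by 44%:

```python
# Rough aggregate-throughput estimate for hyperthreaded 'sharing'.
# Assumption (an interpretation of the 44% figure above, not a new
# measurement): each shared job takes 44% longer to finish.

slots_plain = 8      # physical cores, no sharing
slots_ht    = 16     # hyperthreaded slots, sharing enabled
penalty     = 0.44   # 44% longer runtime per shared job

# Jobs completed per unit time, relative to one unshared job = 1.0
throughput_plain = slots_plain * 1.0
throughput_ht    = slots_ht * (1.0 / (1.0 + penalty))

print(round(throughput_ht / throughput_plain, 2))  # aggregate speedup
```

So even with each job running slower, doubling the slots yields roughly 1.4 times the aggregate throughput for these tiny jobs.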

So it appears that we could turn hyperthreading on and, despite the nodes presenting 16 cores, limit the number of jobs to 8 until the need arises to run many small jobs, then raise the limit to 16.

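As a sketch of how that limit might be expressed (assuming the LSF scheduler, where the per-host slot limit shown as JL/H by ''bqueues'' is set with ''HJOB_LIMIT'' in ''lsb.queues''; the queue definition below is illustrative, not our actual configuration):

<code>
Begin Queue
QUEUE_NAME   = hp12
HJOB_LIMIT   = 8    # per-host job slot limit, shown as JL/H by bqueues
DESCRIPTION  = hp12 nodes, hyperthreading on, slots capped at 8 per host
End Queue
</code>

Raising ''HJOB_LIMIT'' to 16 (and reconfiguring with ''badmin reconfig'') would then open up the extra hyperthreaded slots when the many-small-jobs workload arrives.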
  
\\ \\
**[[cluster:0|Back]]**
cluster/125.txt · Last modified: 2014/02/26 20:32 by hmeij