We need to address the problem of tens of thousands of small serial jobs swarming across our larger servers. These jobs tie up large chunks of memory they do not use and interfere with the scheduling of large parallel jobs (small serial jobs satisfy job prerequisites easily).
  
So the idea is to assess what we could buy in terms of high core density hardware (maximum CPU cores per U of rack space) with small memory footprints (defined as 1 GB per physical core or less).  Nodes can have tiny local disks for the OS and local scratch (say 16-120 GB). ''/home'' may not be mounted on these systems, so input and output files need to be managed by the jobs themselves and copied back and forth using ''scp''. The scheduler will be SLURM; the OS, the latest CentOS 6.x release.
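A minimal sketch of that staging pattern in a SLURM batch script. The host name ''headnode'' and the paths below are placeholders, not names from this page; adjust to the real environment.

<code bash>
#!/bin/bash
#SBATCH --job-name=serial_job
#SBATCH --ntasks=1
#SBATCH --mem=1024          # small footprint: ~1 GB per physical core
#SBATCH --time=24:00:00

# "headnode" and the /home path are hypothetical placeholders.
SRC=headnode:/home/username/job1
WORK=/localscratch/$SLURM_JOB_ID    # tiny local disk used as scratch

mkdir -p $WORK && cd $WORK
scp $SRC/input.dat $SRC/a.out .     # stage in: no /home mount on the node
chmod +x a.out
./a.out < input.dat > output.dat
scp output.dat $SRC/                # stage results back out
cd / && rm -rf $WORK
</code>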
  
Some testing results can be found here:
  * [[cluster:134|Slurm]]
  
The //expansion:// lines below estimate how many nodes fit within the budget: nr_nodes = int(expansion_budget / node_cost).
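For reference, the same estimate in shell arithmetic; ''$(( ))'' truncates toward zero just like ''int()''. The figures passed in are placeholders only, since node prices are not listed on this page.

<code bash>
# nr_nodes: whole nodes affordable within a budget (integer division).
nr_nodes () { echo $(( $1 / $2 )); }

nr_nodes 9000 400    # placeholder budget/cost figures; prints 22
</code>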
==== ExxactCorp ====
  
    * 3-Year Warranty on Parts and Labor with Perpetual Email and Telephone Support
  * 12 cores/U, 1.3 GB RAM/core
  * //expansion:// 26 nodes, 26U, 312 cores
  
  * Node Option B: Quantum IXR110-512N E5-2600 v3 family
    * 3-Year Warranty on Parts and Labor with Perpetual Email and Telephone Support
  * 20 cores/U, 1.6 GB RAM/core
  * //expansion:// 13 nodes, 13U, 260 cores (core counts for both options are checked below)
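Both candidates are 1U nodes, so the core counts in the //expansion:// lines are simply nodes times the cores/U figure; a quick sanity check:

<code bash>
# Verify the //expansion:// core counts quoted above
# (1U nodes, so cores per node equals the cores/U figure).
echo "Option A: $(( 26 * 12 )) cores"   # 312
echo "Option B: $(( 13 * 20 )) cores"   # 260
</code>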
  
\\
**[[cluster:0|Back]]**