**[[cluster:0|Back]]**
  
===== HPCC Expansion Summer 2015 =====
  
We need to address the problem of tens of thousands of small serial jobs swarming across our larger servers. These jobs tie up large chunks of memory they do not use, and they interfere with the scheduling of large parallel jobs (small serial jobs satisfy job prerequisites easily).
  
So the idea is to assess what we could buy in terms of high core density hardware (maximum CPU cores per U of rack space) with a small memory footprint (defined as 1 GB per physical core or less). Nodes can have tiny local disks for the OS and local scratch (say 16-120 GB). ''/home'' may not be mounted on these systems, so input and output files need to be managed by the jobs themselves and copied back and forth using ''scp''. The scheduler will be SLURM, and the OS will be the latest CentOS 6.x.
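
Since ''/home'' may be absent on these nodes, each job has to stage its own files. Below is a minimal sketch of what such a SLURM job script could look like; the host name, paths, and resource values are hypothetical placeholders, not our actual configuration.

<code python>
#!/usr/bin/env python3
#SBATCH --job-name=serial-stage     # hypothetical job name
#SBATCH --ntasks=1
#SBATCH --mem=1024                  # ~1 GB, matching the small-footprint target
#
# Sketch of the staging pattern described above: copy input from the
# file server with scp, run in node-local scratch, copy results back.
import os
import subprocess
import tempfile

file_server = "user@homehost"          # hypothetical login/file server
remote_dir = "/home/user/project"      # hypothetical directory on /home

scratch = tempfile.mkdtemp(dir="/localscratch")   # hypothetical local scratch area
os.chdir(scratch)

# stage input in
subprocess.check_call(["scp", f"{file_server}:{remote_dir}/input.dat", "."])

# ... the actual serial computation would run here; as a stand-in,
# write a trivial result file so the copy-back step has something to send
with open("output.dat", "w") as f:
    f.write("result placeholder\n")

# stage results back out
subprocess.check_call(["scp", "output.dat", f"{file_server}:{remote_dir}/"])
</code>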
  
Some testing results can be found here:
  * [[cluster:134|Slurm]]
  
The //expansion:// lines below give an estimate of how many nodes each quote would yield: nr_nodes = int(expansion_budget / node_cost).

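For concreteness, here is a minimal sketch of that estimate; the budget and per-node cost below are placeholders, not actual quotes.

<code python>
# Hypothetical illustration of the expansion estimate; neither number
# below is an actual quote from the vendors listed on this page.
expansion_budget = 50000.0   # placeholder budget in dollars
node_cost = 1900.0           # placeholder per-node price in dollars

nr_nodes = int(expansion_budget / node_cost)   # whole nodes only
print("estimated nodes:", nr_nodes)
</code>
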
==== ExxactCorp ====

  * Node option A: Quantum IXR110-512N, E5-2600 v2 family
    * 1U server, Intel dual socket R (LGA 2011)
    * Dual port Gigabit Ethernet
    * 350W high efficiency, 1x
    * Intel® Xeon® processor E5-2620 v2, 6C, 2.10 GHz, 15M, 2x (total 12 cores)
    * 8 GB 240-pin DDR3 1866 MHz ECC/Registered server memory, 2x (total 16 GB RAM)
    * 120 GB 2.5" SATA III internal solid state drive (SSD), **OS drive and scratch drive**, 2x
    * CentOS 6 installation
    * 3-year warranty on parts and labor with perpetual email and telephone support
  * 12 cores/U, 1.3 GB RAM/core
  * //expansion:// 26 nodes, 26U, 312 cores

  * Node option B: Quantum IXR110-512N, E5-2600 v3 family
    * 1U server, Intel dual socket R (LGA 2011)
    * Dual port Gigabit Ethernet
    * 480W high efficiency, 1x
    * Intel® Xeon® processor E5-2650 v3, 10C, 2.3 GHz, 25M, 2x (total 20 cores)
    * 16 GB DDR4-2133 MHz ECC Registered 1.2V memory module, 2x (total 32 GB RAM)
    * 120 GB 2.5" SATA III internal solid state drive (SSD), **OS drive and scratch drive**, 2x
    * CentOS 6 installation
    * 3-year warranty on parts and labor with perpetual email and telephone support
  * 20 cores/U, 1.6 GB RAM/core
  * //expansion:// 13 nodes, 13U, 260 cores

==== Advanced Clustering ====

  * Node option: Pinnacle 1FX3601
    * 2U server enclosure, 4 nodes per enclosure, each node with:
    * Dual port Gigabit Ethernet
    * 500W high efficiency, 1x
    * Intel® Xeon® processor E5-2630 v3, 8C, 2.40 GHz, 20M, 2x (total 16 cores)
    * 4 GB DDR4 2133 MHz ECC/Registered server memory, 8x (total 32 GB RAM)
    * 128 GB SATA solid state drive (SSD), 1x
    * <del>CentOS 6 installation</del>
    * 3-year warranty on parts and labor with perpetual email and telephone support
  * 64 cores/2U, 2.0 GB RAM/core
  * //expansion:// 16 nodes, 8U, 256 cores (recomputed for all three options in the sketch below)
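
To put the three quotes side by side, the sketch below recomputes the density and //expansion:// figures from the per-node specs listed above; node counts are taken from the //expansion:// lines, and prices are not repeated here.

<code python>
# Recompute cores/U, GB RAM per core, and total cores for the three
# node options quoted above.  All numbers come from the spec lists;
# the Advanced Clustering enclosure packs 4 nodes into 2U (0.5U/node).
options = {
    "ExxactCorp A (2x E5-2620 v2)": {"cores": 12, "ram_gb": 16, "u": 1.0, "nodes": 26},
    "ExxactCorp B (2x E5-2650 v3)": {"cores": 20, "ram_gb": 32, "u": 1.0, "nodes": 13},
    "Adv. Clustering 1FX3601":      {"cores": 16, "ram_gb": 32, "u": 0.5, "nodes": 16},
}

for name, o in options.items():
    print(f"{name}: {o['cores'] / o['u']:.0f} cores/U, "
          f"{o['ram_gb'] / o['cores']:.1f} GB/core, "
          f"{o['nodes'] * o['cores']} cores in {o['nodes'] * o['u']:.0f}U")
</code>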
  
  
\\
**[[cluster:0|Back]]**