**[[cluster:0|Back]]**
  
===== HPCC Expansion Summer 2015 =====
  
We need to address the problem of tens of thousands of small serial jobs swarming across our larger servers. These jobs tie up large chunks of memory they do not use and interfere with the scheduling of large parallel jobs (small serial jobs satisfy job prerequisites easily).
  
So the idea is to assess what we could buy in terms of large core density hardware (maximum CPU cores per U of rack space) with small memory footprints (defined as 1 GB per physical core or less). Nodes can have tiny local disks for OS and local scratch (say 16-120 GB). ''/home'' may not be mounted on these systems, so input and output files need to be managed by the jobs themselves and copied back and forth using ''scp''. The scheduler will be SLURM, and the OS will be the latest version of CentOS 6.x.
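As an illustration of the ''scp'' staging workflow described above, here is a minimal sketch of a SLURM job script, assuming passwordless ssh keys are in place; the host name ''hpcstore'', the paths, and the program name are hypothetical placeholders, not actual configuration:

<code bash>
#!/bin/bash
#SBATCH --job-name=serial-example
#SBATCH --ntasks=1
#SBATCH --mem=1G                # small memory footprint, ~1 GB per core

# Hypothetical sketch: stage input in over scp, run, copy results back.
# "hpcstore", the paths and the program below are placeholders.
WORKDIR=/localscratch/$SLURM_JOB_ID
mkdir -p $WORKDIR && cd $WORKDIR

scp hpcstore:/home/$USER/jobs/input.dat .          # pull input (no /home mount)
scp hpcstore:/home/$USER/jobs/my_serial_program .  # pull the (placeholder) executable
chmod +x my_serial_program
./my_serial_program input.dat > output.dat
scp output.dat hpcstore:/home/$USER/jobs/          # push results back

cd / && rm -rf $WORKDIR                            # clean up local scratch
</code>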
  
Some testing results can be found here:
  
  * [[cluster:133|High Core Count - Low Memory Footprint]]
  * [[cluster:134|Slurm]]
  
The //expansion:// lines below give an estimate of the number of nodes that fit the budget: ''nr_nodes = int(expansion_budget / node_cost)''.
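For example, with purely hypothetical numbers (neither the budget nor the per-node quotes are listed on this page), the estimate is a simple integer division:

<code bash>
# Hypothetical figures for illustration only; not actual vendor quotes.
expansion_budget=50000                     # total expansion budget in dollars
node_cost=2000                             # quoted price per node in dollars
echo $(( expansion_budget / node_cost ))   # bash integer division -> 25 nodes
</code>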
  
==== ExxactCorp ====

  * Node option A: Quantum IXR110-512N, E5-2600 v2 family
    * 1U Server, Intel Dual socket R (LGA 2011)
    * Dual port Gigabit Ethernet
    * 350W High efficiency power supply 1x
    * Intel® Xeon® processor E5-2620 v2, 6C, 2.10 GHz, 15M, 2x (total 12 cores)
    * 8GB 240-Pin DDR3 1866 MHz ECC/Registered Server Memory 2x (total 16 GB RAM)
    * 120GB 2.5-inch SATA III Internal Solid State Drive (SSD), **OS Drive and Scratch Drive**, 2x
    * CentOS 6 Installation
    * 3-Year Warranty on Parts and Labor with Perpetual Email and Telephone Support
  * 12 cores/U, 1.3 GB RAM/core
  * //expansion:// 26 nodes, 26U, 312 cores

  * Node option B: Quantum IXR110-512N, E5-2600 v3 family
    * 1U Server, Intel Dual socket R (LGA 2011)
    * Dual port Gigabit Ethernet
    * 480W High efficiency power supply 1x
    * Intel® Xeon® processor E5-2650 v3, 10C, 2.3 GHz, 25M, 2x (total 20 cores)
    * 16GB DDR4-2133 MHz ECC Registered 1.2V Memory Module 2x (total 32 GB RAM)
    * 120GB 2.5-inch SATA III Internal Solid State Drive (SSD), **OS Drive and Scratch Drive**, 2x
    * CentOS 6 Installation
    * 3-Year Warranty on Parts and Labor with Perpetual Email and Telephone Support
  * 20 cores/U, 1.6 GB RAM/core
  * //expansion:// 13 nodes, 13U, 260 cores

==== Advanced Clustering ====

  * Node option: Pinnacle 1FX3601
    * 2U Server Enclosure, 4 nodes per enclosure; each node with:
    * Dual port Gigabit Ethernet
    * 500W High efficiency power supply 1x
    * Intel® Xeon® processor E5-2630 v3, 8C, 2.40 GHz, 20M, 2x (total 16 cores)
    * 4GB DDR4 2133 MHz ECC/Registered Server Memory 8x (total 32 GB RAM)
    * 128GB SATA Solid State Drive (SSD) 1x
    * <del>CentOS 6 Installation</del>
    * 3-Year Warranty on Parts and Labor with Perpetual Email and Telephone Support
  * 64 cores/2U, 2.0 GB RAM/core
  * //expansion:// 16 nodes, 8U, 256 cores

==== Microway ====

  * Node option: Avoton MicroBlade (up to 28 MicroBlades)
    * 6U Enclosure
    * Two single port Gigabit Ethernet connections, internal 2x 2.5 Gb Ethernet module
    * 1600W High efficiency power supply, 200-240V
    * 19 MicroBlades, each with 4 independent nodes; each node with:
    * Intel® Atom processor C2750, 8C, 2.40 GHz, 1x (total 32 cores/blade) <--- new, see below
    * 8GB DDR3 1600 MHz ECC/Unbuffered 1x (total 32 GB RAM/blade)
    * 16GB SATA Solid State Drive (SSD) 1x <--- important
    * CentOS 6, Slurm, OpenMPI, GNU compilers installation (MCMS - Microway provisioning tool)
    * MicroBlade Chassis Management Module (virtual media - usb/cdrom)
    * 3-Year Warranty on Parts and Labor with Perpetual Email and Telephone Support
  * 608 cores/6U, 1.0 GB RAM/core
  * //expansion:// 76 nodes, 6U, 608 cores

  * [[http://www.servethehome.com/Server-detail/intel-atom-c2750-8-core-avoton-rangeley-benchmarks-fast-power/]]
    * Low overall power consumption; performance is generally around half that of the Intel Xeon E3-1200 V3 series
  * [[http://ark.intel.com/compare/65732,83356,75789,77987]]
    * Compare Xeon E5 v2, E5 v3 and Atom C2750
  
\\
**[[cluster:0|Back]]**