===== Specs: EC GPU =====

^  Topic  ^  Description  ^
|  General|12 CPUs (96 cores), 20 GPUs (45,000 CUDA cores), 128 GB RAM/node, plus head node (128 GB)|
|  Head Node|1x2U Rackmount System, 2x Xeon E5-2660 2.20 GHz 20MB Cache 8 cores|
|  |8x16GB 240-Pin DDR3 1600 MHz ECC (128 GB, max 512 GB), 2x 10/100/1000 NIC, 1x PCIe x16 Full, 6x PCIe x8 Full|
|  |2x2TB RAID1 7200RPM (can hold 10), ConnectX-2 VPI adapter card, Single-Port, QDR 40Gb/s|
|  |1920W Power Supply, redundant|
|  Nodes|6x2U Rackmountable Chassis, 6x2 Xeon E5-2660 2.20 GHz 20MB Cache 8 cores (16/node), Sandy Bridge series|
|  |48x16GB 240-Pin DDR3 1600 MHz (128 GB/node memory, 8 GB/core, max 256 GB)|
|  |6x1TB 7200RPM, 5x4 NVIDIA Tesla K20 8 GB GPUs (4/node), 1 CPU to 2 GPU ratio|
|  |2x 10/100/1000 NIC, Dedicated IPMI Port, 4x PCIe 3.0 x16 Slots|
|  |6x ConnectX-2 VPI adapter card, Single-Port, QDR 40Gb/s|
|  |6x 1800W Redundant Power Supplies|
|  Network|1x Mellanox InfiniBand QDR Switch (18 ports) & HCAs (single port) + 9x7' cables (2 uplink cables)|
|  |1x 1U 16 Port Rackmount Switch, 10/100/1000, Unmanaged (+ 7' cables)|
|  Software|CentOS, Bright Cluster Management (1 year support)|
|  |Amber12 (cluster install), LAMMPS (shared filesystem), (no NAMD)|
|  Storage|3U 52TB Disk Array (28x2TB) RAID 6, cascade cable|
|  Warranty|3 Year Parts and Labor (EC technical support?)|
|  GPU Teraflops|23.40 double, 70.40 single (see the arithmetic sketch below the table)|
|  Quote|<html><!-- $124,372 incl $800 S&H --></html>Arrived|
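
The GPU teraflops, core count, and memory figures in the table follow from simple arithmetic. A minimal sanity-check sketch in Python, assuming the commonly cited Tesla K20 peaks of 1.17 TFLOPS double / 3.52 TFLOPS single per card (the per-card numbers are inferred, not part of the quote):

<code python>
# Sanity check of the spec table arithmetic.
# Assumed per-card K20 peaks (1.17 double / 3.52 single TFLOPS) are inferred,
# not taken from the quote itself.
gpus = 5 * 4                          # 5 GPU shelves x 4 K20s each = 20 GPUs
double_tflops = round(gpus * 1.17, 2) # 23.4 TFLOPS double precision
single_tflops = round(gpus * 3.52, 2) # 70.4 TFLOPS single precision

node_cores = 6 * 2 * 8                # 6 nodes x 2 sockets x 8 cores = 96 cores
ram_per_node = (48 // 6) * 16         # 8 DIMMs/node x 16 GB = 128 GB/node
ram_per_core = ram_per_node / 16      # 16 cores/node -> 8 GB/core

print(double_tflops, single_tflops)             # 23.4 70.4
print(node_cores, ram_per_node, ram_per_core)   # 96 128 8.0
</code>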
  
  
  * 5 GPU shelves
  * 1 CPU shelf
  * 4 PDUs!
  * 56TB raw (see the capacity sketch below)
  * LSI hardware RAID card
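
The 56TB raw bullet and the 52TB array row are consistent if the 52TB is read as usable space after RAID 6 parity; that reading is an inference, sketched here:

<code python>
# Raw vs. usable capacity for the 28x2TB array.
# Reading the quoted 52TB as post-RAID 6 usable space is an assumption.
disks, size_tb = 28, 2
raw_tb = disks * size_tb             # 56 TB raw, matches the bullet above
usable_tb = (disks - 2) * size_tb    # RAID 6 reserves two disks' worth of parity
print(raw_tb, usable_tb)             # 56 52
</code>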
  
  