This is what we ended up buying in May 2013.
Topic | Description |
---|---|
General | 10 CPUs (80 cores), 20 GPUs (50,000 CUDA cores), 256 GB RAM/node (1,280 GB total), plus head node (256 GB)
Head Node | 1x 4U rackmount system (36 drive bays), 2x Xeon E5-2660 2.0 GHz, 20MB cache, 8 cores (16 cores total)
 | 16x16GB 240-pin DDR3 1600 MHz ECC (256 GB total, max 512 GB), ?x 10/100/1000 NIC (3 cables), 3x PCIe x16 full, 3x PCIe x8
 | 2x1TB 7200RPM (RAID 1) + 16x3TB (RAID 6), Areca RAID controller
 | Low-profile graphics card; ConnectX-3 VPI adapter card, single-port, FDR 56Gb/s (1 cable)
 | 1400W power supply, 1+1 redundant
Nodes | 5x 2U rackmountable chassis, 5x 2 Xeon E5-2660 2.0 GHz, 20MB cache, 8 cores (16 cores/node), Sandy Bridge series
 | 5x 16x16GB 240-pin DDR3 1600 MHz (256 GB/node, max 256 GB)
 | 5x 1x120GB SSD, 5x 4x NVIDIA Tesla K20 5 GB GPUs (4/node), 1 CPU : 2 GPU ratio
 | ?x 10/100/1000 NIC (1 cable), dedicated IPMI port, 5x 4 PCIe 3.0 x16 slots, 5x 8 PCIe 3.0 x8 slots
 | 5x ConnectX-3 VPI adapter card, single-port, QDR/FDR 40/56 Gb/s (1 cable)
 | 5x 1620W 1+1 redundant power supplies
Network | 1x 1U Mellanox InfiniBand QDR switch (18 ports) & HCAs (single port) + 3m QDR cable to existing Voltaire switch
 | 1x 1U 24-port rackmount switch, 10/100/1000, unmanaged (cables)
Rack | 1x 42U rack with power distribution (14U used)
Power | 2x PDU, basic rack, 30A, 208V; requires 1x L6-30 power outlet per PDU (NEMA L6-30P)
Software | CentOS, Bright Cluster Manager (1 year support), MVAPICH, OpenMPI, CUDA
 | scheduler and GNU compilers installed and configured
 | Amber12 (customer provides license), LAMMPS, NAMD, CUDA 4.2 (for apps) & 5
Warranty | 3-year parts and labor (lifetime technical support)
GPU Teraflops | 23.40 double, 70.40 single (derivation sketched after this table)
Quote | <!-- estimated at $124,845 --> Arrived; includes S&H and insurance
Includes | Cluster pre-installation service
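The GPU rows in these tables are straight multiples of per-card Tesla K20 figures. A minimal sketch of the arithmetic, assuming NVIDIA's published K20 numbers (2,496 CUDA cores, 1.17 TFLOPS double precision, 3.52 TFLOPS single precision; the per-card figures are an assumption, not stated in the quotes):

```python
# Peak GPU totals from per-card Tesla K20 specs. The per-card numbers are
# assumed from NVIDIA's published datasheet, not taken from the quotes:
# 2,496 CUDA cores, 1.17 TFLOPS double precision, 3.52 TFLOPS single.
K20_CORES = 2496
K20_DP_TFLOPS = 1.17
K20_SP_TFLOPS = 3.52

def gpu_totals(num_gpus):
    """Return (CUDA cores, peak DP TFLOPS, peak SP TFLOPS) for K20 cards."""
    return (num_gpus * K20_CORES,
            round(num_gpus * K20_DP_TFLOPS, 2),
            round(num_gpus * K20_SP_TFLOPS, 2))

print(gpu_totals(20))  # (49920, 23.4, 70.4) -- the 20-GPU quote above
print(gpu_totals(16))  # (39936, 18.72, 56.32) -- the 16-GPU quotes below
```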
Topic | Description |
---|---|
General | 12 CPUs (96 cores), 20 GPUs (50,000 CUDA cores), 128 GB RAM/node (768 GB total), plus head node (128 GB)
Head Node | 1x 2U rackmount system, 2x Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores
 | 8x16GB 240-pin DDR3 1600 MHz ECC (128 GB, max 512 GB), 2x 10/100/1000 NIC, 1x PCIe x16 full, 6x PCIe x8 full
 | 2x2TB 7200RPM (RAID 1, can hold 10), ConnectX-2 VPI adapter card, single-port, QDR 40Gb/s
 | 1920W power supply, redundant
Nodes | 6x 2U rackmountable chassis, 6x 2 Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores (16/node), Sandy Bridge series
 | 48x16GB 240-pin DDR3 1600 MHz (128 GB/node, 8 GB/core, max 256 GB)
 | 6x1TB 7200RPM, 5x 4x NVIDIA Tesla K20 8 GB GPUs (4/node), 1 CPU : 2 GPU ratio
 | 2x 10/100/1000 NIC, dedicated IPMI port, 4x PCIe 3.0 x16 slots
 | 6x ConnectX-2 VPI adapter card, single-port, QDR 40Gb/s
 | 6x 1800W redundant power supplies
Network | 1x Mellanox InfiniBand QDR switch (18 ports) & HCAs (single port) + 9x 7' cables (2 uplink cables)
 | 1x 1U 16-port rackmount switch, 10/100/1000, unmanaged (+ 7' cables)
Rack & Power | 42U, 4x PDU, basic, 1U, 30A, 208V, (10) C13; requires 1x L6-30 power outlet per PDU
Software | CentOS, Bright Cluster Manager (1 year support)
 | Amber12 (cluster install), LAMMPS (shared filesystem), (no NAMD)
Storage | 3U disk array, 28x2TB in RAID 6 (52 TB usable; see the capacity check after this table), cascade cable
Warranty | 3-year parts and labor (EC technical support?)
GPU Teraflops | 23.40 double, 70.40 single |
Quote | <!-- $124,372 incl $800 S&H --> Arrived |
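The 52 TB in the Storage row is usable capacity after RAID 6 parity, which sets aside two drives' worth of space. A quick check (the helper name is just for illustration):

```python
def raid6_usable_tb(drives, size_tb):
    """RAID 6 reserves two drives' worth of parity: usable = (n - 2) * size."""
    return (drives - 2) * size_tb

print(raid6_usable_tb(28, 2))  # 52 -- the 3U disk array above
print(raid6_usable_tb(16, 3))  # 42 -- the 16x3TB head-node array in the first quote
```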
Topic | Description |
---|---|
General | 8 CPUs (64 cores), 16 GPUs (40,000 CUDA cores), 128 GB RAM/node, plus head node (256 GB)
Head Node | 1x 4U rackmount system (36 drive bays), 2x Xeon E5-2660 2.0 GHz, 20MB cache, 8 cores (16 cores total)
 | 16x16GB 240-pin DDR3 1600 MHz ECC (256 GB total, max 512 GB), ?x 10/100/1000 NIC (3 cables), 3x PCIe x16 full, 3x PCIe x8
 | 2x1TB 7200RPM (RAID 1) + 16x3TB (RAID 6), Areca RAID controller
 | Low-profile graphics card; ConnectX-3 VPI adapter card, single-port, FDR 56Gb/s (1 cable)
 | 1400W power supply, 1+1 redundant
Nodes | 4x 2U rackmountable chassis, 4x 2 Xeon E5-2660 2.0 GHz, 20MB cache, 8 cores (16 cores/node), Sandy Bridge series
 | 4x 8x16GB 240-pin DDR3 1600 MHz (128 GB/node, max 256 GB)
 | 4x 1x120GB SSD, 4x 4x NVIDIA Tesla K20 5 GB GPUs (4/node), 1 CPU : 2 GPU ratio
 | ?x 10/100/1000 NIC (1 cable), dedicated IPMI port, 4x 4 PCIe 3.0 x16 slots, 4x 8 PCIe 3.0 x8 slots
 | 4x ConnectX-3 VPI adapter card, single-port, QDR/FDR 40/56 Gb/s (1 cable)
 | 4x 1620W 1+1 redundant power supplies
Network | 1x 1U Mellanox InfiniBand QDR switch (18 ports) & HCAs (single port) + 3m QDR cable to existing Voltaire switch
 | 1x 1U 24-port rackmount switch, 10/100/1000, unmanaged (cables)
Rack | 1x 42U rack with power distribution (14U used)
Power | 2x PDU, basic rack, 30A, 208V; requires 1x L6-30 power outlet per PDU (NEMA L6-30P; see the power-budget check after this table)
Software | CentOS, Bright Cluster Manager (1 year support), MVAPICH, OpenMPI, CUDA
 | scheduler and GNU compilers installed and configured
 | Amber12 (customer provides license), LAMMPS, NAMD, CUDA 4.2 (for apps) & 5
Warranty | 3-year parts and labor (lifetime technical support)
GPU Teraflops | 18.72 double, 56.32 single
Quote | <!-- estimated at $106,605 --> Arrived; includes S&H and insurance
Includes | Cluster pre-installation service |
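The PDU rows can be sanity-checked against the nameplate ratings of the power supplies. A rough sketch for this 4-node quote, assuming the conventional 80% continuous-load derating for NEMA circuits (the derating is an assumption, not something stated in the quote, and nameplate ratings overstate actual draw):

```python
# Usable power from 2x 30A/208V PDUs (NEMA L6-30), with an assumed 80%
# continuous-load derating.
available_w = 2 * 30 * 208 * 0.80   # 9,984 W usable across both PDUs

# Nameplate supply ratings from this quote: 4 nodes at 1,620 W plus the
# 1,400 W head node.
nameplate_w = 4 * 1620 + 1400       # 7,880 W

print(available_w, nameplate_w)     # 9984.0 7880 -- fits with headroom
```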
Topic | Description |
---|---|
General | 13 nodes, 26 CPUs (208 cores), 128 GB RAM/node (1,664 GB total; see the totals check after this table), plus head node (256 GB)
Head Node | 1x 4U rackmount system (36 drive bays), 2x Xeon E5-2660 2.0 GHz, 20MB cache, 8 cores (16 cores total)
 | 16x16GB 240-pin DDR3 1600 MHz ECC (256 GB total, max 512 GB), ?x 10/100/1000 NIC (3 cables), 3x PCIe x16 full, 3x PCIe x8
 | 2x1TB 7200RPM (RAID 1) + 16x3TB (RAID 6), Areca RAID controller
 | Low-profile graphics card; ConnectX-3 VPI adapter card, single-port, FDR 56Gb/s (1 cable)
 | 1400W power supply, 1+1 redundant
Nodes | 13x 2U rackmountable chassis, 13x 2 Xeon E5-2660 2.0 GHz, 20MB cache, 8 cores (16 cores/node), Sandy Bridge series
 | 13x 8x16GB 240-pin DDR3 1600 MHz (128 GB/node, max 256 GB)
 | 13x 1x120GB SSD
 | ?x 10/100/1000 NIC (1 cable), dedicated IPMI port, 4x 4 PCIe 3.0 x16 slots, 4x 8 PCIe 3.0 x8 slots
 | 13x ConnectX-3 VPI adapter card, single-port, QDR/FDR 40/56 Gb/s (1 cable)
 | 13x 600W non-redundant power supplies
Network | 1x 1U Mellanox InfiniBand QDR switch (18 ports) & HCAs (single port) + 3m QDR cable to existing Voltaire switch
 | 1x 1U 24-port rackmount switch, 10/100/1000, unmanaged (cables)
Rack | 1x 42U rack with power distribution (14U used)
Power | 2x PDU, basic rack, 30A, 208V; requires 1x L6-30 power outlet per PDU (NEMA L6-30P)
Software | CentOS, Bright Cluster Manager (1 year support), MVAPICH, OpenMPI, CUDA
 | scheduler and GNU compilers installed and configured
 | Amber12 (customer provides license), LAMMPS, NAMD, CUDA 4.2 (for apps) & 5
Warranty | 3-year parts and labor (lifetime technical support)
Quote | <!-- estimated at $104,035 --> Arrived; includes S&H and insurance
Includes | Cluster pre-installation service |
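The General rows are aggregates of the per-node figures; a one-liner makes the totals easy to re-check (the helper name is illustrative):

```python
def cluster_totals(nodes, cores_per_node, gb_per_node):
    """Aggregate core and memory counts over the compute nodes only."""
    return nodes * cores_per_node, nodes * gb_per_node

print(cluster_totals(13, 16, 128))  # (208, 1664) -- matches "208 cores" and 1,664 GB above
```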
Topic | Description |
---|---|
General | 8 CPUs (64 cores), 16 GPUs (40,000 CUDA cores), 128 GB RAM/node, plus head node (256 GB)
Head Node | 1x 2U rackmount system, 2x Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores
 | 16x16GB 240-pin DDR3 1600 MHz ECC (256 GB, max 512 GB), 2x 10/100/1000 NIC, 1x PCIe x16 full, 6x PCIe x8 full
 | 2x2TB RAID 1 7200RPM, 8x2TB RAID 6 7200RPM (can hold 10), ConnectX-2 VPI adapter card, single-port, QDR 40Gb/s
 | 1920W power supply, redundant
Nodes | 4x 2U rackmountable chassis, 4x 2 Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores (16/node), Romley series
 | 32x16GB 240-pin DDR3 1600 MHz (128 GB/node, 32 GB/GPU, max 256 GB)
 | 4x1TB 7200RPM, 4x 4x NVIDIA Tesla K20 8 GB GPUs (4/node), 1 CPU : 2 GPU ratio
 | 2x 10/100/1000 NIC, dedicated IPMI port, 4x PCIe 3.0 x16 slots
 | 4x ConnectX-2 VPI adapter card, single-port, QDR 40Gb/s
 | 4x 1800W redundant power supplies
Network | 1x Mellanox InfiniBand QDR switch (8 ports) & HCAs (single port) + 7x 7' cables (2 uplink cables)
 | 1x 1U 16-port rackmount switch, 10/100/1000, unmanaged (+ 7' cables)
Rack & Power | 42U, 2x PDU, basic, 1U, 30A, 208V, (10) C13; requires 1x L6-30 power outlet per PDU
Software | CentOS, Bright Cluster Manager (1 year support)
 | Amber12 (cluster install), LAMMPS (shared filesystem), (no NAMD)
Warranty | 3-year parts and labor (EC technical support?)
GPU Teraflops | 18.72 double, 56.32 single |
Quote | <!-- $103,150 incl $800 S&H --> Arrived |
Topic | Description |
---|---|
General | 13 nodes, 26 CPUs (208 cores), 128 GB RAM/node (1,664 GB total), plus head node (256 GB)
Head Node | 1x 2U rackmount system, 2x Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores
 | 16x16GB 240-pin DDR3 1600 MHz ECC (256 GB, max 512 GB), 2x 10/100/1000 NIC, 1x PCIe x16 full, 6x PCIe x8 full
 | 2x2TB RAID 1 7200RPM, 8x2TB RAID 6 7200RPM (can hold 10), ConnectX-2 VPI adapter card, single-port, QDR 40Gb/s
 | 1920W power supply, redundant
Nodes | 13x 1U rackmountable chassis, 13x 2 Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores (16/node), Romley series
 | 104x16GB 240-pin DDR3 1600 MHz (128 GB/node, max ??? GB)
 | 13x1TB 7200RPM
 | 2x 10/100/1000 NIC, dedicated IPMI port, 1x PCIe 3.0 x16 slot
 | 13x ConnectX-2 VPI adapter card, single-port, QDR 40Gb/s
 | 13x 480W non-redundant power supplies
Network | 1x Mellanox InfiniBand QDR switch (18 ports) & HCAs (single port) + 7x 7' cables (2 uplink cables)
 | 1x 1U 24-port rackmount switch, 10/100/1000, unmanaged (+ 7' cables)
Rack & Power | 42U, 2x PDU, basic, 1U, 30A, 208V, (10) C13; requires 1x L6-30 power outlet per PDU
Software | CentOS, Bright Cluster Manager (1 year support)
 | Amber12 (cluster install), LAMMPS (shared filesystem), NAMD
Warranty | 3-year parts and labor (EC technical support?)
Quote | <!-- $105,770 incl $800 S&H --> Arrived |
09nov12:
AC Specs
Topic | Description |
---|---|
General | 2 CPUs (16 cores), 3 GPUs (7,500 CUDA cores), 32 GB RAM/node
Head Node | None
Nodes | 1x 4U rackmountable chassis, 2x Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores (16 cores/node), Romley series
 | 8x4GB 240-pin DDR3 1600 MHz (32 GB/node, 11 GB/GPU, max 256 GB)
 | 1x120GB SATA 2.5" solid-state drive (OS drive), 7x3TB 7200RPM
 | 3x NVIDIA Tesla K20 8 GB GPUs (3/node), 1 CPU : 1.5 GPU ratio
 | 2x 10/100/1000 NIC, 3x PCIe 3.0 x16 slots
 | 1x ConnectX-3 VPI adapter card, single-port 56Gb/s
 | 2x 1620W redundant power supplies
Network | 1x 36-port InfiniBand FDR (56Gb/s) switch & 4x ConnectX-3 single-port FDR (56Gb/s) IB adapters + 2x 2m cables (should be 4)
Power | Rack power ready
Software | None
Warranty | 3-year parts and labor (AC technical support)
GPU Teraflops | 3.51 double, 10.56 single |
Quote | <!-- $33,067.43 S&H included --> Arrived |
12nov12:
EC Specs
Topic | Description |
---|---|
General | 8 CPUs (64 cores), 16 GPUs (40,000 CUDA cores), 64 GB RAM/node, plus head node
Head Node | 1x 1U rackmount system, 2x Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores
 | 8x8GB 240-pin DDR3 1600 MHz ECC (64 GB, max 256 GB), 2x 10/100/1000 NIC, 2x PCIe x16 full
 | 2x2TB 7200RPM (can hold 10), ConnectX-2 VPI adapter card, single-port, QDR 40Gb/s
 | 600W power supply
Nodes | 4x 2U rackmountable chassis, 8x Xeon E5-2660 2.20 GHz, 20MB cache, 8 cores (16/node), Romley series
 | 32x8GB 240-pin DDR3 1600 MHz (64 GB/node, 16 GB/GPU, max 256 GB)
 | 4x1TB 7200RPM, 16x NVIDIA Tesla K20 8 GB GPUs (4/node), 1 CPU : 2 GPU ratio
 | 2x 10/100/1000 NIC, dedicated IPMI port, 4x PCIe 3.0 x16 slots
 | 4x ConnectX-2 VPI adapter card, single-port, QDR 40Gb/s
 | 4x 1800W redundant power supplies
Network | 1x Mellanox InfiniBand QDR switch (8 ports) & HCAs (single port) + 7' cables
 | 1x 1U 16-port rackmount switch, 10/100/1000, unmanaged (+ 7' cables)
Power | 2x PDU, basic, 1U, 30A, 208V, (10) C13; requires 1x L6-30 power outlet per PDU
Software | CentOS, Bright Cluster Manager (1 year support)
 | Amber12 (cluster install), LAMMPS (shared filesystem), (Barracuda for weirlab?)
Warranty | 3-year parts and labor (EC technical support?)
GPU Teraflops | 18.72 double, 56.32 single |
Quote | <!-- $93,600 + S&H --> Arrived |
HP 19nov12: meeting notes
HP Specs
http://h18004.www1.hp.com/products/quickspecs/14405_div/14405_div.HTML
Topic | Description |
---|---|
General | 6 CPUs (48 cores total), 18 GPUs (45,000 CUDA cores), 64 GB RAM/node, no head node
Head Node | None
Chassis | 2x s6500 chassis (4U); each holds 2 half-width SL250s (Gen8) servers, rackmounted, 4x 1200W power supplies, 1x 4U rack blank
Nodes | 3x SL250s (Gen8), 3x 2 Xeon E5-2650 2.0 GHz, 20MB cache, 8 cores (16 cores/node), Romley series
 | 3x 16x8GB 240-pin DDR3 1600 MHz (64 GB/node, 10+ GB/GPU, max 256 GB)
 | 3x 2x500GB 7200RPM, 3x 6x NVIDIA Tesla K20 5 GB GPUs (6 GPUs/node), 1 CPU : 3 GPU ratio
 | 3x 2x 10/100/1000 NIC, dedicated IPMI port, 3x 8x PCIe 3.0 x16 slots (GPU), 3x 2x PCIe 3.0 x8
 | 3x 2x IB interconnect, QDR 40Gb/s; FlexibleLOM goes into a PCIe3 x8 slot
 | chassis-supplied power; 3x 1 PDU power cord (416151-B21)? - see below
Network | 1x Voltaire QDR 36-port InfiniBand 40Gb/s switch + 6x 5m QSFP IB cables
 | No Ethernet switch; 17x 7' Cat5 RJ45 cables
Power | rack PDU ready; what is 1x HP 40A HV Core Only Corded PDU???
Software | RHEL, CMU GPU-enabled (1 year support) - not on quote???
Warranty | 3-year parts and labor (HP technical support?)
GPU Teraflops | 21.06 double, 63.36 single |
Quote | <!-- $128,370, for a 1x6500+2xSl250 setup estimate is $95,170 --> Arrived (S&H and insurance?) |
AX Specs
http://www.amax.com/hpc/productdetail.asp?product_id=simcluster Fremont, CA
Topic | Description |
---|---|
General | 8 CPUs (48 cores), 12 GPUs (30,000 CUDA cores), 64 GB RAM/node, plus head node
Head Node | 1x 1U rackmount system, 2x Intel Xeon E5-2620 2.0 GHz (12 cores total)
 | 64GB DDR3 1333 MHz (max 256 GB), 2x 10/100/1000 NIC, 2x PCIe x16 full
 | 2x1TB (RAID 1) 7200RPM, InfiniBand adapter card, single-port, QSFP 40Gb/s
 | ???W power supply, CentOS
Nodes | 4x 1U, 4x 2 Intel Xeon E5-2650 2.0 GHz with 6 cores (12 cores/node), Romley series
 | 4x 96GB 240-pin DDR3 1600 MHz (96 GB/node, 8 GB/GPU, max 256 GB)
 | 4x1TB 7200RPM, 12x NVIDIA Tesla K20 8 GB GPUs (3/node), 1 CPU : 1.5 GPU ratio
 | 2x 10/100/1000 NIC, dedicated IPMI port, 4x PCIe 3.0 x16 slots
 | 4x InfiniBand adapter card, single-port, QSFP 40Gb/s
 | 4x ??00W redundant power supplies
Network | 1x InfiniBand switch (18 ports) & HCAs (single port) + ?' cables
 | 1x 1U 24-port rackmount switch, 10/100/1000, unmanaged (+ ?' cables)
Power | there are 3 rack PDUs? What are the connectors, L6-30? |
Software | CUDA only |
Warranty | 3-year parts and labor (AX technical support?)
GPU Teraflops | 14.04 double, 42.24 single
Quote | <!-- $73,965 (S&H $800 included) --> Arrived |
MW Specs
http://www.microway.com/tesla/clusters.html Plymouth, MA
Topic | Description |
---|---|
General | 8 CPUs (64 cores), 16 GPUs (40,000 CUDA cores), 32 GB RAM/node, plus head node
Head Node | 1x 2U rackmount system, 2x Xeon E5-2650 2.0 GHz, 20MB cache, 8 cores
 | 8x4GB 240-pin DDR3 1600 MHz ECC (32 GB, max 512 GB), 2x 10/100/1000 NIC, 3x PCIe x16 full, 3x PCIe x8
 | 2x1TB 7200RPM (RAID 1) + 6x2TB (RAID 6), Areca RAID controller
 | Low-profile graphics card; ConnectX-3 VPI adapter card, single-port, FDR 56Gb/s
 | 740W power supply, 1+1 redundant
Nodes | 4x 1U rackmountable chassis, 4x 2 Xeon E5-2650 2.0 GHz, 20MB cache, 8 cores (16/node), Sandy Bridge series
 | 4x 8x4GB 240-pin DDR3 1600 MHz (32 GB/node, 8 GB/GPU, max 256 GB)
 | 4x 1x120GB SSD, 4x 4x NVIDIA Tesla K20 5 GB GPUs (4/node), 1 CPU : 2 GPU ratio
 | 2x 10/100/1000 NIC, dedicated IPMI port, 4x PCIe 3.0 x16 slots
 | 4x ConnectX-3 VPI adapter card, single-port, FDR 56Gb/s
 | 4x 1800W (non-)redundant power supplies
Network | 1x Mellanox InfiniBand FDR switch (36 ports) & HCAs (single port) + 3m FDR cable to existing Voltaire switch
 | 1x 1U 48-port rackmount switch, 10/100/1000, unmanaged (cables)
Rack | 
Power | 2x PDU, basic rack, 30A, 208V; requires 1x L6-30 power outlet per PDU (NEMA L6-30P)
Software | CentOS, Bright Cluster Manager (1 year support), MVAPICH, OpenMPI, CUDA 5
 | scheduler and GNU compilers installed and configured
 | Amber12, LAMMPS, Barracuda (for weirlab?), and others if desired … bought through MW
Warranty | 3-year parts and labor (lifetime technical support)
GPU Teraflops | 18.72 double, 56.32 single |
Quote | <!-- estimated at $95,800 --> Arrived; includes S&H and insurance
Upgrades | Cluster pre-installation service
 | 5x 2 E5-2660 2.20 GHz 8-core CPUs
 | 5x upgrade to 64 GB per node
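With quoted totals and peak numbers side by side, dollars per double-precision teraflop is a quick figure of merit for the GPU configurations. A sketch using the commented quote figures above (ballpark only: S&H and insurance terms differ between quotes, and the AC quote is a single node rather than a cluster):

```python
# Dollars per peak double-precision TFLOPS, from the Quote and
# GPU Teraflops rows above (prices rounded to whole dollars).
quotes = {
    "MW": (95_800, 18.72),
    "EC": (93_600, 18.72),
    "AX": (73_965, 14.04),
    "HP": (128_370, 21.06),
    "AC": (33_067, 3.51),
}
for name, (usd, dp_tflops) in quotes.items():
    print(f"{name}: ${usd / dp_tflops:,.0f} per DP TFLOPS")
```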
Then