cluster:120 [2014/02/21 15:24] (current) hmeij
**[[cluster:
This outdated page was replaced by [[cluster:
 --- //

Updated
 --- //
==== New Configuration ====
The Academic High Performance Compute Cluster comprises two login nodes (greentail and swallowtail,
Several types of compute
  * 32 nodes with dual quad core (Xeon 5620, 2.4 GHz) sockets in HP blades (SL2x170z G6) with memory footprints of 12 GB each, all on InfiniBand (QDR) interconnects.
  * 30 nodes with dual quad core (Xeon 5345, 2.3 GHz) sockets in Dell PowerEdge 1950 rack servers with memory footprints ranging from 8 GB to 16 GB. 256 job slots. Total memory footprint of these nodes is 340 GB. Only 16 nodes are on InfiniBand (SDR) interconnects,
  * 25 nodes with dual single core AMD Opteron Model 250 (2.4 GHz) processors with a memory footprint of 24 GB each. 50 job slots. Total memory footprint of the cluster is 600 GB. This cluster has an estimated capacity of 250-350 gigaflops. (can grow to 45 nodes,
  * 5 nodes with dual eight core E5-2660 Intel Xeon sockets (2.2 GHz) in ASUS/Supermicro rack servers with a memory footprint of 256 GB each (1.28 TB total). Hyperthreading is turned on, doubling the core count to 32 per node (120 job slots for regular HPC). The nodes also contain 4 GPUs per node (20 total) with 20 reserved CPU cores (job slots).
All queues are available for job submissions via the login nodes greentail and swallowtail;
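The "queues" and "job slots" terminology above suggests an LSF-style batch scheduler. As a minimal sketch of a submission script under that assumption (the queue name ''hp12'', job name, and program are hypothetical examples, not taken from this page — check the scheduler on a login node for the real queue list):

```shell
#!/bin/bash
# Hypothetical LSF submission script -- queue and program names are
# illustrative only; verify actual queue names on the login node.
#BSUB -q hp12            # target queue (hypothetical name)
#BSUB -n 8               # request 8 job slots
#BSUB -J myjob           # job name
#BSUB -o myjob.%J.out    # stdout file; %J expands to the job id

./my_program             # hypothetical executable
```

Such a script would typically be submitted from a login node with ''bsub < script.sh'' and monitored with ''bjobs''.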
Home directory file systems are provided (via NFS or IPoIB) by the login node "
The 25-node cluster listed above also runs our Hadoop cluster.