* 36 nodes with dual quad core (Xeon 5620, 2.4 GHz) sockets in HP blades (SL2x170z G6) with memory footprints of 12 GB each, all on Infiniband (QDR) interconnects.
* 32 nodes with dual quad core (Xeon 5345, 2.3 GHz) sockets in Dell PowerEdge 1950 rack servers with memory footprints ranging from 8 GB to 16 GB (256 job slots; total memory footprint of these nodes is 340 GB). Only 16 of these nodes are on Infiniband (SDR) interconnects.
* 25 nodes with dual single core AMD Opteron Model 250 (2.4 GHz) sockets with a memory footprint of 24 GB.
* 5 nodes with dual eight core Intel Xeon E5-2660 sockets (2.2 GHz) in ASUS Supermicro rack servers with a memory footprint of 256 GB each (1.28 TB total). Hyperthreading is turned on, doubling the core count to 32/node (120 job slots for regular HPC). The nodes also contain 4 GPUs each (total of 20), with 20 CPU cores (job slots) reserved for them.
All queues are available for job submissions via the login nodes greentail and swallowtail.
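For illustration, a submission from one of the login nodes might look like the following batch script. This is a sketch only: the scheduler syntax shown is LSF-style (`bsub`/`#BSUB`), and the queue name `hp12`, directory, and program name are placeholders, not details taken from this page.

```shell
#!/bin/bash
# Hypothetical LSF-style job script; queue, file, and program names are examples only.
#BSUB -q hp12            # target queue (placeholder name)
#BSUB -n 8               # request 8 job slots
#BSUB -J myjob           # job name
#BSUB -o myjob.%J.out    # stdout file (%J expands to the job ID)
#BSUB -e myjob.%J.err    # stderr file

# Run the application from the shared (NFS-mounted) home directory
cd $HOME/myrun
./myprogram
```

Assuming an LSF-style scheduler, such a script would be submitted from a login node with `bsub < myjob.sh`, and its status checked with `bjobs`.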
Home directory file systems are provided (via NFS or IPoIB) by the login node "
The 25-node cluster listed above also runs our Hadoop cluster.
\\ | \\ | ||
**[[cluster: