cluster:126 — revised 2017/12/06 10:40 by hmeij07
  * (old login node) ''
  * (old login node) ''
  * <del>(old node) ''
  * (not to be used as) login node ''
  * DR node ''
  * Storage servers ''
  * 32 nodes with dual quad core chips (Xeon 5620, 2.4 GHz) in HP blade 4U enclosures (SL2x170z G6) with a memory footprint of 12 GB each (384 GB). This cluster has a compute capacity of 1.5 teraflops (measured using Linpack). Known as the HP cluster, or the nodes n1-n32; queue hp12, 256 job slots.
- | |||
- | * 42 nodes with dual single core chips (AMD Opteron Model 250, 2.4 Ghz) in Angstrom blade 12U enclosures with a memory footprint of 24 GB each (1,008 GB). This cluster has a compute capacity of 0.2-0.3 teraflops (estimated). Known as the Blue Sky Studio cluster, or the b-nodes (b0-b51), queue bss24, 84 job slots. Powered off when not in use. | ||
  * 5 nodes with dual eight core chips (Xeon E5-2660, 2.2 GHz) in ASUS 2U rack servers with a memory footprint of 256 GB each (1,280 GB). Each node also contains four Tesla K20 GPUs, 2,500 cores/gpu (10,000 gpu cores per node) with a GPU memory footprint of 5 GB each (20 GB). This cluster has a compute capacity of 23.40 teraflops double precision or 70.40 teraflops single precision on the GPU side, and 2.9 teraflops on the CPU side. Known as the Microway GPU cluster, or the nodes n33-n37; queue mwgpu (120 job slots).
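The GPU-side teraflop figures above follow directly from the per-card peak ratings. A minimal sketch of the arithmetic, assuming NVIDIA's published Tesla K20 peaks of 1.17 TFLOPS double precision and 3.52 TFLOPS single precision per card:

```python
# Reproduce the mwgpu GPU-side capacity figures from per-card K20 peaks.
NODES = 5
GPUS_PER_NODE = 4
K20_DP_TFLOPS = 1.17  # peak double precision per K20 (vendor rating)
K20_SP_TFLOPS = 3.52  # peak single precision per K20 (vendor rating)

dp_total = NODES * GPUS_PER_NODE * K20_DP_TFLOPS
sp_total = NODES * GPUS_PER_NODE * K20_SP_TFLOPS
print(f"DP: {dp_total:.2f} TFLOPS, SP: {sp_total:.2f} TFLOPS")
# prints "DP: 23.40 TFLOPS, SP: 70.40 TFLOPS"
```

Both totals match the 23.40/70.40 teraflop numbers quoted for the cluster.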
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway "
+ | |||
+ | * 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 Ghz) in Supermicro 1U rack server with a memory footprint of 64 GB. This node has four GTX1080Ti gpus providing | ||
All queues are available for job submissions via all login nodes. All nodes are on Infiniband switches for parallel computational jobs (excludes the bss24, tinymem and mw128 queues).