  * (old node) ''
  * (not to be used as) login node ''
  * (to be populated
  * Storage servers ''
Several types of compute nodes are available via the OpenLava scheduler, http://
  * All are running CentOS6.[4-9], x86_64, Intel Xeon chips (except the Angstrom blades, which are AMD Opteron)
  * All are on private networks (no internet access)
  * All mount /home (10TB, to be expanded to 25TB in fall 2017) and /sanscratch (33TB); a quick check is sketched below this list
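A minimal way to confirm a node's OS level and the shared mounts, assuming only the standard CentOS release file and the mount points named above:

<code bash>
# Verify the OS level and the shared file systems described above.
cat /etc/redhat-release        # expect a CentOS 6.x release string
df -h /home /sanscratch        # confirm the NFS/IPoIB-mounted file systems
</code>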
  * 42 nodes with dual single core chips (AMD Opteron Model 250, 2.4 Ghz) in Angstrom blade 12U enclosures with a memory footprint of 24 GB each (1,008 GB). This cluster has a compute capacity of 0.2-0.3 teraflops (estimated). Known as the Blue Sky Studio cluster, or the b-nodes (b0-b51), queue bss24, 84 job slots. Powered off when not in use.
  * 5 nodes with dual eight core chips (Xeon E5-2660, 2.2 Ghz) in ASUS 2U rack servers with a memory footprint of 256 GB each (1,280 GB). Nodes also contain four K20 Tesla GPUs each, 2,500 cores/gpu (10,000 gpu cores per node) with a GPU memory footprint of 5 GB (20 GB). This cluster has a compute capacity of 23.40 teraflops double or 70.40 teraflops single precision on the GPU side and 2.9 teraflops on the CPU side. Known as the Microway GPU cluster, or the nodes n33-n37,
  * 8 nodes with dual eight core chips (Xeon E5-2660, 2.2 Ghz) in Supermicro 1U rack servers with a memory footprint of 256 GB each (2,048 GB). This cluster has a compute capacity of 5.3 teraflops (estimated). Known as the Microway CPU cluster, or nodes n38-n45, queue mw256fd, 192 job slots.
  * 14 nodes with dual ten core chips (Xeon E5-2550 v3, 2.3 Ghz) in Supermicro 1U rack servers with a memory footprint of 32 GB each (448 GB). This cluster has a compute capacity of 12 teraflops (estimated). Known as the Microway tinymem cluster, or n46-n59, queue tinymem, 448 job slots.
  * 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 Ghz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway "mw128" cluster, queue mw128.

All queues are available for job submissions via all login nodes; a sketch of a submission script follows below. All nodes are on Infiniband switches for parallel computational jobs (excludes the bss24, tinymem and mw128 queues).
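As a minimal sketch of what a batch submission could look like (OpenLava accepts LSF-style ''#BSUB'' directives; the queue name mw256fd comes from this page, while the slot count, file names and ''myjob.sh'' are illustrative placeholders):

<code bash>
#!/bin/bash
# Minimal OpenLava submission sketch (LSF-style #BSUB directives).
#BSUB -q mw256fd          # target queue (any queue from the table below)
#BSUB -n 8                # job slots requested
#BSUB -J example          # job name
#BSUB -o example.%J.out   # stdout file, %J expands to the job id
#BSUB -e example.%J.err   # stderr file

./myjob.sh                # hypothetical workload
</code>

Save it as, say, example.sub, submit it from any login node with ''bsub < example.sub'', and monitor it with ''bjobs''.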
Home directory file systems are provided (via NFS or IPoIB) by the node ''
| mw256fd |
| tinymem |
| mw128 |
Some guidelines for appropriate queue usage with detailed page links:
  * nodes have a sataDOM (non-spinning 16G USB device on motherboard) for the operating system
  * do not use /
  * mw128 (bought with faculty startup funds) tailored for Gaussian jobs; a sample script is sketched after this list
    * About 2TB /
    * Priority access for Carlos'
  * test (swallowtail,
    * wall time of 8 hours of CPU usage (see the example below)
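Since the mw128 guideline above mentions Gaussian, here is a hedged sketch of a submission script for that queue. The /localscratch path and the ''g16'' command are assumptions (the exact local-scratch path is truncated on this page), not confirmed details:

<code bash>
#!/bin/bash
# Sketch only: a Gaussian job on the mw128 queue using node-local scratch.
# /localscratch is a hypothetical mount point for the ~2TB local disk
# mentioned above; substitute the real path.
#BSUB -q mw128
#BSUB -n 12
#BSUB -J gauss_example
#BSUB -o gauss_example.%J.out

export GAUSS_SCRDIR=/localscratch/$LSB_JOBID   # keep Gaussian scratch files off /home
mkdir -p "$GAUSS_SCRDIR"
g16 input.com                                  # hypothetical Gaussian invocation
rm -rf "$GAUSS_SCRDIR"                         # clean up local scratch afterwards
</code>

For the test queue, a short throwaway submission might look like the line below; ''-W'' sets an explicit run limit ([hour:]minute form in LSF-style schedulers) well under the queue's 8-hour CPU cap, and ''myjob.sh'' is again a placeholder:

<code bash>
# Sketch: a ten-minute smoke test on the test queue.
bsub -q test -W 0:10 -J smoketest -o smoketest.%J.out ./myjob.sh
</code>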