This page is maintained to provide information to get users started using the compute cluster. It is a merger of the old "brief description"
| + | |||
| + | ==== In General ==== | ||
| + | |||
HPCC maintains and regularly updates an extensive software stack, including provisioning tools, resource management, file transfer clients, development tools, a variety of scientific libraries, compilers (e.g., gcc/g++, OneAPI), and communication libraries (e.g., OpenMPI). The stack is primarily provided by OpenHPC (https://openhpc.community).
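As a quick orientation, the sketch below shows how a user might inspect this stack with the environment modules system and compile a small MPI program against OpenMPI. The module names are illustrative only; the actual names depend on the installed OpenHPC release, so verify them with ''module avail'' on the login node.

<code bash>
# list the software modules currently available
module avail

# load a GNU compiler toolchain and a matching OpenMPI build
# (the module names here are examples, not the cluster's actual names)
module load gnu12 openmpi4

# compile an MPI program with the MPI compiler wrapper
mpicc -O2 hello_mpi.c -o hello_mpi
</code>

The same wrappers (''mpicc'', ''mpicxx'', ''mpif90'') are the usual way to build codes against the communication libraries mentioned above.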
| + | |||
| + | ITS funds the system administrative support (0.75 FTE) of one ITS employee. Power and cooling are funded and maintained by Physical Plant. On an annual basis Academic Affairs contributes $25K and the HPCC users contribute $15K. This sets up a 4 year refresh cycle of $160K. On an annual basis Finance contributes up to $10K for maintenance (failed disks etc, monies do not roll over, use it or loose it). | ||
| + | |||
| + | All HPCC hardware is located inside Wesleyan' | ||

===== Description =====

  * 10 nodes with dual 12-core chips (Xeon Silver 4410Y CPU @ 3.9 GHz), Emerald Rapids Microway servers with a memory footprint of 256 GB each (2,560 GB total, about 90 teraflops dpfp). These nodes hold four RTX4070Ti-Super GPUs each. Known as the "
  * 1 node with dual twelve-core chips (Xeon Silver 4520, 2.40 GHz) in an ASUS ESC4000-E11 2U rack server with a memory footprint of 512 GB (about 4 teraflops

All queues are available for job submissions via the cottontail2 login node. Some nodes are on InfiniBand switches for parallel computational jobs (queues: mw256fd, hp12, mw256).
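To illustrate how a job reaches these queues, here is a minimal submission sketch. It assumes Slurm is the resource manager (the scheduler commonly shipped with OpenHPC) and that the queue names above map directly to Slurm partitions; the partition name, task count, and walltime are placeholders, so check the scheduler documentation on cottontail2 for the actual limits.

<code bash>
#!/bin/bash
# Minimal batch script sketch; all values below are placeholders.
#SBATCH --job-name=hello
#SBATCH --partition=mw256        # one of the queues listed above
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
##SBATCH --gres=gpu:1            # uncomment on the GPU queues, if GRES is configured

# run an MPI binary across the allocated tasks
mpirun ./hello_mpi
</code>

Submit from the cottontail2 login node with ''sbatch job.sh'' and monitor with ''squeue -u $USER''.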