This page is regularly maintained and provides the information users need to get started on the compute cluster. It is a merger of the old “brief description” page and the “queue description” page.
HPCC maintains and regularly updates an extensive software stack, including provisioning tools, resource management, file transfer clients, development tools, a variety of scientific libraries, a variety of compilers (e.g., gcc/g++, OneAPI), and communication libraries (e.g., OpenMPI). Most of this stack is provided by OpenHPC (https://openhpc.community/). Many open-source applications (about 100, used across many academic disciplines) are custom compiled. The HPCC website offers documentation to help users resolve technical issues they may encounter (https://dokuwiki.wesleyan.edu/doku.php?id=cluster:0). Additional technical support (and tutors) is provided by the Scientific Computing and Informatics Center (https://www.wesleyan.edu/scic/) and the Quantitative Analysis Center (https://www.wesleyan.edu/qac/).
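Much of the OpenHPC stack is typically exposed through Lmod environment modules. A minimal sketch of how a user might discover and load tools, assuming the standard OpenHPC module setup (module names such as gnu12 and openmpi4 are illustrative):

```bash
module avail                 # list software exposed through modules
module load gnu12 openmpi4   # load a GNU compiler toolchain and an MPI stack
module list                  # confirm what is currently loaded
```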
ITS funds system administration support (0.75 FTE of one ITS employee). Power and cooling are funded and maintained by Physical Plant. Annually, Academic Affairs contributes $25K and the HPCC users contribute $15K; at $40K per year this sets up a four-year hardware refresh cycle of $160K. Finance contributes up to $10K annually for maintenance (failed disks, etc.); these monies do not roll over (use it or lose it).
All HPCC hardware is located inside Wesleyan's ITS data center. Physical access is limited by swipe card to certain ITS personnel. All head/login nodes (2 to 4 or so) are located on an internal subnet protected by Wesleyan's enterprise-wide firewall; VPN is required from off campus. The internal HPCC network consists of two private subnets for the roughly 100 compute nodes (one for the job scheduler and monitoring tools, one for data transfers and NFS mounts). Access to HPCC resources is based on local Linux accounts and groups.
The High Performance Compute Cluster (HPCC) comprises several login and storage nodes, all on our internal network (vlan 52, wesleyan.edu), so VPN is required for off-campus access as well as for students on campus:

- cottontail (Supermicro 4U): old Openlava scheduler, CentOS 6
- cottontail2 (Supermicro 1U): new Slurm scheduler, Rocky 8, Warewulf, OpenHPC
- hpcmon (Supermicro 1U): CentOS 8
- petaltail, swallowtail (HP blades): Rocky 8
- greentail52 (Supermicro 36+2): serves out /sanscratch, CentOS 7, sandbox
- sharptail2 (Supermicro 2U): CentOS 8, OpenHPC
- rstore0 and rstore1 (Supermicro 4U): replicated, Samba shares (2x 440T)
- rstore2 and rstore3 (Supermicro 4U): replicated, Samba shares (2x 440T)
- mstore0/mstore1 (Supermicro 4U): replicated, mounted on all HPC nodes (2x 110T)
- M40HA TrueNAS: storage appliance for home directories, /zfshomes (500T)
- X20HA TrueNAS: replication target for M40HA (300T)

Several types of compute nodes are available via the scheduler.
Compute node categories usually align with queues:

- test queue: nodes n100-n101
- mw256 queue: nodes n102-n107; storage server “astrostore” serves these nodes via NFSoRDMA (EDR InfiniBand), about 164 TB
- mwgpu256 queue: nodes n108-n117

All queues are available for job submissions via the cottontail2 login node. Some nodes are on InfiniBand switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,672 against a physical core count of 1,336. Our total compute capacity is about 92 teraflops CPU side and 2,902 teraflops GPU side (mixed mode). Our total memory footprint is about 13,524 GB CPU side and 1,584 GB GPU side.
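Once logged in to cottontail2, the queues (Slurm partitions) and your jobs can be inspected with standard Slurm commands. A quick sketch (the partition name queried below is illustrative):

```bash
sinfo                          # list partitions (queues), their nodes and state
squeue -u $USER                # show your own pending and running jobs
scontrol show partition mw256  # details for a single partition
sbatch myjob.sh                # submit a batch script to the scheduler
```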
Home directory file systems are provided (via NFS or IPoIB) by node M40HA (our file server) from a direct-attached disk array. In total, 500 TB of /zfshomes disk space is accessible to users. Node greentail52 makes 55 TB of scratch space available at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excluding queue tinymem). The scheduler automatically creates directories in both of these scratch areas for each job (named after the JOBPID). Backup services for /zfshomes are provided via replication to an older X20HA TrueNAS/ZFS appliance. The M40HA TrueNAS/ZFS appliance performs daily snapshots with a retention window of 180 days. Some faculty and students have their home directories on node ringtail, which provides 33 TB via /home33, or on node ringtail2, which provides 66 TB via /home66. Some faculty and students also have their own storage (2x 110 TB via /mindstore). Static content should be migrated to the Rstore platform.
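A hedged sketch of how a job might stage data through the per-job scratch directories; it assumes the Slurm variable $SLURM_JOB_ID matches the directory name the scheduler creates (the JOBPID mentioned above), and the program and file names are hypothetical:

```bash
#!/bin/bash
#SBATCH --job-name=scratch-demo
#SBATCH --ntasks=1

# The scheduler pre-creates these per-job directories, named after the job ID.
SCRATCH=/sanscratch/$SLURM_JOB_ID     # shared scratch served by greentail52
LOCAL=/localscratch/$SLURM_JOB_ID     # node-local scratch (not on tinymem nodes)

cp ~/input.dat "$SCRATCH"/            # stage input out of /zfshomes
cd "$SCRATCH"
./my_program input.dat > output.dat   # hypothetical application
cp output.dat ~/results/              # copy results home before the job ends
```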
The scheduler does not manage any commercial software license resources. Only Stata has a limited six-user license; Matlab and Mathematica now have “unlimited” licenses.
| Queue | Nr of Nodes | Mem per Node (GB) | Job Slots in Queue | Switch | Hosts | Notes |
|---|---|---|---|---|---|---|
| hp12 | 32 | 12 | 256 | gigabit ethernet | n1-n32 | CPU |
| mwgpu | 5 | 256 | 120 | QDR InfiniBand | n33-n37 | GPU & CPU |
| mw256fd | 8 | 256 | 192 | QDR InfiniBand | n38-n45 | CPU |
| tinymem | 14 | 32 | 448 | gigabit ethernet | n46-n59 | CPU |
| mw128 | 18 | 128 | 648 | gigabit ethernet | n60-n77 | CPU |
| amber128 | 1 | 128 | 24 | gigabit ethernet | n78 | GPU & CPU |
| exx96 | 12 | 96 | 432 | gigabit ethernet | n79-n90 | GPU & CPU |
| test | 2 | 192 | 96 | gigabit ethernet | n100-n101 | GPU & CPU |
| mw256 | 6 | 256 | 672 | EDR InfiniBand | n102-n107 | CPU |
| mwgpu256 | 10 | 256 | 480 | gigabit ethernet | n108-n117 | GPU & CPU |
| exx512 | 1 | 512 | 48 | gigabit ethernet | n91 | GPU & CPU |
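To target one of the queues above under Slurm (where queues map to partitions), a minimal submission script might look like the following sketch; the partition, core, and memory requests are illustrative and should stay within the per-node limits in the table:

```bash
#!/bin/bash
#SBATCH --job-name=myrun
#SBATCH --partition=mw128       # pick a queue from the table above
#SBATCH --ntasks=8              # job slots requested
#SBATCH --mem=64G               # stay under the per-node memory limit
#SBATCH --output=myrun_%j.out   # %j expands to the job ID

srun ./my_program               # hypothetical application
```

For the GPU queues, a line such as `#SBATCH --gres=gpu:1` would request a GPU, assuming GRES is configured on those nodes.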
Some guidelines for appropriate queue usage, with links to detailed pages:
NOTE: we are migrating from Openlava to Slurm during summer 2022. All queues except hp12 and mw256fd will be serviced by the cottontail2 Slurm scheduler.
There are no wall time limits in our HPCC environment except for queue test, but you are responsible for checkpointing. All nodes in all queues are DMTCP enabled (read the DMTCP page). Login nodes and storage nodes are on UPS, but all compute nodes are on utility power. Crashes do happen; be prepared to restart your long-running jobs.
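Since compute nodes are on utility power, long-running jobs should checkpoint periodically. With DMTCP the basic pattern is roughly the following (a sketch only; the interval and checkpoint file names are illustrative, consult the DMTCP page for site-specific details):

```bash
# Launch the application under DMTCP, writing a checkpoint every hour.
dmtcp_launch --interval 3600 ./my_long_job

# After a crash or node reboot, restart from the checkpoint image(s).
dmtcp_restart ckpt_my_long_job_*.dmtcp
```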
- Home directory policy and Rstore storage options: HomeDir and Storage Options page
- Checkpointing is supported in all queues; how it works: DMTCP page
- For a list of software installed, consult the Software List page (endless…)
- For a list of OpenHPC software installed, consult the Software List page
- For details on all scratch spaces, consult the Scratch Spaces page
- For HPCC acknowledgements, consult the Acknowledgement page
Sample scripts for job submissions (serial, array, parallel, forked, and GPU) can be found in /zfshomes/hmeij/jobs/, /zfshomes/hmeij/k20redo, and /zfshomes/hmeij/slurm.
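As a flavor of what those examples cover, a minimal Slurm array job might look like this sketch (the real scripts in the directories above are authoritative; input file names are hypothetical):

```bash
#!/bin/bash
#SBATCH --job-name=sweep
#SBATCH --array=1-10    # ten independent tasks, one per input file
#SBATCH --ntasks=1

# Each array task receives its own index in SLURM_ARRAY_TASK_ID.
./my_program input_${SLURM_ARRAY_TASK_ID}.dat
```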
From off campus you need to VPN in first; download the GlobalProtect client at http://vpn.wesleyan.edu.
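Once the VPN is up, connect to a login node with SSH; the fully qualified hostname below is an assumption based on the node names above:

```bash
# Replace 'username' with your cluster account name.
ssh username@cottontail2.wesleyan.edu
```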