This page is maintained to provide the information users need to get started on the compute cluster. It is a merger of the old “brief description” page and the “queue description” page.
The High Performance Compute Cluster (HPCC) comprises several login and storage nodes (all are on our domain wesleyan.edu, behind VPN for off-campus access):

cottontail (Supermicro 4U), primary scheduler and snapshot engine for /home
cottontail2 (HP Proliant G380 2U), backup scheduler
swallowtail (Dell PowerEdge 2950 2U), backup scheduler, databases
petaltail (Dell PowerEdge 2950 2U), test box, Warewulf provisioning CentOS 6
whitetail (HP Proliant G380 2U), Warewulf OpenHPC provisioning CentOS 7
hpcmon (Supermicro 1U, CentOS 6)
greentail52 (SuperMicro 36+2, 2U), /sanscratch
sharptail (Supermicro 4U), /home NFS server
sharptail2 (Supermicro 2U), disaster recovery for /home, off site (active users only)
rstore0 and rstore2 (Supermicro 4U), NFS mounts and Samba shares (2x 120T)
rstore4 and rstore6 (Supermicro 4U), NFS mounts and Samba shares (2x 220T)
mstore0/mstore1 (Supermicro 4U), available on HPC (2x 110T)

Several types of compute nodes are available via the scheduler:
Compute node categories which usually align with queues:
All queues are available for job submission via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12). Our total job slot count is roughly 2,144, with a physical core count of 1,480. Our total compute capacity is about 58 teraflops on the CPU side, and on the GPU side about 25 teraflops (double precision floating point) or 702 teraflops (mixed mode). Our total memory footprint is about 8,532 GB on the CPU side and 528 GB on the GPU side.
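How jobs are actually submitted depends on the scheduler; the sketch below assumes an LSF-style batch system (bsub with #BSUB directives), which is an assumption on our part. The queue name, slot count, and file names are illustrative only, so consult the sample scripts referenced at the bottom of this page for the exact syntax.

```bash
#!/bin/bash
# Minimal batch job sketch, assuming an LSF-style scheduler (bsub/#BSUB).
#BSUB -q hp12            # target queue (see the queue table below)
#BSUB -n 8               # number of job slots requested
#BSUB -J mytest          # job name
#BSUB -o mytest.%J.out   # stdout file (%J expands to the job ID)
#BSUB -e mytest.%J.err   # stderr file

# program invocation goes here
./my_program input.dat
```

Such a script would be submitted with `bsub < myjob.sh` and monitored with `bjobs`.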
The home directory file system is provided (via NFS or IPoIB) by the node sharptail (our file server) from a direct-attached disk array. In total, 10 TB of /home disk space is accessible to users. Node greentail52 makes available 55 TB of scratch space at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (except the nodes in queue tinymem). The scheduler automatically creates a directory in both scratch areas for each job (named after the JOBPID). Backup services for /home are provided via disk-to-disk point-in-time snapshots from node sharptail to the disk arrays of node cottontail (daily, weekly, and monthly snapshots are mounted read-only on cottontail for self-serve content retrieval). Some faculty have their home directories on node ringtail, which provides 33 TB via /home33. Some faculty also have their own storage (2x 110 TB via /mindstore). In addition, no-quota, no-backup user directories can be requested in /homeextra1 (7 TB) or /homeextra2 (5 TB). All home directories will migrate to a FreeNAS/ZFS appliance named hpcstore in 2020 (190 TB usable, scalable to 1.2 PB).
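As a rough sketch of how the per-job scratch directories might be used inside a job script (again assuming LSF-style conventions; the $LSB_JOBID variable, queue name, and file names are assumptions, and the per-job directory is named after the job ID as described above):

```bash
#!/bin/bash
#BSUB -q mw128
#BSUB -n 1
#BSUB -o out.%J

# Hypothetical use of the per-job scratch directory created by the scheduler.
# $LSB_JOBID holds the job ID under LSF-style schedulers; adjust as needed.
MYSCRATCH=/sanscratch/$LSB_JOBID

cp ~/project/input.dat "$MYSCRATCH"/      # stage input into shared scratch
cd "$MYSCRATCH"
./my_program input.dat > results.out      # compute against scratch storage
cp results.out ~/project/                 # copy results back to /home
```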
Two (old) Rstore storage servers each provide about 104 TB of usable backup space, which is not mounted on the compute nodes. Each Rstore server's content is replicated to a dedicated passive standby server of the same size, located in the same data center but in a different rack. As of Spring 2019 we have added two new Rstore servers of 220 TB each, fully backed up with replication.
Commercial software has its own queue, limited by the number of available licenses. There are no scheduler license resources; simply submit jobs to the appropriate queue. Commercial software jobs are processed on the nodes of queues mw256fd and mw128.
Queue | Nr Of Nodes | Total GB Mem Per Node | Total Cores In Queue | Switch | Hosts | Notes |
---|---|---|---|---|---|---|
stata | na | na | na | QDR Infiniband | any host | 6 licenses |
Note: Matlab and Mathematica now have “unlimited licenses”.
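As a sketch of how a license-limited package would be run (assuming the same LSF-style bsub syntax as above, and a hypothetical Stata batch file), jobs are simply submitted to the stata queue; if all six license slots are busy, the job pends until one frees up:

```bash
# Hypothetical Stata batch submission to the license-limited queue.
bsub -q stata -n 1 -o stata.%J.out stata -b do myanalysis.do
```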
Queue | Nr Of Nodes | Total GB Mem Per Node | Job Slots In Queue | Switch | Hosts | Notes |
---|---|---|---|---|---|---|
hp12 | 32 | 12 | 256 | QDR Infiniband | n1-n32 | CPU |
mwgpu | 5 | 256 | 120 | QDR Infiniband | n33-n37 | GPU & CPU |
mw256fd | 8 | 256 | 192 | QDR Infiniband | n38-n45 | CPU |
tinymem | 14 | 32 | 448 | gigabit ethernet | n39-n59 | CPU |
mw128 | 18 | 128 | 648 | gigabit ethernet | n60-n77 | CPU |
amber128 | 1 | 128 | 24 | gigabit ethernet | n78 | GPU & CPU |
exx96 | 12 | 96 | 432 | gigabit ethernet | n79-n90 | GPU & CPU |
Some guidelines for appropriate queue usage with detailed page links:
There are no wall time limits in our HPCC environment except for queue test. You are responsible for checkpointing, though. All nodes in all queues are DMTCP enabled (read the DMTCP page). Login nodes and storage nodes are on UPS, but all compute nodes are on utility power. Crashes do happen, so be prepared to restart your long-running jobs.
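As a rough sketch of checkpointing a long-running job with DMTCP (the program name and checkpoint interval are illustrative; see the DMTCP page for the workflow supported here):

```bash
# Launch a program under DMTCP, writing a checkpoint roughly every hour (3600 s).
dmtcp_launch --interval 3600 ./my_long_program input.dat

# After a crash or reboot, restart from the latest checkpoint images using the
# restart script that DMTCP generates in the working directory.
./dmtcp_restart_script.sh
```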
For home directory policy and Rstore storage options consult the HomeDir and Storage Options page
Checkpointing is supported in all queues; for how it works consult the DMTCP page
For a list of installed software consult the Software List page, endless…
For details on all scratch spaces consult the Scratch Spaces page
For HPCC acknowledgements consult the Acknowledgement page
Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at /home/hmeij/jobs/
From off-campus you need to VPN in first at http://vpn.wesleyan.edu