
Brief Guide to HPCC

This page is maintained to provide the information users need to get started on the compute cluster. It merges the old “brief description” and “queue description” pages.

Description

The High Performance Compute Cluster (HPCC) comprises several login nodes, all on our internal network (vlan 52, wesleyan.edu), so VPN is required for off-campus access as well as for students on campus.

Several types of compute nodes are available via the scheduler; these compute node categories usually align with the queues listed in the tables below.

All queues are available for job submission from all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,144 and our physical core count is 1,480. Our total compute capacity is about 58 teraflops on the CPU side, plus 25 teraflops (double precision floating point) or 702 teraflops (mixed mode) on the GPU side. Our total memory footprint is about 560 GB on the GPU side and 10,452 GB on the CPU side.

The home directory file system is provided (via NFS or IPoIB) by the node hpcstore (our file server) from a direct-attached disk array. In total, 235 TB of /zfshomes disk space is accessible to users. Node greentail52 makes 55 TB of scratch space available at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (except the nodes in queue tinymem). The scheduler automatically creates directories in both of these scratch areas for each job, named after the job ID; a sketch follows below. Backup services for /zfshomes are provided via disk-to-disk replication from node hpcstore to the node sharptail disk arrays. The TrueNAS/ZFS appliance performs daily snapshots with a retention window of 180 days. Some faculty and students have their home directories on node ringtail, which provides 33 TB via /home33, or on node ringtail2, which provides 66 TB via /home33. Some faculty and students also have their own storage (2x 110 TB via /mindstore). Static content should be migrated to the Rstore platform.
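
As an illustration of the per-job scratch workflow, here is a minimal sketch of a Slurm batch script that stages data through /sanscratch. The directory name based on $SLURM_JOB_ID, the input file, and the program are assumptions for the example; adapt it from the sample scripts in /zfshomes/hmeij/slurm.

  #!/bin/bash
  #SBATCH --job-name=scratch-demo
  #SBATCH --ntasks=1

  # Assumed: the scheduler-created per-job directory is /sanscratch/$SLURM_JOB_ID
  MYSCRATCH=/sanscratch/$SLURM_JOB_ID

  # Stage input to scratch, run there, then copy results back to /zfshomes
  cp $HOME/project/input.dat $MYSCRATCH/
  cd $MYSCRATCH
  ./my_program input.dat > output.log   # hypothetical application
  cp output.log $HOME/project/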

Our Queues

The scheduler does not manage any commercial software license resources; only Stata has a limited 6-user license.

Queue   Nr Of Nodes   Total GB Mem Per Node   Total Cores In Queue   Switch           Hosts      Notes
stata   na            na                      na                     QDR Infiniband   any host   6 licenses

Note: Matlab and Mathematica now have “unlimited licenses”.

Queue      Nr Of Nodes   Total GB Mem Per Node   Job Slots In Queue   Switch             Hosts       Notes
hp12       32            12                      256                  gigabit ethernet   n1-n32      CPU
mwgpu      5             256                     120                  QDR infiniband     n33-n37     GPU & CPU
mw256fd    8             256                     192                  QDR infiniband     n38-n45     CPU
tinymem    14            32                      448                  gigabit ethernet   n39-n59     CPU
mw128      18            128                     648                  gigabit ethernet   n60-n77     CPU
amber128   1             128                     24                   gigabit ethernet   n78         GPU & CPU
exx96      12            96                      432                  gigabit ethernet   n79-n90     GPU & CPU
test       2             192                     96                   gigabit ethernet   n100-n101   GPU & CPU
mw256      6             256                     672                  EDR infiniband     n102-n102   CPU
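
To target one of these queues, name it in your submission script. Below is a minimal sketch for an MPI job on one of the Infiniband queues under Slurm; the assumption that the Slurm partition name matches the queue name, as well as the module and program names, are illustrative only.

  #!/bin/bash
  #SBATCH --partition=mw256        # assumed: partition name matches the queue name
  #SBATCH --nodes=2                # run across two Infiniband-connected nodes
  #SBATCH --ntasks-per-node=8      # illustrative task count per node

  module load openmpi              # hypothetical module name, check what is installed
  mpirun ./my_mpi_program          # hypothetical MPI application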

Some guidelines for appropriate queue usage, with links to detailed pages:

NOTE: we are migrating from Openlava to Slurm during summer 2022. All queues except hp12 and mw256fd will be serviced by the cottontail2 Slurm scheduler.
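
For orientation during the migration, a rough correspondence between everyday Openlava and Slurm commands is sketched below; this is a generic comparison, not a site-specific reference, so consult the scheduler pages for the exact options used here.

  # Openlava (legacy)        # Slurm (cottontail2)
  bsub < run.sh              sbatch run.sh          # submit a job script
  bjobs                      squeue -u $USER        # list your jobs
  bkill JOBID                scancel JOBID          # kill a job
  bqueues                    sinfo                  # show queues/partitions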

There are no wall time limits in our HPCC environment except for queue test, but you are responsible for checkpointing. All nodes in all queues are DMTCP enabled (read the DMTCP page). Login nodes and storage nodes are on UPS power, but all compute nodes are on utility power. Crashes do happen, so be prepared to restart your long-running jobs.
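
A hedged sketch of the DMTCP workflow follows; the one-hour interval and the restart script name are DMTCP defaults rather than site-specific settings, so follow the DMTCP page for the recommended procedure here.

  # Launch the application under DMTCP, writing a checkpoint every 3600 seconds
  dmtcp_launch --interval 3600 ./my_long_running_program

  # After a node crash, resume from the newest checkpoint using the generated script
  ./dmtcp_restart_script.sh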

Other Stuff

For the home directory policy and Rstore storage options, consult the HomeDir and Storage Options page

Checkpointing is supported in all queues; for how it works, consult the DMTCP page

For a list of installed software, consult the Software List page (endless…)

For a list of installed OpenHPC software, consult the Software List page

For details on all scratch spaces, consult the Scratch Spaces page

For HPCC acknowledgements, consult the Acknowledgement page

Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found in /zfshomes/hmeij/jobs/, /zfshomes/hmeij/k20redo, and /zfshomes/hmeij/slurm
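
As one example, a minimal GPU job sketch might look like the following; the partition name and GPU resource request are assumptions, so compare it against the sample gpu scripts above before use.

  #!/bin/bash
  #SBATCH --partition=exx96       # assumed: a GPU queue, expressed as a Slurm partition
  #SBATCH --gres=gpu:1            # request one GPU; the resource string may differ here

  ./my_gpu_program                # hypothetical GPU-enabled application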

From off campus you need to VPN in first; download the GlobalProtect client at http://vpn.wesleyan.edu
