This page is maintained to provide the information users need to get started on the compute cluster. It merges the old "brief description" and "queue description" pages.
The High Performance Compute Cluster (HPCC) comprises several login nodes (all on our wesleyan.edu domain, behind VPN for off-campus access):

* cottontail (Supermicro 4U): OpenLava scheduler and snapshot engine
* cottontail2 (HP Proliant G380 2U): backup scheduler, standby for /sanscratch
* swallowtail (Dell PowerEdge 2950 2U): backup scheduler, databases
* petaltail (Dell PowerEdge 2950 2U): test box (may crash), Warewulf provisioning
* greentail (HP Proliant G380 2U): /sanscratch primary NFS server
* whitetail (Angstrom Blade 1U): Hadoop Cloudera test server
* sharptail (Supermicro 4U): /home primary NFS server
* homedr (Supermicro 2U): disaster recovery for /home, off site
* rstore0 and rstore2 (Supermicro 4U): NFS mounts and Samba shares

Several types of compute nodes are available via the OpenLava scheduler (http://www.openlava.org):
Compute node categories usually align with queues. All queues are available for job submissions via all login nodes. All nodes are on Infiniband switches for parallel computational jobs, except those in the bss24 and tinymem queues. Our total job slot count is roughly 1,040. Our total compute capacity is about 22 teraflops on the CPU side and 23 teraflops on the GPU side. Our total memory footprint is about 100 GB on the GPU side and 4,976 GB on the CPU side (excluding queue bss24).
The home directory file system is provided (via NFS or IPoIB) by the node "sharptail" (our file server) from a direct-attached disk array. In total, 10 TB of /home disk space is accessible to users. Node "greentail" makes 33 TB of scratch space available at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excluding queue tinymem). The OpenLava scheduler automatically creates directories in both scratch areas for each job (named after the JOBPID). Backup services for /home are provided via disk-to-disk snapshots from node "sharptail" to node "cottontail" disk arrays.
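For example, a job can stage its data in the per-job scratch directory and copy results back to /home when it finishes. Below is a minimal serial-job sketch; the #BSUB directives use OpenLava's LSF-style syntax, and $LSB_JOBID is assumed to be the JOBPID after which the scratch directories are named (the scripts in /home/hmeij/jobs/ are the authoritative templates):

```bash
#!/bin/bash
#BSUB -q hp12                 # target queue
#BSUB -J myjob                # job name
#BSUB -o myjob.%J.out         # stdout file; %J expands to the job ID
#BSUB -e myjob.%J.err         # stderr file

# The scheduler pre-creates this per-job directory, named after the job ID.
SCRATCH=/sanscratch/$LSB_JOBID

# Stage input, run in scratch, then copy results back to /home.
cp ~/project/input.dat $SCRATCH/
cd $SCRATCH
./my_program input.dat > output.dat   # my_program is a placeholder
cp output.dat ~/project/
```

Submit with `bsub < myjob.sh` and monitor with `bjobs`.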
A subset of 25 nodes of the Blue Sky Studio cluster listed above also runs our test Hadoop cluster. The namenode and login node is whitetail, which also hosts the Hadoop scheduler. It is based on the Cloudera CDH3U6 repository.
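For a quick sanity check of HDFS from whitetail, the standard Hadoop CLI can be used; the paths and file names below are illustrative only:

```bash
# Run from whitetail; directory and file names are hypothetical.
hadoop fs -mkdir /user/$USER/test             # create a work area in HDFS
hadoop fs -put mydata.txt /user/$USER/test/   # copy a local file into HDFS
hadoop fs -ls /user/$USER/test                # verify the file arrived
```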
Commercial software packages have their own queues, limited by the number of available licenses (there is no need to check out licenses). These jobs are processed on the nodes of the hp12, mw256, and mw256fd queues. That may change if needed.
| Queue | Nr Of Nodes | GB Mem Per Node | Total Cores In Queue | Switch | Hosts | Notes |
|---|---|---|---|---|---|---|
| matlab | na | na | na | QDR Infiniband | any host | 8/16 licenses |
| stata | na | na | na | QDR Infiniband | any host | 6 licenses |
| mathematica | na | na | na | QDR Infiniband | any host | unlimited licenses |
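A license-limited job is submitted to the matching queue and simply waits until a license slot frees up. A sketch for MATLAB, assuming a batch script named run.m (hypothetical):

```bash
#!/bin/bash
#BSUB -q matlab               # license-limited queue
#BSUB -J matlab-test
#BSUB -o matlab.%J.out

# -nodisplay/-nosplash run MATLAB without a GUI; run.m is a placeholder.
matlab -nodisplay -nosplash -r "run('run.m'); exit"
```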
| Queue | Nr Of Nodes | GB Mem Per Node | Total Cores In Queue | Switch | Hosts | Notes |
|---|---|---|---|---|---|---|
| hp12 | 32 | 12 | 256 | QDR Infiniband | n1-n32 | CPU |
| bss24 | 42 | 24 | 84 | gigabit ethernet | b1-b49 | CPU |
| mw256 | 5 | 256 | 140 | QDR Infiniband | n33-n37 | CPU |
| mwgpu | 5 | 256 | 20 | QDR Infiniband | n33-n37 | GPU & CPU |
| mw256fd | 8 | 256 | 256 | QDR Infiniband | n38-n45 | CPU |
| tinymem | 14 | 32 | 560 | gigabit ethernet | n39-n59 | CPU |
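For parallel jobs on the Infiniband queues, slots are requested with -n and may span nodes. A rough sketch, assuming an MPI binary your_app (hypothetical); how mpirun discovers the allocated hosts depends on the local MPI integration, so consult the parallel examples in /home/hmeij/jobs/ for the exact wrapper:

```bash
#!/bin/bash
#BSUB -q hp12                 # Infiniband-connected CPU queue
#BSUB -n 16                   # request 16 job slots, possibly across nodes
#BSUB -J mpi-test
#BSUB -o mpi.%J.out

# your_app is a placeholder; host list handling is site-specific.
mpirun -np 16 ./your_app
```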
Some guidelines for appropriate queue usage with detailed page links:
There are no wall time limits in our HPCC except for queue test. You are responsible for checkpointing, though. Consult the pages below; all nodes in all queues are BLCR enabled (a minimal checkpoint/restart sketch follows this list). Login nodes and storage nodes are on UPS, but all compute nodes are on utility power. Crashes do happen, so be prepared to restart your long-running jobs.
* Home directory policy and Rstore storage options: consult the HomeDir and Storage Options page
* Checkpointing is supported in all queues: consult the BLCR page
* For a list of installed software, consult the Software List page
* For HPCC acknowledgements, consult the Acknowledgement page
* Sample scripts for job submissions (serial, array, parallel, forked and GPU) can be found at /home/hmeij/jobs/
* From off campus you need to VPN in first at http://webvpn.wesleyan.edu
* The Dell Racks Power Off page may also offer some helpful hints
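As referenced above, a long-running job can be checkpointed with BLCR and restarted after a crash. A minimal interactive sketch using the standard BLCR commands (my_long_job and the context file name are placeholders; the BLCR page covers the queue-integrated workflow):

```bash
# Start the program under BLCR's checkpoint/restart library.
cr_run ./my_long_job &            # my_long_job is a placeholder binary
PID=$!

# Periodically dump its state to a context file; by default the
# process keeps running after the checkpoint is written.
cr_checkpoint -f context.$PID $PID

# After a crash (or on another node), resume from the saved context.
cr_restart context.$PID
```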