Brief Guide to HPCC

This page is maintained to provide the information users need to get started on the compute cluster. It is a merger of the old “brief description” page and the “queue description” page.

Description

The High Performance Compute Cluster (HPCC) comprises several login nodes (all on our wesleyan.edu domain; off-campus access requires VPN):

  • primary login node cottontail (Supermicro 4U), OpenLava scheduler and snapshot engine for /home
  • secondary login node cottontail2 (HP Proliant G380 2U), backup scheduler
  • secondary login node swallowtail (Dell PowerEdge 2950 2U), backup scheduler, databases
  • sandbox petaltail (Dell PowerEdge 2950 2U), test box, Warewulf provisioning CentOS6
  • NFS server greentail52 (SuperMicro 36+2, 2U), /sanscratch
  • file server node sharptail (Supermicro 4U), /home NFS server (only log in when moving content)
  • DR node sharptail2 (Supermicro 2U), disaster recovery for /home, off site (active users only)
  • storage servers rstore0 and rstore2 (Supermicro 4U), NFS mounts and Samba shares (2x 120T)
  • storage servers rstore4 and rstore6 (Supermicro 4U), NFS mounts and Samba shares (2x 220T)
  • mindstore storage servers mstore0/mstore1 (Supermicro 4U), available on HPC (2x 110T)

Several types of compute nodes are available via the OpenLava scheduler, http://www.openlava.org:

  • All are running CentOS6.10 or CentOS7.7
  • All are x86_64, Intel Xeon chips from 2006 onwards
  • All are on private networks (192.168.x.x or 10.10.x.x, no internet)
  • All mount /home (10 TB, to be replaced by a FreeNAS/ZFS 190T appliance in 2020) and /sanscratch (xfs, 55 TB)
  • All have local disks providing varying amounts of /localscratch (usually Raid0, no backup!)
  • Hyperthreading is on, but only 50% of the logical cores are allocated via the scheduler

Compute node categories, which usually align with queues:

  • 32 nodes with dual quad core chips (Xeon 5620, 2.4 GHz) in HP blade 4U enclosures (SL2x170z G6) with a memory footprint of 12 GB each (384 GB total). This cluster has a compute capacity of 1.5 teraflops (measured using Linpack). Known as the HP cluster, or nodes n1-n32, queue hp12, 256 job slots.
  • 5 nodes with dual eight core chips (Xeon E5-2660, 2.2 GHz) in ASUS 2U rack servers with a memory footprint of 256 GB each (1,280 GB total). Each node also contains four K20 Tesla GPUs, about 2,500 cores per GPU (10,000 GPU cores per node) with a GPU memory footprint of 5 GB each (20 GB). This cluster has a compute capacity of 23.40 teraflops double precision or 70.40 teraflops single precision on the GPU side and 2.9 teraflops on the CPU side. Known as the Microway GPU cluster, or nodes n33-n37, queue mwgpu, 120 job slots. The old queue mw256 has been merged in.
  • 8 nodes with dual eight core chips (Xeon E5-2660, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 256 GB each (2,048 GB total). This cluster has a compute capacity of 5.3 teraflops (estimated). Known as the Microway CPU cluster, or nodes n38-n45, queue mw256fd, 192 job slots.
  • 14 nodes with dual ten core chips (Xeon E5-2550 v3, 2.3 GHz) in Supermicro 1U rack servers with a memory footprint of 32 GB each (448 GB total). This cluster has a compute capacity of 12 teraflops (estimated). Known as the Microway tinymem cluster, or nodes n46-n59, queue tinymem, 448 job slots.
  • 18 nodes with dual twelve core chips (Xeon E5-2650 v4, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 128 GB each (2,304 GB total). This cluster has a compute capacity of 14.3 teraflops (estimated). Known as the Microway “Carlos” CPU cluster, or nodes n60-n77, queue mw128, 648 job slots.
  • 1 node with dual eight core chips (Xeon E5-2620 v4, 2.10 GHz) in a Supermicro 1U rack server with a memory footprint of 128 GB. This node has four GTX1080 GPUs (32 GB total GPU memory). Known as the “amber128” queue, 24 job slots.
  • 12 nodes with dual twelve core chips (Xeon Silver 4214, 2.20 GHz) in ASUS ESC4000G4 2U rack servers with a memory footprint of 96 GB each (1,152 GB total, about 20 teraflops dpfp). Each node has four RTX2080S GPUs (32 GB total GPU memory) providing 702 teraflops (mixed mode). Known as the “rtx2080” rack, nodes n79-n90, queue exx96, 432 job slots.

All queues are available for job submissions via all login nodes. Some nodes are on Infiniband switches for parallel computational jobs (queues mw256fd and hp12). Our total job slot count is roughly 2,144 and our physical core count is 1,480. Our total teraflops compute capacity is about 58 on the CPU side, 25 on the GPU side (double precision floating point) and 702 on the GPU side (mixed mode). Our total memory footprint is about 528 GB on the GPU side and 8,532 GB on the CPU side.
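
The queue layout and slot counts above can be checked from any login node with the OpenLava command-line tools; a minimal sketch (output columns may differ slightly between versions):

  # list queues with their job slot limits and current usage
  bqueues

  # per-host status and job slot counts
  bhosts

  # static host resources (cpu type, cores, memory)
  lshosts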

The home directory file system is provided (via NFS or IPoIB) by the node sharptail (our file server) from a direct-attached disk array. In total, 10 TB of /home disk space is accessible to users. Node greentail52 makes 55 TB of scratch space available at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excludes queue tinymem). The scheduler automatically creates directories in both scratch areas for each job (named after the job id, JOBPID). Backup services for /home are provided via disk-to-disk point-in-time snapshots from node sharptail to the node cottontail disk arrays (daily, weekly and monthly snapshots are mounted read-only on cottontail for self-serve content retrievals). Some faculty have their home directories on node ringtail, which provides 33 TB via /home33. Some faculty also have their own storage (2x 110 TB via /mindstore). In addition, no-quota, no-backup user directories can be requested in /homeextra1 (7 T) or /homeextra2 (5 T). All home directories will migrate to a FreeNAS/ZFS appliance named hpcstore in 2020 (190T usable, scalable to 1.2P).
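
A minimal sketch of how the per-job scratch directories are typically used, assuming the scheduler exports the LSF-style $LSB_JOBID variable and has pre-created the directory named after the job id as described above; program and file names below are placeholders, and the samples in /home/hmeij/jobs/ are the authoritative reference:

  #!/bin/bash
  # stage data through /sanscratch rather than running out of /home
  #BSUB -q hp12
  #BSUB -n 1
  #BSUB -o out.%J
  #BSUB -e err.%J

  SCRATCH=/sanscratch/$LSB_JOBID        # shared scratch served by greentail52
  cd $SCRATCH
  cp ~/myproject/input.dat .            # stage input in (placeholder names)
  ./my_program input.dat > output.dat   # run in scratch
  cp output.dat ~/myproject/            # stage results back before the job ends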

Two (old) Rstore storage servers each provide about 104 TB of usable backup space, which is not mounted on the compute nodes. Each Rstore server's content is replicated to a dedicated passive standby server of the same size, located in the same data center but in a different rack. As of spring 2019 we have added two new Rstore servers of 220 T each, also fully backed up via replication.

Our Queues

Commercial software has its own queue, limited by the available licenses. There are no scheduler license resources; simply queue jobs up in the appropriate queue (see the sketch below the table). Jobs are processed on the nodes of the hp12, mwgpu and mw256fd queues. That can change if needed.

Queue   Nr Of Nodes   Total GB Mem Per Node   Total Cores In Queue   Switch           Hosts      Notes
stata   n/a           n/a                     n/a                    QDR Infiniband   any host   6 licenses
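
Since the scheduler knows nothing about licenses, a license-limited job is just a normal submission to the matching queue, which itself throttles how many run at once. A hedged one-line sketch (the stata binary name and batch flags are assumptions about the local install):

  # submit a Stata batch job to its license-limited queue
  bsub -q stata -n 1 -o stata.%J.out stata -b do myanalysis.do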

Note: Matlab and Mathematica now have “unlimited licenses”.

Queue      Nr Of Nodes   Total GB Mem Per Node   Job Slots In Queue   Switch             Hosts     Notes
hp12       32            12                      256                  QDR Infiniband     n1-n32    CPU
mwgpu      5             256                     120                  QDR Infiniband     n33-n37   GPU & CPU
mw256fd    8             256                     192                  QDR Infiniband     n38-n45   CPU
tinymem    14            32                      448                  gigabit ethernet   n46-n59   CPU
mw128      18            128                     648                  gigabit ethernet   n60-n77   CPU
amber128   1             128                     24                   gigabit ethernet   n78       GPU & CPU

Some guidelines for appropriate queue usage, with detailed page links (a submission sketch follows this list):

  • hp12 is the default queue
    • for processing lots of small to medium memory footprint jobs
  • mw256 (merged into mwgpu) is for jobs requiring large memory access (up to 24 job slots per node)
    • for exclusive use of a node, reserve all of its memory
  • mwgpu is primarily for GPU enabled software (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    • be sure to reserve one or more job slots for each GPU used, see Submitting GPU Jobs
    • be sure to use the correct wrapper script to set up mpirun from mvapich2
  • mw256fd is for jobs requiring large memory access (up to 24 job slots per node)
    • or requiring lots of threads (job slots) confined to a single node (Gaussian, AutoDock)
    • or requiring access to fast /localscratch, which is 175 GB on a 15K disk
    • or requiring the larger /localscratch5tb, which is a Raid 0 file system of 5 TB
      • stage temporary data in /localscratch5tb/username/ and it will not be removed
  • tinymem is for small serial jobs with small memory requirements
    • nodes have a sataDOM (non-spinning 16G USB device on the motherboard) for the operating system
    • do not use /localscratch on these nodes
  • mw128 (bought with faculty startup funds) is tailored for Gaussian jobs
    • About 2 TB of /localscratch (Raid 10) on each node
    • Priority access for Carlos' group until summer 2020
  • amber128 (donated hardware) is tailored for Amber16 jobs
    • Be sure to use mpich3 for Amber
    • Priority access for Amber jobs
  • test (swallowtail, petaltail, cottontail2)
    • wall time limit of 8 hours of CPU usage
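
A minimal sketch of queue and resource selection via #BSUB directives, assuming LSF-style resource strings are enabled; directive values and the application name are examples only, and the exact GPU reservation mechanism is documented on the Submitting GPU Jobs page:

  #!/bin/bash
  #BSUB -q mwgpu                     # GPU-capable queue (nodes n33-n37)
  #BSUB -n 1                         # reserve one job slot per GPU used
  #BSUB -J gpu_test
  #BSUB -o %J.out
  #BSUB -e %J.err

  # for a large-memory job on mw256fd you might instead use something like
  # (lines starting with ## are ignored by the scheduler):
  ##BSUB -q mw256fd
  ##BSUB -R "rusage[mem=250000]"     # reserve (nearly) all node memory, in MB

  # application launch goes here; for MPI on mwgpu use the mvapich2
  # wrapper script mentioned in the guidelines above
  ./my_gpu_app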

There are no wall time limits in our HPCC environment except for the queue test. You are responsible for checkpointing, though; consult the pages below. All nodes in all queues are BLCR enabled. Login nodes and storage nodes are on UPS, but all compute nodes are on utility power. Crashes do happen, so be prepared to restart your long-running jobs.
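
A rough illustration of BLCR-style checkpoint and restart (a sketch only; command options may differ on our nodes, and the DMTCP page below describes the supported workflow):

  # start the long-running program under BLCR control
  cr_run ./my_long_job &
  echo $! > job.pid

  # periodically write a checkpoint file of the running process
  cr_checkpoint -f my_job.ctx $(cat job.pid)

  # after a crash or node reboot, resume from the last checkpoint
  cr_restart my_job.ctx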

Other Stuff

For the home directory policy and Rstore storage options consult the HomeDir and Storage Options page

Checkpointing is supported in all queues; for how it works consult the DMTCP page

For a list of installed software consult the Software List page

For details on all scratch spaces consult the Scratch Spaces page

For HPCC acknowledgements consult Acknowledgement page

Sample scripts for job submissions (serial, array, parallel, forked and gpu) can be found at /home/hmeij/jobs/

From off-campus you need to VPN in first at http://webvpn.wesleyan.edu

