Brief Guide to HPCC

This page is maintained to provide the information users need to get started on the compute cluster. It merges the old “brief description” and “queue description” pages.

Description

The High Performance Compute Cluster (HPCC) consists of several login nodes, all in our wesleyan.edu domain (VPN required for off-campus access):

  • primary login node cottontail (Supermicro 4U): OpenLava scheduler and snapshot engine
  • secondary login node cottontail2 (HP Proliant G380 2U): backup scheduler, standby for /sanscratch
  • secondary login node swallowtail (Dell PowerEdge 2950 2U): backup scheduler, databases
  • old login node petaltail (Dell PowerEdge 2950 2U): test box (may crash), Warewulf provisioning
  • old login node greentail (HP Proliant G380 2U): /sanscratch primary NFS server
  • old node whitetail (Angstrom Blade 1U): Hadoop Cloudera test server
  • sharptail (Supermicro 4U): /home primary NFS server (not to be used as a login node)
  • homedr (Supermicro 2U): off-site disaster recovery for /home (to be installed summer 2017)
  • storage servers rstore0 and rstore2 (Supermicro 4U): NFS mounts and Samba shares
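
For example, a session typically starts by connecting to the primary login node over SSH; the fully qualified hostname below is an assumption based on the wesleyan.edu domain mentioned above (substitute your own username):

  # connect to the primary login node (VPN required from off campus)
  ssh username@cottontail.wesleyan.edu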

Several types of compute nodes are available via the OpenLava scheduler (http://www.openlava.org):

  • All run CentOS 6.8 (x86_64) on Intel Xeon chips (except the Angstrom blades, which are AMD Opteron)
  • All are on private networks (no internet access)
  • All mount /home (10 TB, to be expanded to 25 TB in fall 2017) and /sanscratch (33 TB)
  • All have local disks providing varying amounts of /localscratch (usually RAID 0, no backup!)
  • Hyperthreading is enabled, but only on newer hardware are 50% of the logical cores allocated
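
The compute nodes and queues can be inspected from any login node with the standard OpenLava commands, for example:

  # list queues and their job slot usage
  bqueues
  # list compute hosts known to the scheduler and their state
  bhosts
  # show per-host load information (CPU, memory)
  lsload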

Compute node categories, which usually align with queues:

  • 32 nodes with dual quad core chips (Xeon 5620, 2.4 GHz) in HP blade 4U enclosures (SL2x170z G6) with a memory footprint of 12 GB each (384 GB total). This cluster has a compute capacity of 1.5 teraflops (measured with Linpack). Known as the HP cluster, or nodes n1-n32; queue hp12, 256 job slots.
  • 42 nodes with dual single core chips (AMD Opteron Model 250, 2.4 GHz) in Angstrom blade 12U enclosures with a memory footprint of 24 GB each (1,008 GB total). This cluster has a compute capacity of 0.2-0.3 teraflops (estimated). Known as the Blue Sky Studio cluster, or the b-nodes (b0-b51); queue bss24, 84 job slots. Powered off when not in use.
  • 5 nodes with dual eight core chips (Xeon E5-2660, 2.2 GHz) in ASUS 2U rack servers with a memory footprint of 256 GB each (1,280 GB total). Each node also contains four Tesla K20 GPUs, 2,500 cores per GPU (10,000 GPU cores per node), with a GPU memory footprint of 5 GB each (20 GB per node). This cluster has a compute capacity of 23.40 teraflops double precision or 70.40 teraflops single precision on the GPU side and 2.9 teraflops on the CPU side. Known as the Microway GPU cluster, or nodes n33-n37; queues mwgpu (40 job slots, 2:1 CPU-to-GPU ratio) and mw256 (80 job slots).
  • 8 nodes with dual eight core chips (Xeon E5-2660, 2.2 GHz) in Supermicro 1U rack servers with a memory footprint of 256 GB each (2,048 GB total). This cluster has a compute capacity of 5.3 teraflops (estimated). Known as the Microway CPU cluster, or nodes n38-n45; queue mw256fd, 192 job slots.
  • 14 nodes with dual ten core chips (Xeon E5-2650 v3, 2.3 GHz) in Supermicro 1U rack servers with a memory footprint of 32 GB each (448 GB total). This cluster has a compute capacity of 12 teraflops (estimated). Known as the Microway tinymem cluster, or nodes n46-n59; queue tinymem, 448 job slots.

All queues are available for job submission from all login nodes. All nodes are on InfiniBand switches for parallel computational jobs (excluding the bss24 and tinymem queues). Our total job slot count is roughly 1,040. Our total compute capacity is about 22 teraflops on the CPU side and 23 teraflops on the GPU side. Our total memory footprint is about 100 GB on the GPU side and 4,976 GB on the CPU side (excluding queue bss24).
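
A minimal serial job script sketch is shown below; the queue, file names, and program name are illustrative, and the sample scripts in /home/hmeij/jobs/ should be treated as the authoritative templates:

  #!/bin/bash
  # minimal serial job for the OpenLava scheduler (LSF-style #BSUB directives)
  #BSUB -q hp12              # target queue (hp12 is the default queue)
  #BSUB -n 1                 # number of job slots requested
  #BSUB -J mytest            # job name
  #BSUB -o stdout.%J         # standard output file (%J expands to the job ID)
  #BSUB -e stderr.%J         # standard error file

  ./my_program               # placeholder for your executable

Submit the script with "bsub < myscript.sh" and monitor it with "bjobs".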

The home directory file system is served (via NFS or IPoIB) by the node “sharptail” (our file server) from a direct-attached disk array; in total, 10 TB of /home disk space is available to users. Node “greentail” serves 33 TB of scratch space at /sanscratch via NFS. In addition, all nodes provide local scratch space at /localscratch (excluding queue tinymem). The OpenLava scheduler automatically creates a directory in both of these scratch areas for each job (named after the job ID). Backup services for /home are provided via disk-to-disk snapshots from node “sharptail” to the disk arrays on node “cottontail”.
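
A sketch of how the per-job scratch directories might be used follows; $LSB_JOBID is the scheduler's job ID variable, and the file and program names are placeholders:

  #!/bin/bash
  #BSUB -q hp12
  #BSUB -n 1
  #BSUB -o out.%J -e err.%J

  # the scheduler has already created these directories, named after the job ID
  MYSANSCRATCH=/sanscratch/$LSB_JOBID
  MYLOCALSCRATCH=/localscratch/$LSB_JOBID

  # stage input to scratch, run there, then copy results back to /home
  cp ~/project/input.dat $MYLOCALSCRATCH/
  cd $MYLOCALSCRATCH
  ./my_program input.dat > output.dat
  cp output.dat ~/project/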

A subset of 25 nodes of the Blue Sky Studio cluster listed above also runs our test Hadoop cluster. The namenode and login node is whitetail, which also runs the Hadoop scheduler. It is based on the Cloudera CDH3u6 repository.
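
As an illustration only (the jar path and file names below are assumptions for a CDH3-era install, not taken from this page), a word-count test run from whitetail might look like:

  # copy input into HDFS
  hadoop fs -mkdir input
  hadoop fs -put mytext.txt input/
  # run the bundled example job; the jar location varies by install
  hadoop jar /usr/lib/hadoop/hadoop-examples.jar wordcount input output
  # inspect the result
  hadoop fs -cat output/part-*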

Our Queues

Commercial software packages have their own queues, limited by the number of available licenses (there is no need to check out licenses). Their jobs are processed on the nodes of the hp12, mw256, and mw256fd queues; that can change if needed.

Queue        Nr of Nodes  GB Mem per Node  Total Cores in Queue  Switch            Hosts     Notes
matlab       n/a          n/a              n/a                   QDR InfiniBand    any host  8/16 licenses
stata        n/a          n/a              n/a                   QDR InfiniBand    any host  6 licenses
mathematica  n/a          n/a              n/a                   QDR InfiniBand    any host  unlimited licenses

Queue        Nr of Nodes  GB Mem per Node  Total Cores in Queue  Switch            Hosts     Notes
hp12         32           12               256                   QDR InfiniBand    n1-n32    CPU
bss24        42           24               84                    gigabit ethernet  b1-b49    CPU
mw256        5            256              140                   QDR InfiniBand    n33-n37   CPU
mwgpu        5            256              20                    QDR InfiniBand    n33-n37   GPU & CPU
mw256fd      8            256              256                   QDR InfiniBand    n38-n45   CPU
tinymem      14           32               560                   gigabit ethernet  n46-n59   CPU
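
For example, a MATLAB batch job would be submitted to the matlab queue so the scheduler can track license usage; the script name and MATLAB flags below are a common pattern, not cluster-specific requirements:

  #!/bin/bash
  # sketch: run a MATLAB script through the license-tracking matlab queue
  #BSUB -q matlab
  #BSUB -n 1
  #BSUB -o matlab.%J.out -e matlab.%J.err

  # batch-mode MATLAB; myscript.m is a placeholder for your own script
  matlab -nodisplay -nosplash -r "myscript; exit"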

Some guidelines for appropriate queue usage with detailed page links:

  • hp12 is the default queue
    • for processing lots of small memory footprint jobs
  • bss24 is primarily used by the bioinformatics group but available to all if needed
    • it is shut down when not in use; email me (hmeij@wes) or submit jobs and let them PEND (hpcadmin will be notified)
    • it also hosts our Hadoop cluster (access via head node whitetail): Use Hadoop Cluster
  • mw256 is for jobs requiring large memory access (up to 24 job slots per node)
  • mwgpu is primarily for GPU enabled software (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    • be sure to reserve one or more job slots for each GPU used (see the sketch after this list): Submitting GPU Jobs
    • be sure to use the correct wrapper script for mpirun from mvapich2
  • mw256fd is for jobs requiring large memory access (up to 24 job slots per node)
    • or requiring lots of threads (job slots) confined to a single node (Gaussian, AutoDock)
    • or requiring access to /localscratch on these nodes, which is 175 GB on a 15K disk
    • /localscratch5tb, unique to these nodes, is a RAID 0 file system of 3 disks providing 5 TB of local scratch
      • stage temporary data in /localscratch5tb/username/ and it will not be removed
  • tinymem is for small serial jobs with small memory requirements
    • these nodes have a SATA DOM (a non-spinning 16 GB device on the motherboard) for the operating system
    • do not use /localscratch on these nodes
  • test (swallowtail, petaltail, greentail)
    • wall time limit of 8 hours of CPU usage
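
As a rough sketch of the GPU guideline above (the device selection and program name are illustrative; prefer the GPU sample scripts in /home/hmeij/jobs/ and the mvapich2 mpirun wrapper script for parallel runs):

  #!/bin/bash
  # single-GPU job on the mwgpu queue
  #BSUB -q mwgpu
  #BSUB -n 1                 # reserve one job slot for the one GPU used
  #BSUB -o gpu.%J.out -e gpu.%J.err

  # restrict the job to one GPU device; the index 0 is illustrative
  export CUDA_VISIBLE_DEVICES=0
  ./my_gpu_program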

There are no wall time limits in our HPCC except for the queue test. You are responsible for checkpointing, though. Consult these pages; all nodes in all queues are BLCR enabled. Login nodes and storage nodes are on UPS, but all compute nodes are on utility power. Crashes do happen, so be prepared to restart your long-running jobs.
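
A minimal BLCR sketch follows; the commands are the standard BLCR utilities, while the file names and the application are placeholders (consult the BLCR page for the cluster's recommended procedure):

  # start the application under BLCR control
  cr_run ./my_long_job &
  APP_PID=$!

  # periodically write a checkpoint file of the running process
  cr_checkpoint -f checkpoint.$APP_PID $APP_PID

  # after a crash or reboot, restart from the last checkpoint file
  cr_restart checkpoint.$APP_PID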

Other Stuff

Home directory policy and Rstore storage options: see HomeDir and Storage Options

Checkpointing is supported in all queues; for how it works, see the BLCR page

For a list of installed software, consult the Software List page

For details on all scratch spaces, consult the Scratch Spaces page

For HPCC acknowledgements, consult the Acknowledgement page

Sample scripts for job submissions (serial, array, parallel, forked, and GPU) can be found in /home/hmeij/jobs/

From off campus you need to connect to the VPN first at http://webvpn.wesleyan.edu

