Brief Description

For inclusion in proposals. This page is not maintained.

Academic High Performance Computing at Wesleyan

Wesleyan University's HPC environment comprises three clusters: the “greentail” HP hardware cluster, the “petaltail/swallowtail” Dell hardware cluster, and the “sharptail” Blue Sky Studios (Angstrom hardware) cluster. A brief description of each follows.

The HP cluster consists of one login node (“greentail”), the Lava job scheduler, and 32 compute nodes. Each compute node holds dual quad-core sockets (Xeon 5620, 2.4 GHz) in HP blades (SL2x170z G6) with a memory footprint of 12 GB. The total memory footprint of the cluster is 384 GB. A high-speed Voltaire Infiniband interconnect connects all of the compute nodes for parallel computational jobs. The Lava scheduler manages access to 256 job slots within a single queue. The cluster operating system is Red Hat Enterprise Linux 5.5. The hardware is less than three months old. Of note: the home directories (10 TB) are provided by a 48 TB MSA60 disk array across the entire cluster using IPoIB, so all NFS traffic is routed across the Voltaire switch in addition to MPI traffic.
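For illustration, the sketch below shows the kind of MPI program these parallel job slots serve, along with a hedged example of how such a job might be compiled and submitted through a bsub-style scheduler such as Lava; the core count, output file name, and mpirun invocation in the comments are assumptions for illustration, not site-specific instructions.

  /*
   * hello_mpi.c -- minimal MPI sketch: each rank reports which
   * compute node it landed on, exercising the parallel setup.
   *
   * Hypothetical build and submission (options illustrative only):
   *   mpicc -o hello_mpi hello_mpi.c
   *   bsub -n 16 -o hello.%J.out mpirun ./hello_mpi
   */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, len;
      char host[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total ranks in the job */
      MPI_Get_processor_name(host, &len);    /* compute node hostname */

      printf("rank %d of %d on %s\n", rank, size, host);

      MPI_Finalize();
      return 0;
  }

Each MPI rank occupies one job slot, so a 16-slot request like the one sketched above would span two of the eight-core greentail nodes.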

The Dell cluster consists of two login nodes (“petaltail”/“swallowtail”), the Load Sharing Facility (LSF) job scheduler, and 36 compute nodes. “petaltail” is the installer/administrative server, while “swallowtail” manages commercial software licenses; both function as login access points. Each compute node is a Dell PowerEdge 1950 holding dual quad-core sockets (Xeon 5345, 2.3 GHz) with memory footprints ranging from 8 GB to 16 GB. The total memory footprint of the cluster is 340 GB. A high-speed Cisco Infiniband interconnect connects 16 of these compute nodes for parallel computational jobs. The scheduler manages access to 288 job slots across 7 queues. The cluster operating system is Red Hat Enterprise Linux 5.1. The hardware is three years old.

The Blue Sky Studios cluster (Angstrom hardware) consists of one login node (“sharptail”), the Lava job scheduler, and 46 compute nodes. Each compute node holds dual single-core AMD Opteron Model 250 processors (2.4 GHz) with a memory footprint of 24 GB. The total memory footprint of the cluster is 1.1 TB. The scheduler manages access to 92 job slots within a single queue. The cluster operating system is CentOS 5.3. The hardware is seven years old. Of note: because of its energy inefficiency, only the login node and one compute node are normally powered on; when jobs start pending in the queue, administrators are notified automatically and more nodes are powered on to handle the load.

Home directory file systems for the “petaltail/swallowtail” and “sharptail” clusters are provided, and shared, by a NetApp network-attached storage disk array. In total, 5 TB of disk space is accessible to users. In addition, backup services are provided using IBM Tivoli software (this service will be deprecated soon).

Home directory file systems on the “greentail” cluster are provided by a direct-attached disk array. In total, 10 TB of disk space is accessible to users. Backup services are provided via disk-to-disk snapshot copies on the same array. In addition, a weekly home directory refresh pulls information from the NetApp disk array.

