

Brief Description

This summary is intended for inclusion in proposals. This page is not maintained.

Academic High Performance Computing at Wesleyan

Wesleyan University's HPC environment comprises two clusters: the "swallowtail" Dell hardware cluster and the "sharptail" Angstrom hardware cluster. A brief description of each follows.

Swallowtail consists of two login nodes, the Load Sharing Facility (LSF) job scheduler, and 36 compute nodes. Each compute node is a Dell PowerEdge 1950 with two quad-core Intel Xeon E5345 processors and a memory footprint ranging from 8 GB to 16 GB; the total memory footprint of the cluster is 340 GB. A high-speed InfiniBand interconnect links 16 of these compute nodes for parallel computational jobs. The scheduler manages access to 288 job slots across 7 queues. The cluster operating system is Red Hat Enterprise Linux 5.1.
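To illustrate how those job slots and queues are used, here is a minimal sketch of an LSF batch script for an MPI job on the InfiniBand nodes. The queue name, job name, and program are hypothetical (they do not appear on this page), and the exact MPI launch command depends on the locally installed MPI stack.

    #!/bin/bash
    #BSUB -q parallel          # hypothetical queue name; run 'bqueues' to list real ones
    #BSUB -n 16                # request 16 job slots
    #BSUB -R "span[ptile=8]"   # pack 8 slots per node (dual quad-core nodes)
    #BSUB -J mpi_example       # job name (hypothetical)
    #BSUB -o output.%J         # standard output; %J expands to the job ID
    #BSUB -e errors.%J         # standard error

    # mpirun.lsf is LSF's MPI launch wrapper; the local setup may differ
    mpirun.lsf ./my_mpi_program

The script is submitted with "bsub < jobscript" and monitored with "bjobs".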

Sharptail consists of one login node, the Lava job scheduler, and 129 compute nodes. Each compute node has two single-core AMD Opteron Model 250 processors and a memory footprint of either 12 GB or 24 GB; the total memory footprint of the cluster is 1.3 TB. The scheduler manages access to 258 job slots across 4 queues. The cluster operating system is CentOS 5.3.
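Lava is an open-source derivative of LSF, so jobs are submitted and monitored with the same commands on both clusters. A minimal serial submission, with a hypothetical queue name, might look like:

    bqueues                                   # list available queues and slot limits
    bsub -q normal -o output.%J ./my_program  # submit to a hypothetical 'normal' queue
    bjobs                                     # check the status of your jobs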

Home directory file systems, shared by both clusters, are served from a NetApp attached-storage disk array. In total, 5 TB of disk space is available to users. In addition, backup services are provided using IBM Tivoli software.

