This outdated page has been replaced by the Brief Guide to HPCC.
— Meij, Henk 2014/02/21 10:23
Updated
— Meij, Henk 2013/09/10 14:42
New Configuration
The Academic High Performance Compute Cluster comprises two login nodes (greentail and swallowtail, both Dell PowerEdge 2050s). The old login node petaltail (Dell PowerEdge 2950) can be used for testing code (it does not matter if it crashes; its primary duty is backup to the physical tape library).
Several types of compute node “clusters” are available via the Lava scheduler (an example submit script follows the list below):
32 nodes with dual quad core (Xeon 5620, 2.4 GHz) sockets in HP blades (SL2x170z G6), with memory footprints of 12 GB each, all on InfiniBand (QDR) interconnects. 288 job slots. Total memory footprint of these nodes is 384 GB. This cluster has been measured at 1.5 teraflops (using Linpack). Known as the HP cluster, or the n-nodes (n1-n32).
30 nodes with dual quad core (Xeon 5345, 2.3 GHz) sockets in Dell PowerEdge 1950 rack servers, with memory footprints ranging from 8 GB to 16 GB. 256 job slots. Total memory footprint of these nodes is 340 GB. Only 16 nodes are on InfiniBand (SDR) interconnects; the rest are on gigabit Ethernet switches. This cluster has been measured at 665 gigaflops (using Linpack). Known as the Dell cluster, or the c-nodes (c00-c36; some have failed).
25 nodes with dual single core AMD Opteron Model 250 (2.4 GHz) processors, with a memory footprint of 24 GB each. 50 job slots. Total memory footprint of the cluster is 600 GB. This cluster has an estimated capacity of 250-350 gigaflops (it can grow to 45 nodes, 90 job slots, and 1.1 TB of memory). Known as the Blue Sky Studios cluster, or the b-nodes (b20-b45).
5 nodes with dual eight core Intel Xeon E5-2660 sockets (2.2 GHz) in ASUS/Supermicro rack servers, with a memory footprint of 256 GB each (1.28 TB total). Hyperthreading is turned on, doubling the core count to 32 per node (120 job slots for regular HPC). The nodes also contain 4 GPUs per node (20 in total), with 20 CPU cores (job slots) reserved for them. Known as the Microway or GPU-HPC cluster (n33-n37).
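As a rough illustration only, a minimal serial submit script for the Lava (LSF-compatible) scheduler might look like the sketch below. The queue name hp12, the job name, and the executable are placeholders, not a statement of the actual queue layout; run bqueues on a login node to see the real queues.

  #!/bin/bash
  # minimal serial job sketch for the Lava (LSF-compatible) scheduler
  # queue name "hp12" is a placeholder -- run `bqueues` to list the real queues
  #BSUB -q hp12              # target queue (placeholder)
  #BSUB -n 1                 # request one job slot
  #BSUB -J test_job          # job name
  #BSUB -o test_job.%J.out   # stdout file (%J expands to the job id)
  #BSUB -e test_job.%J.err   # stderr file

  echo "Running on $HOSTNAME"
  ./my_program               # placeholder for your executable

Submit it from a login node with: bsub < myjob.sh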
All queues are available for job submission via the login nodes greentail and swallowtail; both nodes service all queues. The total number of job slots is now 734, of which 400 are on InfiniBand switches for parallel computational jobs.
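For parallel work on the InfiniBand-connected nodes, a hedged sketch of an MPI submission follows. The queue name imw, the slot counts, and the bare mpirun invocation are assumptions; the exact launch command depends on the MPI stack and wrapper scripts installed locally.

  #!/bin/bash
  # parallel (MPI) job sketch -- queue name "imw" and slot layout are assumptions
  #BSUB -q imw                     # an InfiniBand-backed queue (placeholder name)
  #BSUB -n 16                      # request 16 job slots
  #BSUB -R "span[ptile=8]"         # 8 slots per node, i.e. two nodes
  #BSUB -o mpi_job.%J.out
  #BSUB -e mpi_job.%J.err

  # mpirun flags and host-list handling vary with the installed MPI;
  # adjust to the local wrapper scripts before using
  mpirun -np 16 ./my_mpi_program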
Home directory file systems are provided (via NFS or IPoIB) by the login node “sharptail” (to come) from a direct-attached disk array (48 TB). In total, 10 TB of /home disk space is accessible to users, along with 5 TB of scratch space at /sanscratch. In addition, all nodes provide a small /localscratch disk space (about 50 GB) on the node's local internal disk for cases where file locking is needed. Backup services are provided via disk-to-disk snapshot copies on the same array. In addition, the entire disk array on sharptail is rsync'ed to the 48 TB disk array on greentail. (Yet to be deployed 09sep13.)
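A hedged sketch of how a job might stage data through the scratch areas described above; the per-job subdirectory convention (named after $LSB_JOBID), the queue name, and the input/result paths are assumptions used only for illustration.

  #!/bin/bash
  #BSUB -q hp12                        # placeholder queue name
  #BSUB -n 1
  #BSUB -o scratch_job.%J.out

  # assumption: the job creates its own directory under local scratch
  WORKDIR=/localscratch/$LSB_JOBID     # node-local disk, useful when file locking is needed
  mkdir -p "$WORKDIR"
  cp ~/input.dat "$WORKDIR"/           # stage input from /home (NFS) to local disk

  cd "$WORKDIR"
  ./my_program input.dat > output.dat  # placeholder executable

  cp output.dat ~/results/             # copy results back to /home (placeholder path)
  rm -rf "$WORKDIR"                    # clean up local scratch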
The 25-node cluster listed above also runs our Hadoop cluster. The namenode and login node is whitetail, which also hosts the Hadoop scheduler. It is based on the Cloudera CDH3u6 repository.
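As a hedged illustration only, a classic MapReduce test on a CDH3-era Hadoop install can be run roughly as follows from the whitetail login node; the examples jar path, the HDFS directory names, and the output file name are assumptions that may differ on the actual install.

  # assumption: the examples jar follows the usual CDH3 packaging layout
  hadoop fs -mkdir input
  hadoop fs -put ~/books/*.txt input            # copy sample text into HDFS
  hadoop jar /usr/lib/hadoop/hadoop-examples.jar wordcount input output
  hadoop fs -cat output/part-r-00000 | head     # inspect the word counts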