
This outdated page has been replaced by Brief Guide to HPCC — Meij, Henk 2014/02/21 10:23

Updated — Meij, Henk 2013/09/10 14:42

New Configuration

The Academic High Performance Compute Cluster comprises two login nodes, greentail and swallowtail (both Dell PowerEdge 2050s). The old login node petaltail (Dell PowerEdge 2950) can be used for testing code; it does not matter if it crashes, since its primary duty is backup to the physical tape library.

Several types of compute node “clusters” are available via the Lava scheduler.

All queues are available for job submission via the login nodes greentail and swallowtail; both nodes service all queues. The total number of job slots is now 734, of which 400 are on InfiniBand switches for parallel computational jobs.
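To illustrate how the queues are used, below is a minimal job script sketch; Lava accepts LSF-style #BSUB directives and the bsub/bjobs/bqueues commands. The queue name, slot count, and program name here are placeholders, not actual cluster settings.

  #!/bin/bash
  # Minimal Lava/LSF-style job script (sketch; queue and program are placeholders)
  #BSUB -q somequeue            # target queue; list real queues with: bqueues
  #BSUB -n 8                    # number of job slots requested
  #BSUB -J testjob              # job name
  #BSUB -o testjob.%J.out       # stdout file (%J expands to the job id)
  #BSUB -e testjob.%J.err       # stderr file

  ./my_program                  # program to run (placeholder)

Submit the script from greentail or swallowtail with "bsub < myjob.sh" and monitor it with bjobs.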

The home directory file system is provided (via NFS or IPoIB) by the login node “sharptail” (to come) from a direct-attached disk array (48 TB). In total, 10 TB of /home disk space is available to users, plus 5 TB of scratch space at /sanscratch. In addition, every node provides a small /localscratch area (about 50 GB) on its local internal disk for jobs that need file locking. Backup services are provided via disk-to-disk snapshot copies on the same array; in addition, the entire disk array on sharptail is rsync'ed to the 48 TB disk array on greentail. (Yet to be deployed 09sep13.)
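The sketch below shows one way a job script might use the scratch areas; the per-job directory layout under /sanscratch and /localscratch is an assumption, so check local policy first. $LSB_JOBID is set by the scheduler at run time.

  #!/bin/bash
  # Scratch-space usage sketch (per-job subdirectories are an assumption)
  SCRATCH=/sanscratch/$LSB_JOBID       # shared scratch (5 TB total)
  LOCAL=/localscratch/$LSB_JOBID       # node-local disk (~50 GB), supports file locking
  mkdir -p "$SCRATCH" "$LOCAL"

  cp "$HOME"/input.dat "$SCRATCH"/     # stage input out of /home
  cd "$SCRATCH"
  ./my_program input.dat > output.dat  # keep heavy I/O off /home
  cp output.dat "$HOME"/               # copy results back to /home
  rm -rf "$SCRATCH" "$LOCAL"           # clean up scratch when done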

The 25-node cluster listed above also runs our Hadoop cluster. The namenode and login node is whitetail, which also hosts the Hadoop scheduler. The installation is based on the Cloudera CDH3u6 repository.
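For completeness, here is a hedged sketch of basic Hadoop usage from whitetail with the standard CDH3 command-line tools; the HDFS paths and the examples jar location are assumptions and may differ on this installation.

  # copy data into HDFS (paths are placeholders)
  hadoop fs -mkdir /user/$USER/input
  hadoop fs -put mydata.txt /user/$USER/input/

  # run the stock wordcount example shipped with CDH3
  # (jar path is an assumption; locate it with: ls /usr/lib/hadoop*/hadoop-examples*.jar)
  hadoop jar /usr/lib/hadoop/hadoop-examples.jar wordcount \
      /user/$USER/input /user/$USER/output

  # inspect the output
  hadoop fs -cat "/user/$USER/output/part-*" | head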

