The Academic High Performance Compute Cluster comprises two login nodes (greentail and swallowtail, both Dell PowerEdge 2050s). The old login node petaltail (Dell PowerEdge 2950) can be used for testing code; it does not matter if it crashes, since its primary duty is backup to the physical tape library.
Three types of compute nodes are available via the Lava scheduler.
All queues are available for job submission via the login nodes greentail and swallowtail; both nodes service all queues. The cluster now provides 634 job slots in total, of which 380 are on InfiniBand switches for parallel computational jobs. In addition, queue “bss24” consists of 90 job slots (45 nodes) and can provide access to 1.1 TB of memory; it is turned on by request because its nodes are power inefficient.
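For reference, a minimal Lava/LSF-style submission from a login node might look like the sketch below. The queue name bss24 comes from the description above; the slot count, output file names, and run.sh script are illustrative assumptions, not site defaults.

<code bash>
# Submit a 4-slot job to the bss24 queue from greentail or swallowtail.
# run.sh, the slot count, and the output/error file names are placeholders.
bsub -q bss24 -n 4 -o job.%J.out -e job.%J.err ./run.sh
</code>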
The home directory file system is provided (via NFS or IPoIB) by the login node “greentail” from a direct-attached disk array. In total, 10 TB of /home disk space is accessible to the users, along with 5 TB of scratch space at /sanscratch. In addition, every node provides a small /localscratch area (about 50 GB) on its local internal disk for cases where file locking is needed. Backup services are provided via disk-to-disk snapshot copies on the same array.
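As an illustration of how the scratch areas might be used, the sketch below stages work into a per-job directory under /sanscratch and copies results back to /home when the job finishes. The per-job subdirectory named after the LSF job ID ($LSB_JOBID), the my_program binary, and the input/output file names are assumptions for this example only.

<code bash>
#!/bin/bash
# Hypothetical job script: do heavy I/O in shared scratch, then save results to /home.
SCRATCH=/sanscratch/$LSB_JOBID      # assumed per-job directory layout
mkdir -p $SCRATCH
cp ~/project/input.dat $SCRATCH/
cd $SCRATCH
./my_program input.dat > output.dat  # placeholder application
cp output.dat ~/project/results/
rm -rf $SCRATCH                      # clean up scratch when done
</code>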