\\ **[[cluster:0|Back]]**

  * Jobs can be submitted from any node.
  * ''cottontail'' is the primary scheduler login node.
  * You can log in to any **tail** node directly (via ssh).
  * All nodes run CentOS 6.x, with exceptions noted below.

==== Test Queue ====

  * Wall time (CPULIMIT) has been removed (was 8 hrs/job).
  * The job slot limit has been raised to 48; the queue comprises
    * ''greentail'', our /sanscratch provider, Red Hat 5.5
    * ''cottontail2'', physically identical to ''greentail'' for failover
      * standby scheduler master for ''cottontail''
    * ''swallowtail'', secondary login node
    * ''petaltail'', sandbox and Warewulf provisioning node
    * ''n29'' and ''n31'', our OpenHPC test cluster compute nodes, CentOS 7.3
      * joined to our Openlava environment

You can target individual nodes with ''#BSUB -m node-name'' in your submit script (see the example script below).

==== DR ====

  * ''sharptail2dr'' is our disaster recovery node (to back up /home on ''sharptail'')
  * Every couple of days users' /home will be refreshed on this DR node
  * 17 TB of usable storage
  * Users selected for the first or a refresh backup are identified from the last 30 days of job submission logs
  * No logins are permitted
  * The node will eventually be moved to the Freeman DR location
  * If users become inactive, their backup home directory will not be removed (while space is available)
  * vlan52 bonded interfaces (2 Gb/s connection)
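
==== Example Submit Script ====

A minimal sketch of an Openlava submit script using ''#BSUB -m'' to target an individual node. The queue name ''test'', the job name, and the output file names are assumptions for illustration; adjust them for your own job.

<code bash>
#!/bin/bash
# Minimal Openlava submit script sketch -- queue and file names are assumptions
#BSUB -q test             # hypothetical queue name
#BSUB -J mytest           # job name
#BSUB -o mytest.%J.out    # stdout file (%J expands to the job ID)
#BSUB -e mytest.%J.err    # stderr file
#BSUB -m n29              # target an individual node by name

echo "Running on $(hostname)"
</code>

Submit it from any node with ''bsub < myscript.sh''; the scheduler reads the ''#BSUB'' directives from the script.

\\ **[[cluster:0|Back]]**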