Time to introduce our new high performance cluster greentail, a Hewlett-Packard HPC solution. If you want to read more about the details of the hardware, you can find them at External Link. The name greentail was chosen because this cluster consumes 18-24% less power and cooling than the competing bids. The green tail refers to the Smooth Green Snake, which, no surprise, has a green tail. See External Link for more information.
In order to accommodate the new cluster, we have reduced the Blue Sky Studios cluster from 3 racks in production to a single rack. That rack contains nothing but 24 GB memory nodes, offering just over 1.1 TB of memory across 46 nodes. Because this cluster is not power efficient, it is our "on demand" cluster. If jobs are pending in the sole bss24 queue (offering 92 job slots), we will get notified and power on more nodes; or just email us. When nodes are not being used, we will power them down. The login node for this cluster is host sharptail, which can only be reached by first ssh'ing into host petaltail or swallowtail, then ssh'ing to sharptail.
There are no changes to the Dell cluster (petaltail/swallowtail). However, be sure to read the home directory section below. It is important that all users understand the impact of the changes to come.
If we like the HP management tools, we may in the future merge the petaltail/swallowtail and sharptail clusters into greentail for a single point of access. Regardless of that move, the home directories will be served by greentail. That is a significant change; more details below.
As always, suggestions welcome.
The purchase of the HP hardware followed a fierce bidding round in which certain design aspects had to be met.
During the scheduled power outage of December 28th, 2010, some benchmarks were performed on old and new clusters. To read about the details of all that, view this page.
In short, using Linpack (more about Linpack on Wikipedia), here are the results. The results depend on the combination of total memory, total cores, and processor speed.
The home directory disk space (5 TB) on the clusters is served up via NFS from one of our data center NetApp storage servers (named filer3). (Let's refer to those as "old home dirs".) We will be migrating off filer3 to greentail's local disk array. The path will remain the same on greentail: /home/username. (Let's refer to those as "new home dirs".)
In order to do this, your old home directory content will be copied weekly from filer3 to greentail's disk array. When you create new files in your old home dirs, they will show up in greentail's new home dirs. However, if you delete files in your old home dirs after they have already been copied over, the files will remain in your new home dirs. If you create new files in greentail's new home dirs, they will not be copied back to your old home dirs.
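The one-way copy semantics above can be sketched with two temporary directories standing in for the old (filer3) and new (greentail) home dirs. This is illustrative only; the actual weekly sync job and the tool it uses are run by the admins and are not specified on this page:

```shell
# Two temp dirs stand in for the old and new home dirs (sketch only).
old=$(mktemp -d)    # "old home dir" on filer3
new=$(mktemp -d)    # "new home dir" on greentail

touch "$old/results.dat"
cp -a "$old/." "$new/"      # weekly copy: new files propagate

rm "$old/results.dat"       # delete a file on the old side...
touch "$old/newrun.dat"
cp -a "$old/." "$new/"      # ...the copy never removes files, so the new
                            # home dir keeps results.dat AND newrun.dat
ls "$new"
```

The key point: the copy only ever adds or updates files on greentail, so deletions on filer3 do not clean up the new home dirs, and files created only on greentail are never copied back.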
To avoid a conflict between home dirs, I strongly suggest you create a directory to store the files you will be creating on greentail, for example /home/username/greentail or /home/username/hp.
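On greentail that could look like the following (the directory name is just the suggestion from this page; anything that does not collide with files copied over from the old home dirs works):

```shell
# Keep greentail-only work in its own subdirectory so the weekly copy
# from the old home dirs never clashes with it:
mkdir -p "$HOME/greentail"
cd "$HOME/greentail"
```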
At some point in the future, greentail's new home dirs will be mounted on the petaltail/swallowtail and sharptail clusters. Filer3's old home dirs will then disappear permanently.
Greentail's new home dirs will provide 10 TB of disk space. Again, the cluster's file systems should not be used to archive data. However, doubling the home directory size should provide much needed relief.
Because of the size of the new home dirs, we will also not be able to provide backup via TSM (Tivoli). Backup via TSM to our Virtual Tape Library (VTL) will be replaced with disk to disk backup on greentail's disk array. That has some serious implications. Please read the section about RSnapshot.
The password, shadow and group files of host petaltail were used to populate greentail's equivalent files.
If you change your password, do it on all four hosts (petaltail, swallowtail, sharptail and greentail).
I know, a pain.
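Until things are merged, a loop like the one below saves some typing. This is a sketch: it assumes the short hostnames resolve from wherever you run it, and ssh -t keeps a terminal attached so passwd can prompt you interactively on each host:

```shell
# Change the password on each host in turn; passwd will ask for the
# current and new password on every hop:
for h in petaltail swallowtail sharptail greentail; do
    ssh -t "$h" passwd
done
```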
Within the directory /home/username/.ssh there is a file named authorized_keys. Within this file are your public SSH keys. Because your home directory contents are copied over to host greentail, you should be able to ssh from host petaltail or swallowtail to host greentail without a password prompt. If not, your keys are not set up properly.
You can also log in to host greentail directly (ssh username@greentail.wesleyan.edu). From host greentail you should be able to ssh to host petaltail or swallowtail without a password prompt. If not, your keys are not set up properly.
To set up your ssh keys:
ssh-keygen -t rsa
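If your keys are missing, a minimal manual setup looks like this. It assumes an RSA key with no passphrase (matching the password-less logins described above); run it once in your home dir on petaltail, and the weekly copy carries it to greentail:

```shell
# Generate an RSA key pair and authorize the public key.
# -q keeps ssh-keygen quiet, -N "" sets an empty passphrase.
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"                     # sshd refuses keys when these
chmod 600 "$HOME/.ssh/authorized_keys"     # paths have loose permissions
```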
Note: the software stack on host petaltail/swallowtail created ssh keys for you automatically upon your first login, so for most of you this is all set.
To test if your keys are set up right, simply ssh around the hosts petaltail, swallowtail and greentail.
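One way to script that check (a sketch; it assumes the short hostnames resolve from where you run it): BatchMode=yes makes ssh fail instead of prompting, so any host whose keys are broken shows up as an error rather than a password prompt:

```shell
# Each hop should print the remote hostname with no prompt; an error on
# any hop means that host's keys need fixing:
for h in petaltail swallowtail greentail; do
    ssh -o BatchMode=yes "$h" hostname
done
```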