Greentail

Time to introduce our new high performance cluster greentail, an HP HPC solution. If you want to read more about the hardware details, you can find them at External Link. The name greentail was chosen because this cluster consumes 18-24% less power/cooling than the competing bids. The green tail refers to the Smooth Green Snake, which, no surprise, has a green tail. See External Link for more information.

In order to accommodate the new cluster, we have reduced the Blue Sky Studios cluster from 3 racks in production to a single rack. That rack contains nothing but 24 GB memory nodes, offering just over 1.1 TB of memory across 46 nodes. Because this cluster is not power friendly, it is our “on demand” cluster. If jobs are pending in the sole bss24 queue (offering 92 job slots), we will get notified and will power on more nodes. If it is not being used, we'll power down the nodes. The login node for this cluster is host sharptail (which can only be reached by first ssh'ing into host petaltail or swallowtail, then ssh'ing to sharptail).

There are no changes to the Dell cluster (petaltail/swallowtail). However, be sure to read the home directory section below. It is important that all users understand the impact of the changes to come.

Design

The purchase of the HP hardware followed a fierce bidding round in which certain design aspects had to be met.

  • We continually run out of disk space for our home directories, so the new cluster had to have a large disk array on board.
  • We wanted more nodes, in fewer queues, with a decent memory footprint.
  • All nodes should be on an Infiniband switch.
  • A single queue is preferred.
  • Data (NFS) was to be served up via a secondary gigabit ethernet switch, so it would not compete with administrative traffic.
  • (With the HP solution we will actually route data (NFS) traffic over the Infiniband switch, a practice called IPoIB.)
  • Linux or CentOS as operating system.
  • Flexible on scheduler (options: Lava, LSF, Sun Grid Engine)
  • The disk array, switches and login node should be backed by some form of UPS (not the compute nodes)
  • (We actually have moved those to our enterprise data center UPS, which is backed by building generator)

Performance

During the scheduled power outage of December 28th, 2010, some benchmarks were performed on old and new clusters. To read about the details of all that, view this page.

Home Dirs

SSH Keys

Within the directory /home/username/.ssh there is a file named authorized_keys. Within this file are your public SSH keys. Because your home directory contents are copied over to host greentail, you should be able to ssh from host petaltail or swallowtail to host greentail without a password prompt. If not, your keys are not set up properly.

You can also log in to host greentail directly (ssh username@greentail.wesleyan.edu). From host greentail you should be able to ssh to host petaltail or swallowtail without a password prompt. If not, your keys are not set up properly.

To set up your ssh keys:

  • log into a host, then issue the command ssh-keygen -t rsa
  • supply an empty passphrase (just hit return)
  • then copy the contents of /home/username/.ssh/id_rsa.pub into the file authorized_keys
  • you can have multiple public ssh key entries in this file
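The steps above can be sketched as a short shell session. This is a minimal sketch that writes into a temporary directory so it is safe to try anywhere; on the cluster itself you would target /home/username/.ssh instead:

```shell
# Sketch: generate an RSA keypair and install the public key.
# A temporary directory stands in for ~/.ssh in this example.
set -e
sshdir=$(mktemp -d)

# Generate a keypair with an empty passphrase (-N "")
ssh-keygen -t rsa -N "" -f "$sshdir/id_rsa" -q

# Append the public key to authorized_keys;
# multiple public key entries in this file are fine
cat "$sshdir/id_rsa.pub" >> "$sshdir/authorized_keys"

# ssh ignores keys with loose permissions, so lock them down
chmod 700 "$sshdir"
chmod 600 "$sshdir/authorized_keys"
```

On the real cluster, replace $sshdir with /home/username/.ssh and skip the mktemp step.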

Note: the software stack on host petaltail/swallowtail created ssh keys for you automatically upon your first login, so for most of you this is all set.

To test if your keys are set up right, simply ssh around the hosts petaltail, swallowtail and greentail.
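One way to run that test non-interactively is sketched below. BatchMode=yes makes ssh fail instead of prompting, so a host that would ask for a password shows up as an error rather than hanging the script (the host names assume you are already logged in to one of the three hosts):

```shell
# Sketch: verify passwordless logins between the cluster login nodes.
# Any host that still wants a password is reported as "keys NOT set up".
for host in petaltail swallowtail greentail; do
  if ssh -o BatchMode=yes "$host" true 2>/dev/null; then
    echo "$host: passwordless ssh OK"
  else
    echo "$host: keys NOT set up"
  fi
done
```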

Rsnapshot

...

cluster/93.1294432882.txt.gz · Last modified: 2011/01/07 20:41 by hmeij