====== Greentail ======
  
Time to introduce our new high performance cluster ''greentail'', a Hewlett Packard HPC solution.  If you want to read more about the details of the hardware, you can find them at this [[https://dokuwiki.wesleyan.edu/doku.php?id=cluster:83#round_2_of_quotes|External Link]].  The name refers to the **Smooth Green Snake**, which, no surprise, has a green tail; see this [[http://www.ct.gov/dep/cwp/view.asp?A=2723&Q=325780|External Link]] for more information.  The "green" in ''greentail'' also reflects that this cluster consumes 18-24% less power/cooling than the competing bids.
  
In order to accommodate the new cluster, we have reduced the Blue Sky Studios cluster from 3 racks in production to a single rack.  That rack contains nothing but 24 GB memory nodes, offering just over 1.1 TB of memory across 46 nodes.  Because that cluster is not power-consumption friendly, it is our "on demand" cluster.  If jobs are pending in the sole ''bss24'' queue (offering 92 job slots), we will be notified and will power on more nodes.  Or just email us.  If it is not being used, we'll power down the nodes.  The login node for this cluster is host sharptail, which can only be reached by first ssh'ing into host petaltail or swallowtail and then ssh'ing to sharptail.
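
For example (''username'' is a placeholder for your own account):

<code bash>
# sharptail cannot be reached directly; hop through petaltail (or swallowtail) first
ssh username@petaltail.wesleyan.edu
ssh sharptail
</code>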
===== Design =====
  
The purchase of the HP hardware followed a fierce bidding round in which certain design aspects had to be met.
  
  * We continually run out of disk space for our home directories.  So the new cluster had to have a large disk array on board.
The home directory disk space (5 TB) on the clusters is served up via NFS from one of our data center NetApp storage servers (named filer3).  (Let's refer to those as the "old home dirs".)  We will be migrating off filer3 to greentail's local disk array.  The path will remain the same on greentail: /home/username.  (Let's refer to those as the "new home dirs".)
  
In order to do this, your old home directory content was copied over the Christmas/New Year's break.  Since then, it is copied weekly from filer3 to greentail's disk array.  When you create new files in your old home dirs, they will show up in greentail's new home dirs.  However, if you delete files in your old home dirs that have already been copied over, those files will remain in your new home dirs.  Files you create in greentail's new home dirs will **not** be copied back to your old home dirs.
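
A minimal sketch of the kind of one-way copy described above; the source path and schedule are assumptions, and this runs on the administrative side rather than being something you need to run yourself:

<code bash>
# one-way copy: new/changed files propagate to greentail, deletions on filer3 do not
# (no --delete), and nothing is ever copied back from greentail to filer3
rsync -av /mnt/filer3/home/ /home/
</code>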
  
To avoid a conflict between home dirs, I strongly suggest you create a directory to store the files you will be creating on greentail, for example /home/username/greentail or /home/username/hp.
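
For example (the directory name is entirely your choice):

<code bash>
# keep files created on greentail in their own subdirectory of your home dir
mkdir -p /home/username/greentail
</code>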
===== SSH Keys =====
  
Within the directory **/home/username/.ssh** there is a file named **known_hosts**, which contains host-level public SSH keys.  Because your home directory contents are copied over to host greentail, you should be able to ssh from host petaltail or swallowtail to host greentail without a password prompt.  If not, your keys are not set up properly.
  
You can also log in to host greentail directly (''ssh username@greentail.wesleyan.edu'').  From host greentail you should be able to ssh to host petaltail or swallowtail without a password prompt.  If not, your keys are not set up properly.
  
Note: the software stack on host petaltail/swallowtail created ssh keys for you automatically upon your first login, so for most of you this is all set.  To set up your private/public ssh keys yourself (a short example follows the list below):
  
  * log into a host, then issue the command ''ssh-keygen -t rsa''
  * append the contents of the public key file (**/home/username/.ssh/id_rsa.pub**) to the file **/home/username/.ssh/authorized_keys**
  * you can have multiple public ssh key entries in this file
  
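A minimal sketch of the whole sequence, assuming the standard OpenSSH file locations:

<code bash>
# 1. generate a key pair; press Enter to accept the default location ~/.ssh/id_rsa
ssh-keygen -t rsa

# 2. append the public key to authorized_keys and set safe permissions
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# 3. test: these should log you in without a password prompt
ssh petaltail hostname
ssh greentail hostname
</code>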
  
===== Rsnapshot =====
===== MPI =====
  
For those of you running MPI or MPI-enabled applications, you will need to make some changes to your scripts.  The ''wrapper'' program to use with greentail's Lava scheduler is the same as for cluster sharptail; it can be found at /share/apps/bin/lava.openmpi.mpirun.  If other flavors are desired, you can inform me or look at the example scripts lava.//mpi_flavor//.mpi[run|exec].
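
A hypothetical submission script; the #BSUB values are placeholders, and the assumption that the wrapper takes your MPI program as its argument is mine, not documented here:

<code bash>
#!/bin/bash
#BSUB -q queue_name       # placeholder: use the queue you normally submit to
#BSUB -n 8                # number of job slots
#BSUB -J my_mpi_job
#BSUB -o out.%J
#BSUB -e err.%J

# hand the MPI program to the wrapper mentioned above
/share/apps/bin/lava.openmpi.mpirun ./my_mpi_program
</code>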

Some time ago I wrote some code to detect whether a node is InfiniBand-enabled and, based on the result, add command-line arguments to the mpirun invocation.  If you use that code, you will need to change the path used to obtain the port status (/usr/bin/ibv_devinfo) and, in that block, change the interface from eth1 to ib0.
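
A minimal sketch of that kind of check, not the original code; the mpirun option shown is one way to pin the interface and may need adjusting for your MPI flavor:

<code bash>
# choose the network interface for mpirun based on InfiniBand port status
if /usr/bin/ibv_devinfo 2>/dev/null | grep -q PORT_ACTIVE; then
    IFACE=ib0     # InfiniBand is up: use ib0 on greentail (eth1 in the older code)
else
    IFACE=eth0    # no InfiniBand: fall back to ethernet
fi
mpirun --mca btl_tcp_if_include $IFACE ./my_mpi_program
</code>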
  
===== ... =====
  
|{{:cluster:swallowtail.jpg|}}|{{:cluster:petaltail.jpg|}}|{{:cluster:sharptail.jpg|}}|
|  swallowtail  |  petaltail  |  sharptail  |
  
  