====== Greentail ======
  
Time to introduce our new high performance cluster ''greentail'', a Hewlett Packard HPC solution.  If you want to read more about the details of the hardware, you can find them at this [[https://dokuwiki.wesleyan.edu/doku.php?id=cluster:83#round_2_of_quotes|External Link]].  The name refers to the **Smooth Green Snake**, which, no surprise, has a green tail; see this [[http://www.ct.gov/dep/cwp/view.asp?A=2723&Q=325780|External Link]] for more information.  We also chose ''greentail'' because this cluster consumes 18-24% less power and cooling than the competing bids.
  
In order to accommodate the new cluster, we have reduced the Blue Sky Studios cluster from 3 racks in production to a single rack.  That rack contains nothing but 24 GB memory nodes, offering just over 1.1 TB of memory across 46 nodes.  Because the cluster is not power-consumption friendly, it is our "on demand" cluster.  If jobs are pending in the sole ''bss24'' queue (offering 92 job slots), we will be notified and will power on more nodes.  Or just email us.  If the cluster is not being used, we'll power down the nodes.  The login node for this cluster is host sharptail, which can only be reached by first ssh'ing into host petaltail or swallowtail and then ssh'ing to sharptail.
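
For example, here is a minimal sketch of the two-hop login and of checking for pending work in the on-demand queue.  The username is a placeholder, and the queue check assumes the LSF scheduler (not spelled out in this section); adjust to whatever scheduler your jobs actually run under.

<code bash>
# hop 1: log into one of the Dell cluster login nodes
ssh username@petaltail        # or: ssh username@swallowtail

# hop 2: from petaltail/swallowtail, reach the BSS login node
ssh sharptail

# check for pending jobs in the on-demand queue (assumes LSF)
bjobs -u all -p -q bss24
</code>

If that last command shows jobs pending in ''bss24'', that is when we power on additional nodes.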
  
There are no changes to the Dell cluster (petaltail/swallowtail).  However, be sure to read the home directory section below.  __It is important that all users understand the impact of the changes to come.__
  
If we like the HP management tools, we may in the future fold the petaltail/swallowtail and sharptail clusters into greentail for a single point of access.  Regardless of that move, the home directories will in the future be served by greentail.  That is a significant change; more details below.
  
As always, suggestions welcome.
  
  * We continually run out of disk space for our home directories.  So the new cluster had to have a large disk array on board.
  * We wanted more nodes with a decent memory footprint (we settled on 12 GB per node).
  * All nodes should be on an Infiniband switch.
  * A single queue is preferred.