cluster:89: comparison of revisions 2010/08/17 18:44 and 2010/08/17 20:11, both by hmeij
Line 18:

 Basically ...
+
+  * configure all console port switches with an IP (see the address sketch below)
+    * depending on the switch, the IP goes in 192.168.102.x or 10.10.102.x
+    * the Voltaire console can go in either subnet

   * x.y.z.255 is broadcast
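A minimal sketch (not part of the original notes) of the console-switch addressing item above, assuming the provision and data/private ranges are the /16 networks implied by the 255.255.0.0 masks used later on this page; it uses Python's ipaddress module to print each network's broadcast address so reserved addresses are easy to avoid when assigning switch IPs.

<code python>
# Hypothetical sketch: check the planned management networks with Python's
# ipaddress module. The /16 prefixes are assumed from the 255.255.0.0 masks
# used elsewhere on this page.
import ipaddress

nets = {
    "provision": ipaddress.ip_network("192.168.0.0/16"),
    "data/private": ipaddress.ip_network("10.10.0.0/16"),
}

for name, net in nets.items():
    # the broadcast address is reserved and must not be assigned to a switch
    print(f"{name}: {net} broadcast={net.broadcast_address}")
</code>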
Line 36 / Line 40:
   * hostname [[http://www.ct.gov/dep/cwp/view.asp?A=2723&Q=325780|greentail]], another local "tail", also a reference to HP being 18-24% more efficient in power/cooling
   * eth0, provision, 192.168.102.254/255.255.0.0 (greentail-eth0, should go to the better switch, ProCurve 2910)
-    * do we need an iLo eth? in range 192.168.104.254
+    * do we need an iLo eth? in range 192.168.104.254?
   * eth1, data/private, 10.10.102.254/255.255.0.0 (greentail-eth1, should go to ProCurve 2610)
   * eth2, public, 129.133.1.226/255.255.255.0 (greentail.wesleyan.edu)
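A small summary of the greentail interface plan in the hunk above, added for illustration only; it uses ipaddress.IPv4Interface to restate each address with its prefix length, which makes the 255.255.0.0 vs 255.255.255.0 masks easier to compare.

<code python>
# Hypothetical restatement of the head node (greentail) interface plan above.
import ipaddress

greentail = {
    "eth0 (provision)": ipaddress.ip_interface("192.168.102.254/255.255.0.0"),
    "eth1 (data/private)": ipaddress.ip_interface("10.10.102.254/255.255.0.0"),
    "eth2 (public)": ipaddress.ip_interface("129.133.1.226/255.255.255.0"),
}

for name, iface in greentail.items():
    print(f"{name}: {iface.with_prefixlen} on network {iface.network}")
</code>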
Line 74 / Line 78:
     * node names hp000, increment by 1 (see the addressing sketch after this hunk)
     * eth0, provision, 192.168.102.25(increment by 1)/255.255.0.0 (hp000-eth0, should go to the better switch, ProCurve 2910)
-      * do we need an iLo eth? in 192.168.4.x?
+      * do we need an iLo eth? in range 192.168.104.25(increment by 1)
+      * CMU wants eth0 on NIC1 and PXEboot
     * eth1, data/private, 10.10.102.25(increment by 1)/255.255.0.0 (hp000-eth1, should go to ProCurve 2610)
     * eth2 (over eth1), ipmi, 192.168.103.25(increment by 1)/255.255.0.0 (hp000-ipmi, should go to the better switch, ProCurve 2910, do later)
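A sketch of the "increment by 1" node addressing scheme described above: the last octet starts at 25 for hp000 and goes up by one per node on each of the provision, data/private and ipmi ranges. The node count used here is only an example, not from the page.

<code python>
# Hypothetical sketch of the hp000... addressing scheme: last octet starts at
# 25 and increments by 1 per node. NODES = 8 is an example value only.
NODES = 8

for n in range(NODES):
    name = f"hp{n:03d}"
    print(f"{name}: eth0=192.168.102.{25 + n}  "
          f"eth1=10.10.102.{25 + n}  ipmi=192.168.103.{25 + n}")
</code>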
Line 104 / Line 109:
     * configure automatic event handling

-  * Cluster Management Utility (CMU) [[http://h20338.www2.hp.com/HPC/cache/412128-0-0-0-121.html|HP Link]] (Getting Started - Hardware Preparation, Setup and Install, Users Guides)
+  * Cluster Management Utility (CMU) [[http://h20338.www2.hp.com/HPC/cache/412128-0-0-0-121.html|HP Link]] (Getting Started - Hardware Preparation, Setup and Install -- Installation Guide v4.2, Users Guides)
   * iLo/IPMI
     * HP iLo probably removes the need for IPMI, consult [[http://en.wikipedia.org/wiki/HP_Integrated_Lights-Out|External Link]]; do the blades have a management card?
     * well, maybe not: IPMI ([[http://en.wikipedia.org/wiki/Ipmi|External Link]]) can be scripted to power on/off (see the sketch after this hunk), not sure about iLo (all web based)
     * is the head node the management server? possibly, it needs access to the provision and public networks
-    * we may need an iLo subnet ... 192.198.
-    * install, configure, monitor
-    * golden image capture, deploy (there will initially only be one image)
+    * we may need an iLo eth? in range ... 192.168.104.x? Consult the Hardware Preparation Guide.
+    * CMU wants eth0 on NIC1 and PXEboot
+    * install CMU on the management node
+    * install X and the CMU GUI on a client node
+    * start CMU, start the client, scan for nodes, build a golden image
+    * clone nodes, deploy the management agent to the nodes
+    * install monitoring
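A minimal sketch of the "IPMI can be scripted to power on/off" note above, driving ipmitool over the network from Python. The hostname, user and password are placeholders; this assumes ipmitool is installed and the node BMCs answer on the ipmi addresses (192.168.103.x).

<code python>
# Hypothetical power-control helper: wraps "ipmitool chassis power <action>".
# Credentials and host below are placeholders, not the cluster's real values.
import subprocess

def power(host, action, user="admin", password="changeme"):
    """Run ipmitool chassis power <action> (status/on/off/cycle) against host."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", user, "-P", password, "chassis", "power", action]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# example: print(power("192.168.103.25", "status"))
</code>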
  
   * Sun Grid Engine (SGE)