  * eth1, data/private, 10.10.102.254/255.255.0.0 (greentail-eth1, should go to ProCurve 2610)
  * eth2, public, 129.133.1.226/255.255.255.0 (greentail.wesleyan.edu)
  * eth3 (over eth2), ipmi, 192.168.103.254/255.255.0.0 (greentail-ipmi, should go to better switch ProCurve 2910, do later)
    * see discussion of iLO/IPMI under CMU
  * ib0, ipoib, 10.10.103.254/255.255.0.0 (greentail-ib0)
  * ib1, ipoib, 10.10.104.254/255.255.0.0 (greentail-ib1, configure, might not have cables! split traffic across ports?)
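On a RHEL-family install, each interface above gets a file under /etc/sysconfig/network-scripts/. A minimal sketch for the data/private interface, using the address from the eth1 bullet (the BOOTPROTO and ONBOOT values are assumptions, not taken from this page):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- sketch for greentail's data/private interface
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.102.254
NETMASK=255.255.0.0
ONBOOT=yes
```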
  * logical volume LOCALSCRATCH: mount at /localscratch, ~100 gb (should match nodes at 160 gb, leave the rest for the OS)
  * logical volumes ROOT/VAR/BOOT/TMP: defaults

  * IPoIB configuration
  * SIM configuration
  * CMU configuration
  * SGE configuration
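The IPoIB configuration item amounts to giving the ib interfaces the same style of config file as the ethernet ports, assuming the OFED/ib_ipoib stack is already loaded (the TYPE line and exact file layout are assumptions that depend on the InfiniBand stack in use):

```shell
# /etc/sysconfig/network-scripts/ifcfg-ib0 -- sketch for greentail's IPoIB interface
# TYPE line is an assumption; some stacks configure IPoIB differently
DEVICE=ib0
TYPE=InfiniBand
BOOTPROTO=static
IPADDR=10.10.103.254
NETMASK=255.255.0.0
ONBOOT=yes
```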
  
=====  StorageWorks MSA60  =====
    * sanscratch (raid 1, no backup), 5 tb
  
  * SIM
  
  
    * eth1, data/private, 10.10.102.25(increment by 1)/255.255.0.0 (hp000-eth1, should go to ProCurve 2610)
    * eth2, ipmi, 192.168.103.25(increment by 1)/255.255.0.0 (hp000-ipmi, should go to better switch ProCurve 2910, do later)
      * see discussion of iLO/IPMI under CMU
    * ib0, ipoib, 10.10.103.25(increment by 1)/255.255.0.0 (hp000-ib0)
    * ib1, ipoib, 10.10.104.25(increment by 1)/255.255.0.0 (hp000-ib1, configure, might not have cables!)
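The "(increment by 1)" scheme above can be scripted. A sketch, assuming hp000 takes host octet 25 as the bullets suggest; the node count of 12 is a placeholder, not a number stated on this page:

```shell
# Sketch: expand the "increment by 1" addressing used above.
# Assumes hp000 starts at host octet 25; node count (12) is a placeholder.
node_addrs() {
  # $1 = node number; prints "name eth1-addr ib0-addr"
  name=$(printf 'hp%03d' "$1")
  octet=$((25 + $1))
  echo "$name 10.10.102.$octet 10.10.103.$octet"
}

# example: list every node's private and IPoIB addresses
i=0
while [ $i -lt 12 ]; do
  node_addrs $i
  i=$((i + 1))
done
```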
    * logical volumes ROOT/VAR/BOOT/TMP: defaults
  
  * SIM
  
===== Misc =====
    * monitor
  
-  * Cluster Management Utility (CMU)+  * Systems Insight Manager (SIM) [[http://h18013.www1.hp.com/products/servers/management/hpsim/index.html?jumpid=go/hpsim|HP Link]] (Linux Install and Configure Guide, and User Guide) 
 +    * Do we need a windows box (virtual) to run the Central Management Server on? 
 +    * SIM + Cluster Monitor (MSCS)? 
 +    * install, configure 
 +    * requires an oracle install? no, hpsmdb is installed with automatic installation (postgresql) 
 +    * linux deployment utilities, and management agents installation 
 +    * configure managed systems, automatic discovery 
 +    * configure automatic event handling 
 + 
 +  * Cluster Management Utility (CMU)[[http://h20338.www2.hp.com/HPC/cache/412128-0-0-0-121.html|HP Link]] (Getting Started - Hardware Preparation, Setup and Install, Users Guides) 
 +  * iLo/IPMI 
 +    * HP iLo probably removes the need for IPMI, consult [[http://en.wikipedia.org/wiki/HP_Integrated_Lights-Out|External Link]], do the blades have a management card? 
 +    * well maybe not, IPMI ([[http://en.wikipedia.org/wiki/Ipmi|External Link]]) can be scripted to power on/off, not sure about iLo (all web based)  
 +    * is head node the Management server? possibly, needs access to provision and public networks 
 +    * we may need a iLo subnet ... 192.198.
     * install, configure, monitor     * install, configure, monitor
     * golden image capture, deploy (there will initially only be one image)     * golden image capture, deploy (there will initially only be one image)
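The "can be scripted to power on/off" point above would look something like the following with ipmitool. A sketch that only prints the commands rather than executing them; the ADMIN/PASSWORD credentials are placeholders, and the hp000-ipmi hostnames follow the eth2/ipmi naming used above:

```shell
# Sketch: scripted IPMI power control (prints commands instead of executing them).
# ADMIN/PASSWORD are placeholder credentials, not values from this cluster.
ipmi_power() {
  # $1 = node name (e.g. hp000), $2 = action (on|off|status)
  echo "ipmitool -I lanplus -H $1-ipmi -U ADMIN -P PASSWORD chassis power $2"
}

# example: print the power-status command for every node
i=0
while [ $i -lt 12 ]; do
  ipmi_power "$(printf 'hp%03d' $i)" status
  i=$((i + 1))
done
```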
    * install, configure
    * there will only be one queue (hp12)
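The single hp12 queue above would be created with `qconf -aq hp12`, which opens a template pre-filled with defaults. The attributes most likely to need editing are sketched below; the hostlist and slots values are assumptions, not taken from this page:

```
qname                 hp12
hostlist              @allhosts
slots                 12
```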

===== Other =====
  
  * KVM utility
cluster/89.txt · Last modified: 2010/11/22 19:05 by hmeij