cluster:89 [2010/08/14 01:52]
hmeij
cluster:89 [2010/08/17 15:31]
hmeij
  * x.y.z.0 is gateway
  * x.y.z.<25 is for all switches and console ports
  * x.y.z.25 (up to 253) is for all compute nodes
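As a sketch, the compute-node numbering rule above can be written as a tiny helper. The `192.168.102` prefix is an assumption borrowed from the provision-network plan further down, and `node_ip` is a hypothetical name, not an existing tool:

```shell
# Hypothetical helper: maps a compute-node index to its provision IP,
# following the ".25 up to .253" rule above (node 0 -> .25, node 228 -> .253).
# The 192.168.102 prefix is an assumption taken from the eth0 plan below.
node_ip() {
  printf '192.168.102.%d\n' $((25 + $1))
}

node_ip 0     # first compute node
node_ip 228   # last usable compute-node address under the rule
```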
  
We are planning to ingest our Dell cluster (37 nodes) and our Blue Sky Studios cluster (130 nodes) into this setup, hence the approach.
  
Netmask is, finally, 255.255.0.0 (excluding public 129.133 subnet).
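A quick way to sanity-check the /16 plan: with a 255.255.0.0 mask, membership is decided by the first two octets alone, so the public 129.133 addresses can never land on the private nets. A minimal sketch (`in_net` is a hypothetical helper, not a standard tool):

```shell
# Sketch: /16 membership test by matching the first two octets.
# Valid only for a 255.255.0.0 netmask, as planned above.
in_net() {
  case "$1" in
    "$2".*) echo yes ;;
    *)      echo no  ;;
  esac
}

in_net 192.168.102.25 192.168   # compute node on the provision net
in_net 129.133.1.226  192.168   # public address, correctly excluded
```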
  
===== DM380G7 =====
  * Dual power (one to UPS, one to utility, do later)
  
  * hostname [[http://www.ct.gov/dep/cwp/view.asp?A=2723&Q=325780|greentail]], another local "tail", also in reference to HP being 18-24% more efficient in power/cooling
  * eth0, provision, 192.168.102.254/255.255.0.0 (greentail-eth0, should go to better switch ProCurve 2910)
  * eth1, data/private, 10.10.102.254/255.255.0.0 (greentail-eth1, should go to ProCurve 2610)
  * eth2, public, 129.133.1.226/255.255.255.0 (greentail.wesleyan.edu)
  * eth3, ipmi, 192.168.103.254/255.255.0.0 (greentail-ipmi, should go to better switch ProCurve 2910, do later)
  * ib0, ipoib, 10.10.103.254/255.255.0.0 (greentail-ib0)
  * ib1, ipoib, 10.10.104.254/255.255.0.0 (greentail-ib1, configure, might not have cables!, split traffic across ports?)
  
  * Raid 1 mirrored disks (2x250gb)
    * sanscratch (raid 1, no backup), 5 tb
  
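For the record, the greentail eth0 line above would translate into something like the following interface file. This is a sketch only: the path and keys assume stock RHEL-style network-scripts, and the values are copied from the list above:

```shell
# Sketch of /etc/sysconfig/network-scripts/ifcfg-eth0 on greentail
# (assumes RHEL-style network-scripts; values taken from the eth0 line above)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.102.254
NETMASK=255.255.0.0
ONBOOT=yes
```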
  
  
    * eth0, provision, 192.168.102.25(increment by 1)/255.255.0.0 (hp000-eth0, should go to better switch ProCurve 2910)
    * eth1, data/private, 10.10.102.25(increment by 1)/255.255.0.0 (hp000-eth1, should go to ProCurve 2610)
    * eth2, ipmi, 192.168.103.25(increment by 1)/255.255.0.0 (hp000-ipmi, should go to better switch ProCurve 2910, do later)
    * ib0, ipoib, 10.10.103.25(increment by 1)/255.255.0.0 (hp000-ib0)
    * ib1, ipoib, 10.10.104.25(increment by 1)/255.255.0.0 (hp000-ib1, configure, might not have cables!)
  
    * /home mount point for home directory volume ~ 10tb
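The increment-by-1 scheme above can be sketched as one helper that derives every per-node address from a single index. The `hp%03d` naming and the `node_addrs` function are illustrative assumptions; the octet bases are the ones listed:

```shell
# Sketch: derive all per-node hostnames/addresses from one node index,
# per the increment-by-1 plan above (index 0 -> hp000 -> .25 on each net).
node_addrs() {
  i=$1
  printf 'hp%03d-eth0 192.168.102.%d\n' "$i" $((25 + i))
  printf 'hp%03d-eth1 10.10.102.%d\n'   "$i" $((25 + i))
  printf 'hp%03d-ipmi 192.168.103.%d\n' "$i" $((25 + i))
  printf 'hp%03d-ib0 10.10.103.%d\n'    "$i" $((25 + i))
  printf 'hp%03d-ib1 10.10.104.%d\n'    "$i" $((25 + i))
}

node_addrs 3   # e.g. hp003 -> .28 on each network
```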
  
===== Misc =====
  * Systems Insight Manager (SIM) [[http://h18013.www1.hp.com/products/servers/management/hpsim/index.html?jumpid=go/hpsim|HP Link]] (Linux Install and Configure Guide, and User Guide)
    * Do we need a Windows box (virtual) to run the Central Management Server on?
    * install, configure
    * requires an Oracle install? No, hpsmdb (PostgreSQL) is installed by the automatic installation
    * Linux deployment utilities, and management agents installation
    * configure managed systems, automatic discovery
    * configure automatic event handling
  
  * IPoIB
cluster/89.txt · Last modified: 2010/11/22 19:05 by hmeij