cluster:89 [2010/08/17 15:20] hmeij
  * Freight Elevator and pallet jack available
  
===== Network =====

Basically:

  * x.y.z.255 is broadcast
  * x.y.z.254 is the head or login node
  * x.y.z.0 is the gateway
  * x.y.z.<25 is for all switches and console ports
  * x.y.z.25 (up to 253) is for all compute nodes

We are planning to ingest our Dell cluster (37 nodes) and our Blue Sky Studios cluster (130 nodes) into this setup, hence the approach.

Netmask is, finally, 255.255.0.0 (excluding the public 129.133 subnet).

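The last-octet convention above can be sketched as a small classifier; a minimal illustration (the function name and the example addresses below are not from this page, just the convention applied):

```python
# Illustrative sketch (not part of the cluster setup): classify an address
# x.y.z.w by its last octet, following the convention described above.
def role(addr: str) -> str:
    last = int(addr.rsplit(".", 1)[1])
    if last == 255:
        return "broadcast"        # x.y.z.255
    if last == 254:
        return "head/login node"  # x.y.z.254
    if last == 0:
        return "gateway"          # x.y.z.0
    if last < 25:
        return "switch/console"   # x.y.z.<25
    return "compute node"         # x.y.z.25 up to 253

print(role("192.168.102.254"))  # head/login node
```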
===== DL380 G7 =====
[[http://h10010.www1.hp.com/wwpc/us/en/sm/WF31a/15351-15351-3328412-241644-241475-4091412.html|HP Link]] (head node)
  
  * Dual power (one to UPS, one to utility, do later)
  
  * hostname [[http://www.ct.gov/dep/cwp/view.asp?A=2723&Q=325780|greentail]], another local "tail", also in reference to HP being 18-24% more efficient in power/cooling
  * eth0, provision, 192.168.102.254/255.255.0.0 (greentail-eth0, should go to better switch ProCurve 2910)
  * eth1, data/private, 10.10.102.254/255.255.0.0 (greentail-eth1, should go to ProCurve 2610)
  * eth2, public, 129.133.1.226/255.255.255.0 (greentail.wesleyan.edu)
  * eth3, ipmi, 192.168.103.254/255.255.0.0 (greentail-ipmi, should go to better switch ProCurve 2910, do later)
  * ib0, ipoib, 10.10.103.254/255.255.0.0 (greentail-ib0)
  * ib1, ipoib, 10.10.104.254/255.255.0.0 (greentail-ib1, configure, might not have cables!, split traffic across ports?)
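The interface list above maps directly to host entries; a minimal sketch that emits /etc/hosts-style lines (addresses and names are from this page, but emitting them into a hosts file is an assumption about usage):

```python
# Sketch: emit /etc/hosts-style lines for the head node interfaces listed
# above. IPs and hostnames come from this page; the hosts-file format is an
# assumption about how they would be consumed.
interfaces = [
    ("192.168.102.254", "greentail-eth0"),        # provision
    ("10.10.102.254", "greentail-eth1"),          # data/private
    ("129.133.1.226", "greentail.wesleyan.edu"),  # public
    ("192.168.103.254", "greentail-ipmi"),        # ipmi (do later)
    ("10.10.103.254", "greentail-ib0"),           # ipoib
    ("10.10.104.254", "greentail-ib1"),           # ipoib
]

hosts_lines = [f"{ip}\t{name}" for ip, name in interfaces]
print("\n".join(hosts_lines))
```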
  
  * RAID 1 mirrored disks (2x 250 GB)
  
===== StorageWorks MSA60 =====
[[http://h10010.www1.hp.com/wwpc/us/en/sm/WF25a/12169-304616-241493-241493-241493-4118559.html|HP Link]] (storage device)
  
  * Dual power (one to UPS, one to utility, do later)
  
  * Three volumes to start with:
    * home (RAID 6, design a backup path, do later), 10 TB
    * apps (RAID 6, design a backup path, do later), 1 TB
    * sanscratch (RAID 1, no backup), 5 TB

  * Systems Insight Manager (SIM) [[http://h18013.www1.hp.com/products/servers/management/hpsim/index.html?jumpid=go/hpsim|HP Link]] (Linux Install and Configure Guide, and User Guide)
    * Do we need a Windows box (virtual) to run the Manager on?
    * install, configure
    * requires an Oracle install? no, hpsmdb (PostgreSQL) is installed with the automatic installation
    * Linux deployment utilities, and management agents installation
    * configure managed systems, automatic discovery
    * configure automatic event handling

  
===== SL2x170z G6 =====
[[http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c01800572&prodTypeId=18964&prodSeriesId=489496|HP Link]] (compute nodes)
  
    * node names hp000, increment by 1
    * eth0, provision, 192.168.102.25 (increment by 1)/255.255.0.0 (hp000-eth0, should go to better switch ProCurve 2910)
    * eth1, data/private, 10.10.102.25 (increment by 1)/255.255.0.0 (hp000-eth1, should go to ProCurve 2610)
    * eth2, ipmi, 192.168.103.25 (increment by 1)/255.255.0.0 (hp000-ipmi, should go to better switch ProCurve 2910, do later)
    * ib0, ipoib, 10.10.103.25 (increment by 1)/255.255.0.0 (hp000-ib0)
    * ib1, ipoib, 10.10.104.25 (increment by 1)/255.255.0.0 (hp000-ib1, configure, might not have cables!)
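The naming and numbering convention above (hp000, hp001, ... with the last octet starting at 25 and capped at 253) can be generated programmatically; a sketch, with the function name and node count being assumptions:

```python
# Sketch: generate (name, interface, address) entries for n compute nodes,
# following the convention above: names hp000, hp001, ... and last octet
# 25, 26, ... (up to 253) on each network. Subnet prefixes are from this page.
def compute_node_addrs(n: int):
    networks = {
        "eth0 (provision)": "192.168.102",
        "eth1 (data/private)": "10.10.102",
        "eth2 (ipmi)": "192.168.103",
        "ib0 (ipoib)": "10.10.103",
        "ib1 (ipoib)": "10.10.104",
    }
    for i in range(n):
        last = 25 + i
        if last > 253:
            raise ValueError("last octet past 253; addressing scheme exhausted")
        name = f"hp{i:03d}"
        for nic, prefix in networks.items():
            yield name, nic, f"{prefix}.{last}"

# First node, first interface:
print(next(compute_node_addrs(1)))  # ('hp000', 'eth0 (provision)', '192.168.102.25')
```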
  
    * /home mount point for home directory volume ~ 10 TB
  
===== Misc =====

  * IPoIB
    * configuration, fine tune
    * monitor
  
  * Cluster Management Utility (CMU)
  * KVM utility
    * functionality

  * Placement
    * where in the data center (do later), based on environmental work
  
  
\\
**[[cluster:0|Back]]**
cluster/89.txt · Last modified: 2010/11/22 19:05 by hmeij