    * depending on switch IP in 192.168.102.x or 10.10.102.x
    * voltaire console can be stuffed in either

  * head node will be connected to our private network via two link-aggregated ethernet cables in the 10.10.x.y range so current home directories can be mounted somewhere (these dirs will not be available on the back-end nodes); see the bonding sketch after this list.
  
  * x.y.z.255 is broadcast
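
A minimal sketch of what the two-cable aggregation could look like with RHEL-style network scripts; the bond0 device name, the example IP, the eth4 slave name, and the bonding mode are all assumptions, not taken from these notes:

<code>
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- hypothetical device and address
DEVICE=bond0
IPADDR=10.10.0.10          # example address in the 10.10.x.y range
NETMASK=255.255.0.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"   # LACP; mode is an assumption

# /etc/sysconfig/network-scripts/ifcfg-eth4 -- repeat for the second slave NIC
DEVICE=eth4
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
</code>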
    * do we need an iLo eth? in range 192.168.104.254?
  * eth1, data/private, 10.10.102.254/255.255.0.0 (greentail-eth1, should go to ProCurve 2610)
  * eth2, public, 129.133.1.226/255.255.255.0 (greentail.wesleyan.edu, we provide cable connection)
  * eth3 (over eth2), ipmi, 192.168.103.254/255.255.0.0 (greentail-ipmi, should go to better switch ProCurve 2910, do later)
    * see discussion iLo/IPMI under CMU; a config sketch follows this list
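
A minimal sketch for eth1 and the IPMI LAN settings, assuming a RHEL-style distro and ipmitool; the addresses are the ones listed above, but the LAN channel number is an assumption:

<code bash>
# static config for eth1 (data/private) -- RHEL-style network scripts assumed
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<'EOF'
DEVICE=eth1
IPADDR=10.10.102.254
NETMASK=255.255.0.0
ONBOOT=yes
BOOTPROTO=none
EOF

# point the BMC (greentail-ipmi) at its address; channel 1 is an assumption
ipmitool lan set 1 ipaddr 192.168.103.254
ipmitool lan set 1 netmask 255.255.0.0
</code>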
  
  * Three volumes to start with:
    * home (raid 6), 10 tb
    * snapshot (raid 6), 10 tb ... see the ToDo section below
    * sanscratch (raid 1 or 0, no backup), 5 tb
  
  * SIM
    * ib1, ipoib, 10.10.104.25 (increment by 1)/255.255.0.0 (hp000-ib1, configure, might not have cables!); see the ipoib sketch below
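
The node-side ipoib file would follow the same pattern as the eth interfaces; a sketch assuming RHEL-style scripts and that the IB stack (ib_ipoib module) is already in place:

<code bash>
# ifcfg-ib1 on hp000; bump IPADDR by 1 per node as noted above
cat > /etc/sysconfig/network-scripts/ifcfg-ib1 <<'EOF'
DEVICE=ib1
IPADDR=10.10.104.25
NETMASK=255.255.0.0
ONBOOT=yes
BOOTPROTO=none
EOF
</code>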
  
    * /home mount point for home directory volume ~ 10 tb (contains /home/apps/src)
    * /snapshot mount point for snapshot volume ~ 10 tb
    * /sanscratch mount point for sanscratch volume ~ 5 tb
    * logical volume LOCALSCRATCH: mount at /localscratch ~ 100 gb (60 gb left for OS)
    * logical volumes ROOT/VAR/BOOT/TMP: defaults; an fstab sketch follows this list
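
A minimal /etc/fstab sketch for a compute node, assuming the three shared volumes are NFS-exported by the head node over the data network (that, and the volume group/filesystem names, are assumptions; only the mount points and sizes come from these notes):

<code>
# shared volumes, assumed NFS-exported by the head node (greentail-eth1)
greentail-eth1:/home        /home          nfs   defaults   0 0
greentail-eth1:/snapshot    /snapshot      nfs   defaults   0 0
greentail-eth1:/sanscratch  /sanscratch    nfs   defaults   0 0
# node-local scratch logical volume (hypothetical VG/LV names)
/dev/VolGroup00/LOCALSCRATCH  /localscratch  ext3  defaults  1 2
</code>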
    * where in data center (do later), based on environmental works
  
===== ToDo =====

All to be done later, after the HP cluster is up.

  * Backups.  Use Linux and rsync trickery to provide snapshots? [[http://forum.synology.com/enu/viewtopic.php?f=9&t=11471|External Link]] and another [[http://www.mikerubel.org/computers/rsync_snapshots/|External Link]]; a rotation sketch follows this list
    * Exclude very large files?
    * petaltail:/root/snapshot.sh for example
    * or better [[http://www.rsnapshot.org/|http://www.rsnapshot.org/]]
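
A minimal rotation sketch of the hard-link approach from the Mike Rubel link above, keeping four generations on the snapshot volume; the /snapshot/home.N layout and the exclude pattern are assumptions:

<code bash>
#!/bin/bash
# rotate hard-linked snapshots of /home on the snapshot volume
rm -rf /snapshot/home.3
[ -d /snapshot/home.2 ] && mv /snapshot/home.2 /snapshot/home.3
[ -d /snapshot/home.1 ] && mv /snapshot/home.1 /snapshot/home.2
# clone the newest snapshot as hard links: fast, no file data copied
[ -d /snapshot/home.0 ] && cp -al /snapshot/home.0 /snapshot/home.1
# rsync writes changed files as fresh copies, so the hard links in the
# older generations keep the old contents; '*.tar.gz' stands in for
# the "very large files" exclusion mentioned above
rsync -a --delete --exclude='*.tar.gz' /home/ /snapshot/home.0/
</code>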

  * Lava.  Install from source and evaluate.
  
\\
**[[cluster:0|Back]]**