    * name: kusu101prov, type: provision
    * eth1: 10.10.101.254/255.255.0.0
    * name: kusu101priv, type: other
  * 4 - gateway & dns: gateway 192.168.101.0 (is not used but required field), dns server 192.168.101.254 (installer node)
  * 5 - host: FQDN kusu101, PCD kusu101 (basically we will not provide internet accessible names)
Now reboot the entire cluster and observe that the changes are permanent. Sidebar: for Pace, you can now assign eth1 on the installer node a pace.edu IP and make the necessary changes to the ProCurve switch, so your users can log into the installer/head node. You still only have 50 GB or so of home directory space, but users can play around.
  
Actually, a better idea: create another node group template from your _BSS template and remove eth1, with naming convention login#N and a starting IP of something like 192.168.101.10 ... call this node group _BSS_login or so. Start addhost and add a new host to this node group. Next, manually add eth1 with an IP in pace.edu and wire it up via the switch to the outside world. Then add this host to the LSF_MASTER_LIST. Now users can log into this node and submit jobs, and stay out of your way on the installer node.
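The LSF_MASTER_LIST change lives in lsf.conf; a minimal sketch, assuming the installer node is kusu101 and the new login node is named login1 (both host names here are examples, and the lsf.conf path varies by install):

```shell
# Fragment of $LSF_ENVDIR/lsf.conf (hypothetical host names):
# first host is the preferred master, the rest are failover candidates
LSF_MASTER_LIST="kusu101 login1"
```

After editing, the LSF daemons need to be reconfigured/restarted on the listed hosts for the change to take effect.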
  
===== Step 5 =====
More fun. Parallel jobs can be submitted over ethernet interconnects but will not achieve the performance of Infiniband interconnects, of course. OpenMPI is a nice MPI flavor because software compiled with it automatically detects whether the host has an HCA card and loads the appropriate libraries. So in order to compile, or run, some OpenMPI examples we need the following:
  
  * yum install libibverbs
  * pdsh yum install libibverbs -q -y
  * yum install gcc-c++
  
  
  * download tarball, stage in /home/apps/src
  * cd /opt; tar zxvf /home/apps/src/mpis.tar.gz
  * pdsh "cd /opt; tar zxvf /home/apps/src/mpis.tar.gz"
  * examples in /opt/openmpi/gnu/examples have been compiled like so:
    * export PATH=/opt/openmpi/gnu/bin:$PATH
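Putting the steps above together, a minimal sketch of building and launching one of the shipped examples (the hostfile name and process count are placeholders; adjust to your site):

```shell
# Put the OpenMPI install staged under /opt on the front of the search paths:
export PATH=/opt/openmpi/gnu/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/gnu/lib:$LD_LIBRARY_PATH

# Then, on a node with the examples present (commented out here since the
# paths are site-specific):
#   cd /opt/openmpi/gnu/examples
#   mpicc -o hello hello_c.c
#   mpirun -np 4 --hostfile ~/hosts ./hello
```

With OpenMPI, the same binary runs over either interconnect; mpirun picks ethernet or Infiniband transport at launch depending on what the hosts have.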
cluster/88.1281550303.txt.gz · Last modified: 2010/08/11 14:11 by hmeij