</code>
  
Next we import a template file in which the IPADDR and NETMASK values of the ''ib0'' interface will be replaced with values from the Warewulf database (the shipped template is sketched after the commands below). Add lines like the following to your deploy scripts:
  
<code>

wwsh file import /opt/ohpc/pub/examples/network/centos/ifcfg-ib0.ww
wwsh -y file set ifcfg-ib0.ww --path=/etc/sysconfig/network-scripts/ifcfg-ib0

  
wwsh node set $node --netdev=ib0 \
    ...
</code>
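
For reference, the imported ''ifcfg-ib0.ww'' file uses Warewulf macros that are filled in per node at provision time. A rough sketch of what it looks like (check the shipped file for the exact contents):

<code>

# /opt/ohpc/pub/examples/network/centos/ifcfg-ib0.ww (approximate contents)
DEVICE=ib0
BOOTPROTO=static
IPADDR=%{NETDEVS::IB0::IPADDR}
NETMASK=%{NETDEVS::IB0::NETMASK}
ONBOOT=yes

</code>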
  
Reassemble the VNFS and reimage the nodes (a sketch of this step is shown below). Now you can follow the IPoIB instructions at [[cluster:145|Infiniband]].

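A minimal sketch of that rebuild, assuming the node image chroot lives at ''/opt/ohpc/admin/images/centos7.3'' (adjust the path and node list to your site):

<code>

# rebuild the VNFS from the chroot (path is an example)
wwvnfs -y --chroot /opt/ohpc/admin/images/centos7.3

# reboot the compute nodes so they reprovision with the new image
pdsh -w n[29-31] reboot

</code>
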
Then add these lines to the ~test/.bashrc file and resubmit job.mpi; you'll notice we now run MPI over InfiniBand.
  
<code>
  
# User specific aliases and functions
module load gnu/5.4.0
module load openmpi/1.10.4
module load prun/1.1
module list

# job.102.out
/opt/ohpc/pub/prun/1.1/prun
[prun] Master compute host = n29
[prun] Resource manager = slurm
[prun] Launch cmd = mpirun ./a.out

 Hello, world (8 procs total)
    --> Process #   0 of   8 is alive. -> n29.localdomain
    --> Process #   1 of   8 is alive. -> n29.localdomain
    --> Process #   2 of   8 is alive. -> n29.localdomain
    --> Process #   3 of   8 is alive. -> n29.localdomain
    --> Process #   4 of   8 is alive. -> n31.localdomain
    --> Process #   5 of   8 is alive. -> n31.localdomain
    --> Process #   6 of   8 is alive. -> n31.localdomain
    --> Process #   7 of   8 is alive. -> n31.localdomain

</code>
  
  
[[cluster:154|OpenHPC page 1]] - [[cluster:155|OpenHPC page 2]] - [[cluster:156|OpenHPC page 3]] - page 4
  
 --- //[[hmeij@wesleyan.edu|Henk]] 2017/05/30 11:03//

**POSTFIX NOTE**

To let users send email from inside their jobs (for example, job progress reports), install mailx on the nodes. In ''/etc/ssmtp/ssmtp.conf'' define the relay host (such as sms-eth0-private) as the "mailhub". No other changes are needed.

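For example, a rough sketch assuming the node image chroot lives at ''/opt/ohpc/admin/images/centos7.3'' and the head node's private interface is named ''sms-eth0-private'':

<code>

# install mailx into the node image (chroot path is an example)
yum -y --installroot=/opt/ohpc/admin/images/centos7.3 install mailx

# then, in the image's /etc/ssmtp/ssmtp.conf, point the mailhub at the head node
mailhub=sms-eth0-private

</code>
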
\\
**[[cluster:0|Back]]**