cluster:160

Differences

This shows you the differences between two versions of the page.

cluster:160 [2017/04/05 11:21]
hmeij07 [OpenHPC page 4]
cluster:160 [2017/05/30 11:07]
hmeij07
Line 19: Line 19:
 </code>
  
-Next we import a template file in which the IPADDR and NETMASK values of the ''ib0'' interface will be replaced with values from the database the database. Add to your deploy scripts lines like
+Next we import a template file in which the IPADDR and NETMASK values of the ''ib0'' interface will be replaced with values from the database. Add lines like the following to your deploy scripts:
  
 <code>
 +
 +wwsh file import /opt/ohpc/pub/examples/network/centos/ifcfg-ib0.ww
 +wwsh -y file set ifcfg-ib0.ww --path=/etc/sysconfig/network-scripts/ifcfg-ib0
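# Note (not part of the original recipe lines above): the imported template must
# also appear in the nodes' provision file list. If your deploy scripts do not
# already handle that, something like the following should work; the node name
# here is only an example:
#   wwsh -y provision set n29 --fileadd=ifcfg-ib0.ww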
 +
  
 wwsh node set $node --netdev=ib0 \
Line 32: Line 36:
  
 Reassemble the VNFS and reimage the nodes. Now you can follow the IPoIB instructions in [[cluster:145|Infiniband]].
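For reference, a minimal sketch of that reassemble-and-reimage step; the chroot path and node names below are examples, not values taken from this page:

<code>
# rebuild the VNFS image from the compute node chroot (path is an example)
wwvnfs --chroot /opt/ohpc/admin/images/centos7
# then reboot the compute nodes so they pick up the new image, for example
pdsh -w n[29-31] reboot
</code>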
 +
 +Then add these lines to the ~test/.bashrc file, resubmit job.mpi, and you will notice that we now run MPI over Infiniband.
  
 <code>
  
-wwsh file import /opt/ohpc/pub/examples/network/centos/ifcfg-ib0.ww
-wwsh -y file set ifcfg-ib0.ww --path=/etc/sysconfig/network-scripts/ifcfg-ib0
+# User specific aliases and functions
+module load gnu/5.4.0
 +module load openmpi/1.10.4 
 +module load prun/1.1 
 +module list
  
-</code>
+# job.102.out
 +/opt/ohpc/pub/prun/1.1/prun 
 +[prun] Master compute host = n29 
 +[prun] Resource manager = slurm 
 +[prun] Launch cmd = mpirun ./a.out
  
 + Hello, world (8 procs total)
 +    --> Process #   0 of   8 is alive. -> n29.localdomain
 +    --> Process #   1 of   8 is alive. -> n29.localdomain
 +    --> Process #   2 of   8 is alive. -> n29.localdomain
 +    --> Process #   3 of   8 is alive. -> n29.localdomain
 +    --> Process #   4 of   8 is alive. -> n31.localdomain
 +    --> Process #   5 of   8 is alive. -> n31.localdomain
 +    --> Process #   6 of   8 is alive. -> n31.localdomain
 +    --> Process #   7 of   8 is alive. -> n31.localdomain
 +
 +</code>
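If you want to double-check that the job really is using the InfiniBand fabric, a couple of quick checks; these are generic Linux / Open MPI 1.10 commands, not output captured on this cluster:

<code>
# on a compute node, ib0 should be up with an IPoIB address
ip addr show ib0
# optionally force Open MPI's verbs (openib) transport before resubmitting
export OMPI_MCA_btl=openib,self,vader
</code>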
  
  
Line 51: Line 75:
 [[cluster:154|OpenHPC page 1]] - [[cluster:155|OpenHPC page 2]] - [[cluster:156|OpenHPC page 3]] - page 4
  
 + --- //[[hmeij@wesleyan.edu|Henk]] 2017/05/30 11:03//
 +
 +**POSTFIX NOTE**
 +
 +In order for users to be able to send email from inside their jobs (for example, job progress reports), install mailx and postfix on the nodes. In ''/etc/postfix/main.cf'' on the nodes define a relayhost (like sms-eth0-private), and on the SMS define a relayhost like internal-mail.domain.edu. On the nodes you will also need to remove the /usr/sbin/sendmail link and create a new link to /usr/sbin/sendmail.postfix.
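A rough sketch of those postfix steps applied to the compute node image; the chroot path and relayhost names are only examples matching the note above:

<code>
# inside the compute node image (chroot path is an example)
export CHROOT=/opt/ohpc/admin/images/centos7
yum -y --installroot=$CHROOT install postfix mailx
echo "relayhost = sms-eth0-private" >> $CHROOT/etc/postfix/main.cf
# point the sendmail link at postfix's sendmail
rm -f $CHROOT/usr/sbin/sendmail
ln -s /usr/sbin/sendmail.postfix $CHROOT/usr/sbin/sendmail
# on the SMS itself, set "relayhost = internal-mail.domain.edu" in /etc/postfix/main.cf
</code>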
 + 
 \\
 **[[cluster:0|Back]]**
cluster/160.txt · Last modified: 2017/05/31 11:07 by hmeij07