
**Perf Tools**

  * ''yum -y groupinstall ohpc-perf-tools-gnu''
  * see Appendix C of the install guide; a quick preview of the group's contents is sketched below
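
To see what that package group would pull in before committing to the install, yum can list the group's contents. A minimal check, assuming the OpenHPC repository is already configured on the master node:

<code>

# list the packages in the perf-tools group without installing anything
yum groupinfo ohpc-perf-tools-gnu

</code>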

**3rd Party Libs & Tools**

  * OpenHPC provides package aliases for these 3rd party libraries and utilities that can be used to install available libraries for use with the GNU compiler family toolchain.

<code>

# Install libs for the GNU compiler family toolchain
yum -y groupinstall ohpc-serial-libs-gnu
yum -y groupinstall ohpc-io-libs-gnu
yum -y groupinstall ohpc-python-libs-gnu
yum -y groupinstall ohpc-runtimes-gnu

# Install parallel libs for all available MPI toolchains
yum -y groupinstall ohpc-parallel-libs-gnu-mpich
yum -y groupinstall ohpc-parallel-libs-gnu-mvapich2
yum -y groupinstall ohpc-parallel-libs-gnu-openmpi

# this pulls in things like netcdf, hdf5, fftw, scalapack,
# and numpy/scipy for python

</code>
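
A quick way to confirm the aliases actually pulled in libraries is to list what landed from the OpenHPC repository; a rough check, assuming the standard OpenHPC layout under ''/opt/ohpc/pub'' (exact package names vary by release):

<code>

# OpenHPC-packaged libraries built for the gnu toolchain
rpm -qa | grep ohpc | grep gnu | sort

# compiler/MPI-dependent modulefiles land under the public module tree
ls /opt/ohpc/pub/moduledeps/

</code>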

Finish by installing Intel's Parallel Studio XE (icc/ifort).

{{:cluster:install_guide-centos7.2-slurm-1.2-x86_64.pdf|install_guide-centos7.2-slurm-1.2-x86_64.pdf}}
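
Roughly: Parallel Studio itself is licensed and installed from Intel's own installer (typically under ''/opt/intel''); what gets added on the OpenHPC side are compatibility packages that expose icc/ifort and Intel MPI through Lmod. A sketch, assuming the package names used in the OpenHPC 1.2 recipe; verify against the guide above:

<code>

# OpenHPC compatibility/modulefile packages for the Intel toolchain
# (requires Parallel Studio to be installed already)
yum -y install intel-compilers-devel-ohpc
yum -y install intel-mpi-devel-ohpc

</code>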

As the ''test'' user, verify the compiler and MPI toolchain, then submit a test job:

<code>

# what does Lmod offer out of the box?
module avail
module spider
which mpicc                 # not on PATH yet, MPI comes from a module

# load the GNU compiler and Open MPI toolchain
module load gnu/5.4.0
module load openmpi/1.10.4
which gcc
which mpicc

# build the MPI example and grab the example Slurm job script
mpicc -O3 /opt/ohpc/pub/examples/mpi/hello.c
cp /opt/ohpc/pub/examples/slurm/job.mpi .

# prun is OpenHPC's job launch wrapper, shipped as its own module
which prun
find /opt/ohpc/pub -name prun
module spider prun
module load prun/1.1
which prun

# submit and watch the queue
sbatch job.mpi
squeue

</code>
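
The copied ''job.mpi'' is a plain Slurm batch script that launches the freshly built ''a.out'' through ''prun''. A minimal sketch of that shape; the node/task counts are placeholders matching the 8-process output further down, not necessarily the stock example:

<code>

#!/bin/bash
#SBATCH -J test            # job name
#SBATCH -o job.%j.out      # stdout file, %j expands to the job id
#SBATCH -N 2               # nodes requested
#SBATCH -n 8               # total MPI tasks
#SBATCH -t 00:10:00        # wall time hh:mm:ss

prun ./a.out               # OpenHPC launch wrapper picks mpirun under slurm

</code>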

You do need to install the Infiniband support section even when running Open MPI over plain ethernet; the OpenHPC Open MPI build expects the verbs/PSM libraries to be present.

<code>

# on the master (SMS) node
yum -y groupinstall "Infiniband Support"
yum -y install infinipath-psm
systemctl enable rdma
systemctl start rdma

# the recipe misses this for the openmpi flavor: the compute image
# also needs the verbs/PSM libraries
yum -y --installroot=/data/ohpc/images/centos7.2 install libibverbs opensm-libs infinipath-psm

# then remake the vnfs (see below)

</code>
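
Remaking the VNFS is the usual Warewulf step after changing the chroot; a sketch, assuming the compute image lives at the path used above:

<code>

# rebuild the compute node image so the added libraries get provisioned
wwvnfs --chroot /data/ohpc/images/centos7.2

# nodes pick up the new image on their next (re)boot/provision

</code>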

The following warning shows up when running MPI over ethernet; Open MPI finds no OpenFabrics interface and falls back to another (TCP) transport:

<code>

[prun] Master compute host = n29
[prun] Resource manager = slurm
[prun] Launch cmd = mpirun ./a.out
--------------------------------------------------------------------------
[[49978,1],4]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: n31

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------

 Hello, world (8 procs total)
    --> Process #   2 of   8 is alive. -> n29.localdomain
    --> Process #   3 of   8 is alive. -> n29.localdomain
    --> Process #   0 of   8 is alive. -> n29.localdomain
    --> Process #   1 of   8 is alive. -> n29.localdomain
    --> Process #   6 of   8 is alive. -> n31.localdomain
    --> Process #   7 of   8 is alive. -> n31.localdomain
    --> Process #   4 of   8 is alive. -> n31.localdomain
    --> Process #   5 of   8 is alive. -> n31.localdomain

</code>
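
If the warning is just noise on ethernet-only nodes, Open MPI can be told to skip the OpenFabrics transport altogether via an MCA parameter; a sketch for the Open MPI 1.10.x loaded above:

<code>

# exclude the openib BTL so Open MPI goes straight to TCP;
# put this in job.mpi before the prun line (or export it at submit time)
export OMPI_MCA_btl=^openib

</code>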
See page 4 for the ib0 configuration, which is incomplete in the recipe.

[[cluster:154|OpenHPC page 1]] - [[cluster:155|OpenHPC page 2]] - page 3 - [[cluster:160|OpenHPC page 4]]
  
 \\ \\
**[[cluster:0|Back]]**