** Perf Tools **
  
  * ''yum -y groupinstall ohpc-perf-tools-gnu''
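
To see what that group would pull in before committing to the install, a quick check like this should work (''yum groupinfo'' is standard yum; the group name is the one from the bullet above):

<code>
# list the packages contained in the perf tools group before installing
yum groupinfo ohpc-perf-tools-gnu
</code>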

<code>

yum -y groupinstall ohpc-runtimes-gnu
# Install parallel libs for all available MPI toolchains
yum -y groupinstall ohpc-parallel-libs-gnu-mpich
yum -y groupinstall ohpc-parallel-libs-gnu-mvapich2
yum -y groupinstall ohpc-parallel-libs-gnu-openmpi
  
# these groups pull in things like
# netcdf, hdf5, numpy and scipy for python, fftw, scalapack
  
</code>
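
A quick way to confirm the parallel libraries actually landed is to query rpm for the toolchain-specific packages; a minimal sketch, assuming the usual OpenHPC ''<lib>-gnu-<mpi>-ohpc'' package naming and the stock module layout:

<code>
# list libraries built against the gnu/openmpi toolchain (naming is an assumption)
rpm -qa | grep gnu-openmpi-ohpc | sort

# toolchain-dependent modulefiles live under moduledeps in the stock layout
ls /opt/ohpc/pub/moduledeps/
</code>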
  
Finish by installing Intel's Parallel Studio (icc/ifort).

{{:cluster:install_guide-centos7.2-slurm-1.2-x86_64.pdf|install_guide-centos7.2-slurm-1.2-x86_64.pdf}}
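
Once the Intel installer has run, OpenHPC ships compatibility packages that generate the matching modulefiles; a rough sketch, where the package name is an assumption based on the OpenHPC naming scheme and should be checked against the repo first:

<code>
# see which Intel integration packages the ohpc repo offers
yum search intel | grep -i ohpc

# install the compiler compatibility package (name is assumed, verify above)
yum -y install intel-compilers-devel-ohpc
</code>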

As user ''test'', verify the toolchain and submit a test job:

<code>

# see what modulefiles are available
module avail
module spider
which mpicc
# load the gnu compiler and openmpi toolchain
module load gnu/5.4.0
module load openmpi/1.10.4
which gcc
which mpicc
# build the mpi hello world example and copy the sample job script
mpicc -O3 /opt/ohpc/pub/examples/mpi/hello.c
cp /opt/ohpc/pub/examples/slurm/job.mpi .
# prun lives in its own module
which prun
find /opt/ohpc/pub -name prun
module spider prun
module load prun/1.1
which prun
# submit the job and watch the queue
sbatch job.mpi
squeue

</code>
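
Once the job leaves the queue its stdout lands in the submit directory; a small sketch, where the exact output file name depends on what the sample ''job.mpi'' sets with ''#SBATCH -o'':

<code>
# check the queue for the test user's jobs
squeue -u test

# find the newest output file in the submit directory and print it
ls -lt *.out
cat $(ls -t *.out | head -1)
</code>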

You do need to install the Infiniband section of the recipe in order to run OpenMPI jobs, even over plain ethernet:

<code>

yum -y groupinstall "Infiniband Support"
yum -y install infinipath-psm
systemctl enable rdma
systemctl start rdma

# the recipe misses this for the openmpi flavor: add the libraries to the compute node image
yum -y --installroot=/data/ohpc/images/centos7.2 install libibverbs opensm-libs infinipath-psm

# then remake the vnfs image (sketch below)

</code>
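
Remaking the VNFS so the compute nodes pick up the new libraries follows the usual Warewulf step from the recipe; a minimal sketch, assuming the chroot path used above:

<code>
# rebuild the vnfs image from the updated chroot
wwvnfs -y --chroot /data/ohpc/images/centos7.2

# then reboot or re-provision the compute nodes to pick up the new image
</code>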

The following shows up when running MPI over ethernet:

<code>

[prun] Master compute host = n29
[prun] Resource manager = slurm
[prun] Launch cmd = mpirun ./a.out
--------------------------------------------------------------------------
[[49978,1],4]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: n31

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------

 Hello, world (8 procs total)
    --> Process #   2 of   8 is alive. -> n29.localdomain
    --> Process #   3 of   8 is alive. -> n29.localdomain
    --> Process #   0 of   8 is alive. -> n29.localdomain
    --> Process #   1 of   8 is alive. -> n29.localdomain
    --> Process #   6 of   8 is alive. -> n31.localdomain
    --> Process #   7 of   8 is alive. -> n31.localdomain
    --> Process #   4 of   8 is alive. -> n31.localdomain
    --> Process #   5 of   8 is alive. -> n31.localdomain

</code>
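
The warning is harmless, but if you want to silence it until ib0 is configured, OpenMPI can be told to skip the openib transport; a minimal sketch, assuming the standard MCA environment variable is inherited by the mpirun that prun launches:

<code>
# exclude the openib byte transfer layer for jobs started from this environment
export OMPI_MCA_btl="^openib"

# or pass it per run
mpirun --mca btl "^openib" ./a.out
</code>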

See page 4 for the ib0 configuration, which is incomplete in the recipe.

[[cluster:154|OpenHPC page 1]] - [[cluster:155|OpenHPC page 2]] - page 3 - [[cluster:160|OpenHPC page 4]]
  
\\
**[[cluster:0|Back]]**