OpenHPC page 3

Tools

yum -y groupinstall ohpc-autotools
yum -y install valgrind-ohpc
yum -y install EasyBuild-ohpc
yum -y install spack-ohpc
yum -y install R_base-ohpc
  • “Valgrind is an instrumentation framework for building dynamic analysis tools. There are Valgrind tools that can automatically detect many memory management and threading bugs” (e.g. the memcheck tool; a quick sketch follows this list)
  • “Welcome to the documentation of EasyBuild, a software build and installation framework that allows you to manage (scientific) software on High Performance Computing (HPC) systems in an efficient way.”
  • “Spack is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments.”
  • R_base contains R and Rscript
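
A quick way to confirm the tools are usable once installed; a minimal sketch, assuming the module is simply named valgrind (verify with module avail on your install):

module load valgrind
valgrind --tool=memcheck --leak-check=full ./a.out   # memcheck run against a test binary
module unload valgrind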

Compilers

yum install gnu-compilers-ohpc

/opt/ohpc/pub/compiler/gcc/5.4.0/bin/c++
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/cpp
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/g++
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/gcc
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/gcc-ar
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/gcc-nm
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/gcc-ranlib
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/gcov
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/gcov-tool
/opt/ohpc/pub/compiler/gcc/5.4.0/bin/gfortran
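
To verify the module-provided compilers take precedence over the system gcc, a minimal check (uses the gnu/5.4.0 module name that also appears in the user test further down):

module load gnu/5.4.0
which gcc          # should report /opt/ohpc/pub/compiler/gcc/5.4.0/bin/gcc
gcc --version
gfortran --version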

MPIs

  • These MPI families work over both Ethernet and InfiniBand networks; a sketch for switching between them follows the path listing below.
yum -y install openmpi-gnu-ohpc mvapich2-gnu-ohpc mpich-gnu-ohpc

/opt/ohpc/pub/mpi/openmpi-gnu/1.10.4/bin/mpicc
/opt/ohpc/pub/mpi/openmpi-gnu/1.10.4/bin/mpirun
/opt/ohpc/pub/mpi/mvapich2-gnu/2.2/bin/mpicc
/opt/ohpc/pub/mpi/mvapich2-gnu/2.2/bin/mpirun
/opt/ohpc/pub/mpi/mpich-gnu-ohpc/3.2/bin/mpicc
/opt/ohpc/pub/mpi/mpich-gnu-ohpc/3.2/bin/mpirun
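
With Lmod you can switch between the installed MPI families without starting a new shell; a sketch using the module names shown elsewhere on this page:

module load gnu/5.4.0 openmpi/1.10.4
which mpicc                   # Open MPI wrapper under /opt/ohpc/pub/mpi/openmpi-gnu/1.10.4
module swap openmpi mvapich2
which mpicc                   # now the MVAPICH2 wrapper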

Perf Tools

  • yum -y groupinstall ohpc-perf-tools-gnu
  • See Appendix C of the install guide (install_guide-centos7.2-slurm-1.2-x86_64.pdf); a quick way to inspect the group is sketched below.
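
Before (or after) installing, yum can show what the group actually contains; a minimal sketch:

yum groupinfo ohpc-perf-tools-gnu    # list the packages this group pulls in
module avail                         # after the install, the new perf tool modules show up here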

3rd Party & Libs Tools

  • OpenHPC provides package aliases for these 3rd party libraries and utilities that can be used to install available libraries for use with the GNU compiler family toolchain.

# Install libs for all available GNU compiler family toolchains
yum -y groupinstall ohpc-serial-libs-gnu
yum -y groupinstall ohpc-io-libs-gnu
yum -y groupinstall ohpc-python-libs-gnu
yum -y groupinstall ohpc-runtimes-gnu
# Install parallel libs for all available MPI toolchains
yum -y groupinstall ohpc-parallel-libs-gnu-mpich
yum -y groupinstall ohpc-parallel-libs-gnu-mvapich2
yum -y groupinstall ohpc-parallel-libs-gnu-openmpi

# this pulls in things like netcdf, hdf5, numpy and scipy for python, fftw, scalapack
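
The parallel libraries only become visible in the module tree once a compiler and an MPI family are loaded; a sketch for checking what arrived (module name spellings such as phdf5 are assumptions, verify with module avail):

module load gnu/5.4.0 openmpi/1.10.4
module avail                  # should now list entries like phdf5, netcdf, fftw, scalapack
module spider hdf5            # shows every hdf5 build and what must be loaded first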

Finish by installing Intel's Parallel Studio (icc/ifort).

install_guide-centos7.2-slurm-1.2-x86_64.pdf
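
OpenHPC also ships compatibility packages that hook the Intel compilers and Intel MPI into the module hierarchy once Parallel Studio itself is installed; the package names below are a sketch, check them against the install guide before using them:

yum -y install intel-compilers-devel-ohpc   # module files and OpenHPC rebuilds for the Intel compilers
yum -y install intel-mpi-devel-ohpc         # same for Intel MPI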

As the test user, verify the toolchain and run a test job:

module avail                      # modules visible on the default path
module spider                     # every module Lmod knows about
which mpicc                       # nothing found yet, no MPI loaded
module load gnu/5.4.0             # GNU compiler toolchain
module load openmpi/1.10.4        # Open MPI built with that toolchain
which gcc                         # now /opt/ohpc/pub/compiler/gcc/5.4.0/bin/gcc
which mpicc                       # the Open MPI compiler wrapper
mpicc -O3 /opt/ohpc/pub/examples/mpi/hello.c
cp /opt/ohpc/pub/examples/slurm/job.mpi .
which prun                        # not found until its module is loaded
find /opt/ohpc/pub -name prun
module spider prun                # shows the available prun versions
module load prun/1.1
which prun
sbatch job.mpi                    # submit the example job
squeue                            # watch it run
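
The copied job.mpi is a small Slurm batch script; its contents are roughly the sketch below (the exact directives in the shipped example may differ):

#!/bin/bash
#SBATCH -J test             # job name
#SBATCH -o job.%j.out       # stdout/stderr file
#SBATCH -N 2                # number of nodes
#SBATCH -n 16               # total MPI tasks
#SBATCH -t 00:30:00         # wall clock limit

prun ./a.out                # prun picks the right mpirun for the loaded MPI family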

Even when running over Ethernet with Open MPI, you still need to install the InfiniBand support packages, because the OpenHPC Open MPI build expects the verbs/PSM libraries to be present:

  yum -y groupinstall "Infiniband Support"
  yum -y install infinipath-psm
  systemctl enable rdma
  systemctl start rdma

# the openmpi flavor of the recipe is missing this step:

  yum -y --installroot=/data/ohpc/images/centos7.2 install libibverbs opensm-libs infinipath-psm

# remake the VNFS image so the compute nodes pick up the new packages
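
A sketch of that rebuild step, assuming the Warewulf chroot path used above and a VNFS named centos7.2:

wwvnfs --chroot /data/ohpc/images/centos7.2
# then reboot or re-provision the compute nodes so they boot the updated image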

The following warning shows up when running MPI over Ethernet:

[prun] Master compute host = n29
[prun] Resource manager = slurm
[prun] Launch cmd = mpirun ./a.out
--------------------------------------------------------------------------
[[49978,1],4]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: n31

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------

 Hello, world (8 procs total)
    --> Process #   2 of   8 is alive. -> n29.localdomain
    --> Process #   3 of   8 is alive. -> n29.localdomain
    --> Process #   0 of   8 is alive. -> n29.localdomain
    --> Process #   1 of   8 is alive. -> n29.localdomain
    --> Process #   6 of   8 is alive. -> n31.localdomain
    --> Process #   7 of   8 is alive. -> n31.localdomain
    --> Process #   4 of   8 is alive. -> n31.localdomain
    --> Process #   5 of   8 is alive. -> n31.localdomain
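
The openib warning above is harmless on Ethernet-only nodes; if you want to silence it, tell Open MPI to skip the OpenFabrics transport (a sketch, valid for Open MPI 1.10):

mpirun --mca btl ^openib ./a.out
# or set it once per session so prun-launched jobs pick it up:
export OMPI_MCA_btl="^openib"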

See page 4 for the ib0 configuration, which is incomplete in the recipe.

OpenHPC page 1 - OpenHPC page 2 - page 3 - OpenHPC page 4

