**[[cluster:

==== OpenHPC page 3 ====
** Tools **

<code>
yum -y groupinstall ohpc-autotools
yum -y install valgrind-ohpc
yum -y install EasyBuild-ohpc
yum -y install spack-ohpc
yum -y install R_base-ohpc
</code>

  * "..."
  * http://...
  * "..."
  * "Spack is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments."
  * http://...
  * ''/...''
  * R_base contains ''...''

** Compilers **

<code>
yum install gnu-compilers-ohpc

/...
</code>

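A quick way to confirm the GNU toolchain is wired into Lmod is sketched below; the version used later on this page is 5.4.0, but adjust to whatever ''module avail'' reports.

<code>
# sketch: verify the gnu-compilers-ohpc toolchain via Lmod
module load gnu
which gcc        # should point into the OpenHPC install tree, not /usr/bin/gcc
gcc --version
gfortran --version
</code>
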
+ | |||
+ | |||
+ | **MPIs** | ||
+ | |||
+ | * Both for ethernet and infiniband networks | ||
+ | |||
+ | < | ||
+ | |||
+ | yum -y install openmpi-gnu-ohpc mvapich2-gnu-ohpc mpich-gnu-ohpc | ||
+ | |||
+ | / | ||
+ | / | ||
+ | / | ||
+ | / | ||
+ | / | ||
+ | / | ||
+ | |||
+ | |||
+ | </ | ||
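To confirm a compiler/MPI pair works before involving the scheduler, a tiny interactive test like the sketch below can be run on the master or a node; the module names are assumptions, and the program is just an illustration.

<code>
# sketch: compile and run a trivial MPI program with the gnu + openmpi pair
# (interactive test outside slurm; module names assumed, see "module avail")
module load gnu openmpi
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpicc -O3 hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi
</code>
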
+ | |||
+ | ** Perf Tools ** | ||
+ | |||
+ | * '' | ||
+ | * Appendix C | ||
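
The matching group install is presumably the one below; the group name is an assumption based on the ''ohpc-*-gnu'' naming pattern used elsewhere on this page, so verify it with ''yum grouplist''.

<code>
# assumed group name, following the ohpc-*-gnu pattern used on this page
yum -y groupinstall ohpc-perf-tools-gnu
</code>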
+ | |||
+ | **3rd Party & Libs Tools** | ||
+ | |||
+ | * OpenHPC provides package aliases for these 3rd party libraries and utilities that can | ||
+ | be used to install available libraries for use with the GNU compiler family toolchain. | ||
+ | |||
+ | < | ||
+ | |||
+ | # Install libs for all available GNU compiler family toolchain | ||
+ | yum -y groupinstall ohpc-serial-libs-gnu | ||
+ | yum -y groupinstall ohpc-io-libs-gnu | ||
+ | yum -y groupinstall ohpc-python-libs-gnu | ||
+ | yum -y groupinstall ohpc-runtimes-gnu | ||
+ | # Install parallel libs for all available MPI toolchains | ||
+ | yum -y groupinstall ohpc-parallel-libs-gnu-mpich | ||
+ | yum -y groupinstall ohpc-parallel-libs-gnu-mvapich2 | ||
+ | yum -y groupinstall ohpc-parallel-libs-gnu-openmpi | ||
+ | |||
+ | # things like | ||
+ | # netcdf, hdf5, numpy and scipy for python, fftw, scalapack | ||
+ | |||
+ | </ | ||
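Once a compiler and an MPI family are loaded, the libraries installed above should show up in the Lmod hierarchy; a quick check might look like the sketch below (the library names are examples only).

<code>
# sketch: the 3rd party libs appear as modules under the loaded compiler/MPI pair
module load gnu openmpi
module avail          # should now list entries such as fftw, hdf5, scalapack
module spider fftw    # shows which toolchain combinations provide it
</code>
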
+ | |||
+ | Finish with installing Intel' | ||
+ | |||
+ | {{: | ||
+ | |||
+ | As user '' | ||
+ | |||
<code>
module avail
module spider
which mpicc
module load gnu/5.4.0
module load openmpi/...
which gcc
which mpicc
mpicc -O3 /...
cp /...
which prun
find /...
module spider prun
module load prun/1.1
which prun
sbatch job.mpi
squeue
</code>
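
The contents of ''job.mpi'' are not shown above; a hypothetical minimal version, launching with ''prun'' the way the output further down does, could look like this (job name, node and task counts are placeholders).

<code>
#!/bin/bash
# hypothetical minimal job.mpi; #SBATCH values are placeholders
#SBATCH -J test_mpi
#SBATCH -N 2
#SBATCH -n 8
#SBATCH -o test_mpi.%j.out
prun ./a.out
</code>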
+ | |||
+ | You do need to install the Infiniband section so you can run over ethernet with OpenMPI | ||
+ | |||
+ | < | ||
+ | |||
+ | yum -y groupinstall " | ||
+ | yum -y install infinipath-psm | ||
+ | systemctl enable rdma | ||
+ | systemctl start rdma | ||
+ | |||
+ | # recipe is missing this: flavor openmpi | ||
+ | |||
+ | yum -y --installroot=/ | ||
+ | |||
+ | # remake vnfs | ||
+ | |||
+ | </ | ||
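
A couple of quick checks that the RDMA stack actually came up, sketched below; these are standard commands from the InfiniBand userspace packages, and ''ib0'' itself is only configured on page 4.

<code>
# sketch: verify the rdma service and the HCA after rebuilding/booting the node
systemctl status rdma
ibstat                # from infiniband-diags; lists the HCA and port state
ip addr show ib0      # will only exist once ib0 is configured (see page 4)
</code>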
+ | |||
+ | The following shows up when running MPI over ethernet | ||
+ | |||
+ | < | ||
+ | |||
+ | [prun] Master compute host = n29 | ||
+ | [prun] Resource manager = slurm | ||
+ | [prun] Launch cmd = mpirun ./a.out | ||
+ | -------------------------------------------------------------------------- | ||
+ | [[49978, | ||
+ | was unable to find any relevant network interfaces: | ||
+ | |||
+ | Module: OpenFabrics (openib) | ||
+ | Host: n31 | ||
+ | |||
+ | Another transport will be used instead, although this may result in | ||
+ | lower performance. | ||
+ | -------------------------------------------------------------------------- | ||
+ | |||
+ | | ||
+ | --> Process # 2 of 8 is alive. -> n29.localdomain | ||
+ | --> Process # 3 of 8 is alive. -> n29.localdomain | ||
+ | --> Process # 0 of 8 is alive. -> n29.localdomain | ||
+ | --> Process # 1 of 8 is alive. -> n29.localdomain | ||
+ | --> Process # 6 of 8 is alive. -> n31.localdomain | ||
+ | --> Process # 7 of 8 is alive. -> n31.localdomain | ||
+ | --> Process # 4 of 8 is alive. -> n31.localdomain | ||
+ | --> Process # 5 of 8 is alive. -> n31.localdomain | ||
+ | |||
+ | </ | ||
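
The warning is harmless for ethernet-only runs; if it should be silenced, Open MPI's standard MCA mechanism can exclude the openib BTL (this is not part of the recipe).

<code>
# per run
mpirun --mca btl ^openib ./a.out
# or via the environment, e.g. in a job script
export OMPI_MCA_btl=^openib
</code>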
See page 4 for the ib0 configuration, which is incomplete in the recipe.

[[cluster:
\\
**[[cluster: