cluster:214 revisions: 2022/06/23 18:28 [Amber20] and 2023/08/18 16:19 [Upgrading] (current), both by hmeij07
==== Upgrading ====
  
Figure out an upgrade process before going production (don't forget any chroot images and rebuild images).

  * **Do you actually want to upgrade OpenHPC?**
    * v2.6 deploys Warewulf 4.x (we may not want this; its node images are container based)
    * the chroot images and rebuild images are running Rocky 8
    * similar conflicts for OneAPI? (/opt/intel and /opt/ohpc/pub)
    * slurm complications?
  * **Upgraded OpenHPC and OneAPI should go on a new head node**
    * test compiler compatibility
    * test the slurm clients
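One bullet above is easy to check up front: whether both the OneAPI and the OpenHPC public install trees exist on a node, which is when their upgrades need to be coordinated. A minimal sketch using the two paths named above:

<code>
# check for the OneAPI/OpenHPC path overlap noted above; if both
# trees are present their upgrades have to be coordinated
for d in /opt/intel /opt/ohpc/pub; do
    if [ -d "$d" ]; then
        echo "$d: present"
    else
        echo "$d: absent"
    fi
done
</code>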
  
<code>
yum upgrade "*-ohpc"
yum upgrade "ohpc-base"

or

yum update --disablerepo="*" --enablerepo=oneAPI,OpenHPC
  
</code>
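Before running the repo-restricted update, it can help to preview what it would touch. A hedged sketch (the repo ids oneAPI and OpenHPC are the ones used above; confirm the exact ids with yum repolist, and note the quoted * so the shell does not glob-expand it):

<code>
# confirm the exact repository ids first
yum repolist enabled

# preview pending updates from just those repos without applying them
yum check-update --disablerepo="*" --enablerepo=oneAPI,OpenHPC
</code>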
  
**Upgrade history**

  * OS only, 30 Jun 2022 (90+ days up) - ohpc and oneapi (/opt) not upgraded
  * OS only, 18 Aug 2023 (440+ days up) - ohpc and oneapi (/opt) not upgraded
==== example modules ====
  
Amber cmake download fails with a READLINE error ... package readline-devel needs to be installed to get past that, which pulls in ncurses-c++-libs-6.1-9.20180224.el8.x86_64, ncurses-devel-6.1-9.20180224.el8.x86_64 and readline-devel-7.0-10.el8.x86_64.
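The fix above as a command, for reference (run as root on the build node; the dependency pulls are the ones listed):

<code>
# installs readline-devel, which pulls in ncurses-c++-libs and ncurses-devel
yum install -y readline-devel
</code>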
  
** Example script run.rocky for cpu or gpu runs** (for queues amber128 [n78] and test [n100-n101] for gpus; mw128 and tinymem for cpus)
  
<code>
</code>
  
** Example script run.centos for cpu or gpu runs** (queues mwgpu, exx96)
  
<code>
</code>

==== Amber22 ====

Amber22 is somehow incompatible with the CentOS/Rocky openmpi packages (yum install). Hence the latest version of openmpi was compiled and installed into $AMBERHOME. No need to set PATHs, just be sure to source amber.sh in your script. (compile notes below, mostly for my own reference)

https://ambermd.org/InstCentOS.php\\
"download a recent version of OpenMPI at open-mpi.org, untar the distribution in amber22_src/AmberTools/src, and execute in that directory the configure_openmpi script. (Do this after you have done a serial install, and have sourced the amber.sh script in the installation folder to create an AMBERHOME)"
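The quoted steps can be sketched as shell commands. This is a sketch under assumptions: the OpenMPI 4.1.5 tarball name and download URL are examples only, not the version actually used here; pick a current release from open-mpi.org:

<code>
# serial install done first, then source amber.sh so AMBERHOME is set
source /share/apps/CENTOS7/amber/amber22/amber.sh

# unpack OpenMPI inside the AmberTools source tree
cd amber22_src/AmberTools/src
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.5.tar.bz2
tar xjf openmpi-4.1.5.tar.bz2

# configure_openmpi builds OpenMPI and installs it into $AMBERHOME
./configure_openmpi
</code>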
<code>

[hmeij@n79 src]$ echo $AMBERHOME
/share/apps/CENTOS7/amber/amber22

[hmeij@n79 src]$ which mpirun mpicc
/share/apps/CENTOS7/amber/amber22/bin/mpirun
/share/apps/CENTOS7/amber/amber22/bin/mpicc

</code>

First establish a successful run with the **run.rocky** script for Amber20 (listed above). Then change the module in your script (for queues amber128 [n78] and test [n100-n101] for gpus; mw128 and tinymem for cpus).

<code>

module load amber/22

# if the module does not show up in the output of
module avail

# then treat your module cache as out of date
module --ignore_cache avail

</code>

First establish a successful run with the **run.centos** script for Amber20 (listed above, for cpus or gpus on queues mwgpu and exx96).

Then edit the script and apply the edits below. We had to use a specific compatible ''gcc/g++'' version to make this work; the hardware is getting too old.

<code>

# comment out the 2 export lines pointing to openmpi
##export PATH=/share/apps/CENTOS7/openmpi/4.0.4/bin:$PATH
##export LD_LIBRARY_PATH=/share/apps/CENTOS7/openmpi/4.0.4/lib:$LD_LIBRARY_PATH

# add gcc 6.5.0 to the front of the environment
export PATH=/share/apps/CENTOS7/gcc/6.5.0/bin:$PATH
export LD_LIBRARY_PATH=/share/apps/CENTOS7/gcc/6.5.0/lib64:$LD_LIBRARY_PATH

# edit or add the correct source line; the which and ldd lines are just for debugging
###source /usr/local/amber16/amber.sh # works on mwgpu
###source /usr/local/amber20/amber.sh # works on exx96
source /share/apps/CENTOS7/amber/amber22/amber.sh # works on mwgpu and exx96
which nvcc mpirun python
ldd `which pmemd.cuda_SPFP`

</code>
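After applying the edits, it is worth confirming that the intended compiler and libraries are actually picked up before submitting a job. A small sketch (paths as in the script above; the expected gcc version is the 6.5.0 install noted there):

<code>
# gcc 6.5.0 should now be first in PATH
which gcc g++
gcc -dumpversion

# the cuda binary should resolve all of its shared libraries;
# no output from the grep means nothing is missing
ldd `which pmemd.cuda_SPFP` | grep "not found"
</code>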
\\
**[[cluster:0|Back]]**