cluster:209 [2021/11/04 19:40] hmeij07 [Astropy]
cluster:209 [2022/04/08 13:47] hmeij07 [OHPC]
  * https://
  * https://

**NOTE** This eats up a lot of disk space with multiple compilers and toolchains, as you will see, hence it is installed in /
<code>
# example easybuild installation of software
eb Bowtie2-2.4.2-GCC-9.3.0.eb --dry-run --robot --prefix=/sanscratch/
# do not forget prefix else it goes to $HOME/
# once done
module use /sanscratch/
# use '
[hmeij@greentail52 ~]$ which bowtie2
/sanscratch/
</code>
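The `module use` step above simply prepends the EasyBuild module tree to `MODULEPATH`; a minimal sketch of the same effect with plain shell, using a hypothetical prefix `/tmp/eb-demo` in place of the real `--prefix` value:

```shell
# Hypothetical EasyBuild prefix; substitute whatever you passed to --prefix.
EB_PREFIX=/tmp/eb-demo
mkdir -p "$EB_PREFIX/modules/all"

# 'module use DIR' is equivalent to prepending DIR to MODULEPATH;
# the ${VAR:+...} expansion avoids a trailing colon when MODULEPATH is unset.
export MODULEPATH="$EB_PREFIX/modules/all${MODULEPATH:+:$MODULEPATH}"
echo "$MODULEPATH"
```

After this, `module avail` would list everything EasyBuild installed under that prefix.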
<code>
# this required libibverbs and libibverbs-devel so not sure it will run on compute nodes
eb astropy-4.2.1-intelcuda-2020b.eb
# then ran into icc license problems, 2020b, check out license failed
</code>
<code>
# hint add eula flag when trying i/intel toolchain
eb intel-2021a.eb \
  --prefix=/sanscratch/ \
  --accept-eula-for=Intel-oneAPI,
# built intel-compilers/
</code>
<code>
12) hwloc/
# in our environment
export LD_LIBRARY_PATH=/
[hmeij@greentail52 ~]$ which python nvcc mpirun
</code>
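The `export LD_LIBRARY_PATH=` line is truncated in the page; a sketch of the usual pattern, with a hypothetical CUDA library directory standing in for the real path under the prefix:

```shell
# Hypothetical path; on the cluster this would point at the CUDA (or other
# toolchain) lib64 directory under the EasyBuild prefix.
LIBDIR=/tmp/eb-demo/software/CUDA/11.1.1/lib64
mkdir -p "$LIBDIR"

# Prepend rather than overwrite, so system libraries remain resolvable.
export LD_LIBRARY_PATH="$LIBDIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```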
Helios GPU tutorial\\
https://
<code>
# pycuda not working, why would fosscuda?
ImportError:
undefined symbol: cuDevicePrimaryCtxRelease_v2
</code>
https://
===== Emcee =====
Flexible but weird.

===== PyCUDA =====

<code>
[hmeij@greentail52 ~]$ module load PyCUDA/
[hmeij@greentail52 ~]$ module list

Currently Loaded Modules:
  1) GCCcore/
  2) zlib/
  3) binutils/
  4) GCC/
  5) CUDAcore/
  6) CUDA/
  7) gcccuda/
  8) numactl/
  9) XZ/
 10) libxml2/
 11) libpciaccess/
 12) hwloc/
 13) libevent/
 14) Check/
 15) GDRCopy/
 16) UCX/
 17) libfabric/
 18) PMIx/
 19) OpenMPI/

# same error as pycuda inside of the astropy module
ImportError:
</code>
===== Another way =====
...

===== OHPC =====

Load the module, search for the application, and do a dry run.

Then remove the dry-run flag.

<code>
which eb

eb --search PyCUDA
find / -name PyCUDA*

# dry-run
eb \
  / \
  --dry-run --robot --prefix=/
</code>
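The search, dry-run, install loop above can be made less error-prone by echoing each command before it runs; a small sketch of that pattern (the easyconfig name is hypothetical, and `eb` itself is not actually invoked here):

```shell
# Print the command line, and execute it only when DRYRUN is unset/empty.
run() {
    echo "+ $*"
    [ -n "${DRYRUN:-}" ] || "$@"
}

# With DRYRUN set, the eb invocation is shown but not executed,
# mirroring eb's own --dry-run behaviour at the shell level.
DRYRUN=1 run eb PyCUDA-2020.1-fosscuda-2020b.eb --robot --prefix=/tmp/eb-demo
```

Unset `DRYRUN` and the same `run` call executes the command for real.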