\\
**[[cluster:0|Back]]**

Slurm links:

  * https://slurm.schedmd.com/SLUG19/NVIDIA_Containers.pdf
  * https://devblogs.nvidia.com/how-to-run-ngc-deep-learning-containers-with-singularity/
  * https://devblogs.nvidia.com/automating-downloads-ngc-container-replicator/
  * https://devblogs.nvidia.com/docker-compatibility-singularity-hpc/

Other useful links:

  * https://www.nvidia.com/en-us/gpu-cloud/containers/
  * https://docs.nvidia.com/ngc/ngc-user-guide/index.html
    * scheduler wrapper, inside container: NV_GPU=2,3 nvidia-docker run ...
    * (container sees host GPUs 2,3 as container GPUs 0,1)
  * https://ngc.nvidia.com/catalog/containers/
  * https://blog.exxactcorp.com/installing-using-docker-nv-docker-centos-7/
  * https://github.com/nvidia/nvidia-container-runtime#nvidia_visible_devices
    * NVIDIA_VISIBLE_DEVICES (container runtime) vs CUDA_VISIBLE_DEVICES (application level)?
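
The renumbering noted in the list above can be sketched in a few lines of shell (pure illustration, no docker or GPU involved; it just computes the mapping): whichever host IDs get passed via NV_GPU, the container enumerates them starting from 0.

```shell
# illustrate the NV_GPU renumbering: host GPUs 2,3 become container GPUs 0,1
host_gpus="2,3"
n=$(echo "$host_gpus" | awk -F, '{print NF}')   # how many GPUs were passed in
container_gpus=$(seq -s, 0 $((n-1)))            # container always counts from 0
echo "host GPUs $host_gpus appear in the container as $container_gpus"
```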
  
  
==== NGC Docker Containers ====
  
Trying to understand how to leverage GPU ready applications on the Nvidia NGC web site (Nvidia GPU Cloud): download docker containers, build your own on-premise catalog, and run GPU ready software on compute nodes with docker containers. Can't wrap my head around the problem of how to integrate containers with our scheduler yet.
  
  * https://blog.exxactcorp.com/installing-using-docker-nv-docker-centos-7/
<code>
  
# get docker on centos
curl -fsSL https://get.docker.com/ -o get-docker.sh
sh get-docker.sh

# systemctl
systemctl enable docker
systemctl start docker

# dockeruser, usermod change
adduser dockeruser
usermod -aG docker dockeruser

# get nvidia-docker (v1 rpm, or v2 source tarball)
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm
wget https://github.com/NVIDIA/nvidia-docker/archive/v2.2.2.tar.gz

rpm -i /tmp/nvidia-docker*.rpm
# or unpack the v2 tarball and: make nvidia-docker

# systemctl
systemctl enable nvidia-docker
systemctl start nvidia-docker
  
  
# or

docker pull nvidia/cuda
  
  
NGC Deep Learning Ready Docker Containers:

NVIDIA DIGITS - nvcr.io/nvidia/digits
TensorFlow - nvcr.io/nvidia/tensorflow
  
# in the catalog you can also find

docker pull nvcr.io/hpc/gromacs:2018.2
docker pull nvcr.io/hpc/lammps:24Oct2018
docker pull nvcr.io/hpc/namd:2.13-multinode
docker pull nvcr.io/partners/matlab:r2019b

# not all at the latest versions as you can see
# and amber would have to be custom built on top of nvidia/cuda
  
  
# DIGITS example
# if you passed host GPU ID 2,3 the container would still see the GPUs as ID 0,1
NV_GPU=0,1 nvidia-docker run --name digits -d -p 5000:5000 nvidia/digits
  
  
</code>
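
The amber comment in the block above implies a custom image on top of nvidia/cuda. A minimal sketch of what that might look like, assuming a CentOS 7 base and CUDA 10.1 (the base tag and build steps are placeholders, not a tested recipe; amber itself is licensed source you'd have to add):

```shell
# write a skeleton Dockerfile on top of the CUDA platform layer
# (base image tag is an assumption; fill in the amber build steps for your site)
cat > Dockerfile <<'EOF'
FROM nvidia/cuda:10.1-devel-centos7
# install compilers/deps, copy the amber source, configure and make here
EOF
# then build on the docker host (shown, not executed here):
echo "docker build -t local/amber:latest ."
```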

There are some other issues...

  * inside the container the user-invoked application runs as root so copying files back and forth is a problem
  * file systems, home directory and scratch spaces need to be mounted inside the container
  * GPUs need to be reserved via the scheduler on a host then made available to the container (see above)

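A rough wrapper sketch touching all three issues above — run as the submitting user, mount home and scratch, pass the scheduler-reserved GPUs. Every name here is an assumption: SCHEDULER_GPUS stands in for whatever GPU list our scheduler would export, the mount points are examples, and the trailing command is a placeholder. It only builds the command string; it does not invoke docker.

```shell
# hypothetical job wrapper; composes the command, does not run it
GPUS="${SCHEDULER_GPUS:-0,1}"     # GPU IDs reserved by the scheduler (assumed variable)
CMD="NV_GPU=$GPUS nvidia-docker run --rm \
 -u $(id -u):$(id -g) \
 -v $HOME:$HOME -v /scratch:/scratch \
 nvcr.io/nvidia/tensorflow python train.py"
echo "$CMD"                       # on a GPU node one would eval/exec this
```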
Some notes from https://docs.nvidia.com/ngc/ngc-user-guide/index.html

<code>

# NGC containers are hosted in a repository called nvcr.io
# A Docker container is the running instance of a Docker image.

# All NGC container images are based on the CUDA platform layer (nvcr.io/nvidia/cuda)

# mount host directory to container location

-v $HOME:/tmp/$USER

# pull images

docker pull nvcr.io/hpc/namd:2.1
docker images

# detailed information of container

/workspace/README.md

# specifying a user

-u $(id -u):$(id -g)

# allocate GPUs

NV_GPU=0,1 nvidia-docker run ...

# custom build images ...
# looks complex based on Dockerfile config file commands
# see link

</code>
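
Putting those notes together into a single invocation (sketch only: the image tag and the $HOME mount come from the notes above, and reading /workspace/README.md stands in for the real workload):

```shell
# compose one run command from the NGC notes: user mapping, home mount, GPU list
IMG="nvcr.io/hpc/namd:2.1"
RUN="NV_GPU=0,1 nvidia-docker run --rm \
 -u $(id -u):$(id -g) \
 -v $HOME:/tmp/$USER \
 $IMG cat /workspace/README.md"
echo "$RUN"                       # run it for real only on a node with docker + GPUs
```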
 +
\\
**[[cluster:0|Back]]**
  
cluster/187.1576002394.txt.gz · Last modified: 2019/12/10 13:26 by hmeij07