cluster:184


Differences

This shows you the differences between two versions of the page.

cluster:184 [2019/09/07 10:06]
hmeij07
cluster:184 [2020/01/03 08:22]
hmeij07
Line 2: Line 2:
 **[[cluster:0|Back]]**
  
 +==== Turing/Volta/Pascal ====
 +
 +  * https://graphicscardhub.com/turing-vs-volta-v-pascal/
 +
 +==== AWS deploys T4 ====
 +
 +  * https://www.hpcwire.com/2019/09/20/aws-makes-t4-gpu-instances-broadly-available-for-inferencing-graphics/
 +
 +Look at this: the smallest of the Elastic Compute Cloud (EC2) instances is **g4dn.xlarge**, yielding access to 4 vCPUs, 16 GiB memory and 1x T4 GPU. The largest is **g4dn.16xlarge**, yielding access to 64 vCPUs, 256 GiB memory and 1x T4 GPU. The smallest is priced at $0.526/hr, and running that card 24/7 for a year costs $4,607.76 ... meaning ... option #7 below with 26 GPUs would cost you a whopping $119,802. Annually! That's the low-water mark.
 +
 +The high-water mark? The largest instance is priced at $4.352/hr and would cost you nearly one million dollars per year if you matched option #7.
 +
 +Rival cloud vendor Google also offers Nvidia T4 GPUs in its cloud; Google announced global availability back in April. Google Cloud's T4 GPU availability includes three regions each in the U.S. and Asia and one each in South America and Europe. That page mentions a price of "as low as $0.29 per hour per GPU", which translates to roughly $66K per year for the 26 GPUs of option #7 below. Still. Insane.
 +
 +  * https://www.hpcwire.com/2019/04/30/google-cloud-goes-global-with-nvidia-t4-gpus/
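
The arithmetic behind those annual figures is simply hourly rate x 8,760 hours x GPU count. A minimal sketch of that calculation, using the rates quoted above (rates are point-in-time and will drift):

<code python>
# Rough annual cloud cost of matching option #7 (26 GPUs),
# using the hourly rates quoted in the articles above.
HOURS_PER_YEAR = 24 * 365  # 8,760

rates = {
    "AWS g4dn.xlarge (1x T4)":   0.526,  # $/hr, low-water mark
    "AWS g4dn.16xlarge (1x T4)": 4.352,  # $/hr, high-water mark
    "Google Cloud T4 (per GPU)": 0.29,   # $/hr, "as low as"
}

GPUS = 26  # option #7 below

for name, rate in rates.items():
    per_unit = rate * HOURS_PER_YEAR
    print(f"{name}: ${per_unit:,.2f}/yr each, ${per_unit * GPUS:,.0f}/yr for {GPUS}")
</code>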
  
 ==== 2019 GPU Expansion ====
 +
 +More focus...
 +
 +  * Vendor A: 
 +    * Option 1: 48 gpus, 12 nodes, 24U, each: two 4214 12-core cpus (silver), 96 gb ram, 1tb SSD, four NVIDIA RTX 2080 SUPER 8GB GPUs, centos7 yes, cuda yes, 3 yr, 2x gbe nics, 17.2"w x 31.5"d x 3.46"h (fits)
 +
 +With the Deep Learning Ready Docker containers, see [[cluster:187|NGC Docker Containers]].
 +
 +The SUPER model quote above is what we selected\\
 + --- //[[hmeij@wesleyan.edu|Henk]] 2020/01/03 08:22//
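
A quick sanity check we would likely run from inside one of those NGC containers on the new nodes, to confirm the cards are visible to CUDA. This assumes the NGC PyTorch image; the exact image/tag is not part of any quote:

<code python>
# GPU visibility check from inside an NGC PyTorch container
# (assumes the container ships torch built with CUDA support).
import torch

print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    # expect e.g. four RTX 2080 SUPER cards with ~8 GB each per node
    print(i, props.name, round(props.total_memory / 2**30, 1), "GB")
</code>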
 +
 +
 +Focus on RTX2080 model...
 +
 +  * Vendor A: ​
 +    * Option 1: 48 gpus, 12 nodes, 24U, each: two 4116 12-core cpus (silver), 96 gb ram, 1tb SSD, four rtx2080 gpus (8gb),  centos7 yes, cuda yes, 3 yr, nics?, wxdxh"?
 +    * Option 2: 40 gpus, 10 nodes, 20U, each: two 4116 12-core cpus (silver), 96 gb ram, 1tb SSD, four rtx2080ti gpus (11gb),  centos7 yes, cuda yes, 3 yr, nics?, wxdxh"?
 +    * A1+A2 installed, configured and tested: NGC Docker containers Deep Learning Software Stack: NVIDIA DIGITS, TensorFlow, Caffe, NVIDIA CUDA, PyTorch, RapidsAI, Portainer ... NGC Catalog can be found at https://ngc.nvidia.com/catalog/all?orderBy=modifiedDESC&query=&quickFilter=all&filters=
 +
 +  * Vendor B:​
 +    * Option 1: 36 gpus, 9 nodes, 18U, each: two 4214 12-core cpus (silver), 96 gb ram, 2x960gb SATA, four rtx2080tifsta gpus (11gb),  centos7 no, cuda no, 3 yr, 2xgbe nics, wxdxh"?
 +
 +  * Vendor C:​
 +    * Option 1: 40 gpus, 10 nodes, 40U, each: two 4214 12-core cpus (silver), 96 gb ram, 240 gb SSD, four rtx2080ti gpus (11gb),  centos7 yes, cuda yes, 3 yr, 2xgbe nics, 18.2x26.5x7"
 +    * Option 2: 48 gpus, 12 nodes, 48U, each: two 4214 12-core cpus (silver), 96 gb ram, 240 gb SSD, four rtx2080s gpus (8gb),  centos7 yes, cuda yes, 3 yr, 2xgbe nics, 18.2x26.5x7"
 +
 +  * Vendor D:​
 +    * Option 1: 48 gpus, 12 nodes, 12U, each: two 4214 12-core cpus (silver), 64 gb ram, 2x480gb SATA, four rtx2080s gpus (8gb),  centos7 yes, cuda yes, 3 yr, 2xgbe nics, 17.2x35.2x1.7"
  
 Ok, we try this year. Here are some informational pages.
  
 +  * [[cluster:168|2018 GPU Expansion]] page
   * [[cluster:175|P100 vs GTX & K20]] page
   * [[cluster:181|2019 GPU Models]] page
   * [[cluster:182|P100 vs RTX 6000 & T4]] page
  
 +
 +  * All GPU cards are able to do single and double precision (fp32/fp64), "mixed mode"
 +  * Tensor cores are 4 single precision cores able to return double precision results
 +  * GPU card performance on double precision depends on the quantity of tensor cores
 +  * CPU model/type determines double-precision flops per cycle (dpfp/cycle); silver 16, gold 32
 +
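The "cpu dpfp" row in the table below follows directly from that last bullet: physical cores x clock x double-precision flops per cycle. A minimal sketch of that arithmetic, using two options from the table:

<code python>
# Theoretical CPU double-precision peak: cores * GHz * dp flops/cycle.
# Silver CPUs (4208/4214) counted at 16 flops/cycle, gold (5115) at 32,
# per the bullet above.
def cpu_dpfp_tflops(cores, ghz, flops_per_cycle):
    return cores * ghz * flops_per_cycle / 1000.0   # Gflops -> Tflops

print(round(cpu_dpfp_tflops(96, 2.1, 16), 1))    # option #1, silver: ~3.2
print(round(cpu_dpfp_tflops(180, 2.4, 32), 1))   # option #3, gold:  ~13.8
</code>
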
 +Criteria for selection (points of discussion raised at the last meeting, 08/27/2019):
 +  - Continue with the current work load, just more of it (RTX2080ti/RTX4000)
 +  - Do the above, and enable a beginner-level intro into Deep Learning (T4)
 +  - Do the above, but invest for future expansion into complex Deep Learning (RTX6000)
 + 
 +//**Pick your option and put it in the shopping cart**//  8-)\\ 
 +Table best read from the bottom up to assess differences. 
 + 
 +^  Options  ^^^^^^^^^^^  Notes  ^ 
 +^    ^  #1  ^  #2  ^  #3  ^  #4  ^  #5  ^  #6  ^  #7  ^  #8  ^  #9  ^  #10  ^    ^
 +^    ^  rtx2080ti  ^  rtx6000  ^  rtx2080ti  ^  t4  ^  rtx6000  ^  rtx4000  ^  t4  ^  rtx6000  ^  t4  ^  rtx4000  ^    ^ 
 +|  Nodes  |  6  |  4  |  9  |  7  |  5  |  17  |  13  |  8  |    |  6  | total|
 +|  Cpus  |  12  |  8  |  18  |  14  |  10  |  34  |  26  |  16  |  16  |  12  | total|
 +|  Cores  |  96  |  64  |  180  |  140  |  100  |  272  |  208  |  192  |  128  |  72  | physical|
 +|  Tflops  |  3.2  |  2.2  |  13.8  |  10.7  |  7.7  |  9.2  |  7  |  6.8  |  4.3  |  2.5  | cpu dpfp|
 +|  Gpus  |  48  |  16  |  36  |  28  |  20  |  34  |  26  |  16  |  28  |  60  | total|
 +|  Cores  |  209  |  74  |  157  |  72  |  92  |  75  |  67  |  74  |  72  |  138  | cuda K|
 +|  Cores  |  26  |  9  |  20  |  8.9  |  11.5  |  10  |  8  |  9  |  9  |  17  | tensor K|
 +|  Tflops  |  21  |  13  |  16  |  7  |  10  |  7.5  |  6.5  |  13  |  7  |  13  | gpu dpfp|
 +|  Tflops  |  682  |  261  |  511  |  227  |  326  |  241  |  211  |  261  |  227  |  426  | gpu spfp|
 +|  $/TFlop  |  138  |  348  |  188  |  423  |  295  |  402  |  466  |  361  |  433  |  232  | gpu dp+sp| 
 +^ Per Node  ^^^^^^^^^^^^ 
 +|  Chassis  |  2U(12)  |  2U(8)  |  2U(18)  |  2U(14)  |  2U(10)  |  1U(17)  |  1U(13)  |  4U(32)  |   1U(8) |  4U(24)  | rails?| 
 +|  CPU  |  2  |  2  |  2  |  2  |  2  |  2  |  2  |  2  |  2  |  2  | total| 
 +|    |  4208  |  4208  |  5115  |  5115  |  5115  |  4208  |  4208  |  4214  |  4208  |  4208  | model| 
 +|    |  silver  |  silver  |  gold  |  gold  |  gold  |  silver  |  silver  |  silver  |  silver  |  silver  | type| 
 +|    |  2x8  |  2x8  |  2x10  |  2x10  |  2x10  |  2x8  |  2x8  |  2x12  |  2x8  |  2x8  | physical| 
 +|    |  2.1  |  2.1  |  2.4  |  2.4  |  2.4  |  2.1  |  2.1  |  2.2  |  2.1  |  2.1  | Ghz| 
 +|    |  85  |  85  |  85  |  85  |  85  |  85  |  85  |  85  |  85  |  85  | Watts|
 +|  DDR4  |  192  |  192  |  192  |  192  |  192  |  192  |  192  |  192  |  192  |  192  | GB mem| 
 +|    |  2933  |  2933  |  2666  |  2666  |  2666  |  2666  |  2666  |  2933  |  2933  |  2666  | Mhz|
 +|  Drives  |  2x960  |  2x960  |  960  |  960  |  960  |  240  |  240  |  240  |  240  |  240  | GB| 
 +|    |  2.5  |  2.5  |  2.5  |  2.5  |  2.5  |  2.5  |  2.5  |  2.5  |  2.5  |  2.5  | SSD/HDD| 
 +|  GPU  |  8  |  4  |  4  |  4  |  4  |  2  |  2  |  2  |  4  |  10  | total| 
 +|    |  RTX  |  RTX  |  RTX  |  T  |  RTX  |  RTX  |  T  |  RTX  |  T  |  RTX  | arch| 
 +|    |  2080ti  |  6000  |  2080ti  |  4  |  6000  |  4000  |  4  |  6000  |  4  |  4000  | model| 
 +|    |  11  |  24  |  11  |  16  |  24  |  8  |  16  |  24  |  16  |  8  | GB mem| 
 +|    |  250  |  295  |  250  |  70  |  295  |  160  |  70  |  295  |  70  |  160  | Watts| 
 +|  Power  |  2200  |  1600  |  1600  |  1600  |  1600  |  1600  |  1600  |  2200  |  1600  |  2000  | Watts| 
 +|    |  1+1  |  1+1  |  1+1  |  1+1  |  1+1  |  1+1  |  1+1  |  1+1  |  1+1  |  2+2  | redundant| 
 +|  CentOS7  |  n+n  |  n+n  |  y+?  |  y+?  |  y+?  |  y+y  |  y+y  |  y+y  |  n+n  |  n+n  | +cuda?| 
 +|  Nics  |  2  |  2  |  2  |  2  |  2  |  2  |  2  |  2  |  2  |  2  | gigabit| 
 +|  Warranty  |  3  |  3  |  3  |  3  |  3  |  3  |  3  |  3  |  3  |  3  | standard| 
 +|    |  -3  |  -6  |  -1  |  -1  |  -5.5  |  0  |  +1.6  |  0  |  +1.5  |  -1  |  Δ  |
 + 
 +  * #1/#2 All GPU warranty requests will be filled by GPU maker. 
 +  * #7 up to 4 GPUs per node. Filling rack leaving 1U open between nodes, count=15 
 +  * #8 fills intended rack with AC in rack. GPU Tower/4U rack mount. 
 +  * #8 includes NVLink connector (bridge kit). Up to 4 GPUs per node. 
 +  * Tariffs may affect all quotes when executed. 
 +  * S&H included (or estimated) 
 +  * More than 4-6 nodes would be lots of work if Warewulf/CentOS7 imaging is not working. 
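
The "gpu spfp" row aggregates the same way: GPU count times the card's single-precision rating. A rough sketch; the per-card FP32 numbers here are approximate published boost figures (an assumption, not taken from the vendor quotes):

<code python>
# Aggregate single-precision GPU peak per option: cards * per-card FP32 TF.
# Per-card ratings are approximate published figures (assumption).
FP32_TF = {"rtx2080ti": 14.2, "rtx6000": 16.3, "t4": 8.1, "rtx4000": 7.1}

options = {1: ("rtx2080ti", 48), 4: ("t4", 28), 10: ("rtx4000", 60)}  # from the table

for opt, (card, count) in options.items():
    print(f"#{opt}: {count * FP32_TF[card]:.0f} Tflops gpu spfp")
# -> #1: 682, #4: 227, #10: 426, in line with the table row above
</code>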
 + 
 +On the question of active versus passive cooling: 
  
 **Exxactcorp**: For the GPU discussion, 2 to 4 GPUs per node is fine. T4 GPU is 100% fine, and the passive heatsink is better not worse. The system needs to be one that supports passive Tesla cards and the chassis fans would simply ramp to cool the card properly, as in any passive tesla situation. Titan RTX GPUs is what you should be worried about, and I would be hesitant to quote them. They are *NOT GOOD* for multi GPU systems.
Line 37: Line 134:
 We are embarking on expanding our GPU compute capacity. To that end we tested some of the new GPU models. During a recent users group meeting the desire was also expressed to enable our option to enter the deep learning (DL) field in the near future. We do not anticipate running Gaussian on these GPUs, so we are flexible on the mixed-precision mode models. The list of software, with rough usage estimates and precision modes, is: amber (single, 25%), lammps (mixed, 20%), gromacs (mixed, 50%) and python bio-sequencing models (mixed or double, < 5%).
  
 +
 We anticipate the best solution to be 2-4 GPUs per node and not an ultra dense setup. Job usage pattern is mostly one job per GPU with exclusive access to the allocated GPU, albeit that pattern may change based on GPU memory footprint. We were zooming in on the RTX 6000 or TITAN GPU models but are open to suggestions. The T4 looks intriguing but the passive heat sink bothers us (does that work under near constant 100% utilization rates?).
  
 +
 We do not have a proven imaging functionality with CentOS7, Warewulf and UEFI booting so all nodes should be imaged. Software to install is latest versions of amber (Wes to provide proof of purchase), lammps (with packages yes-rigid, yes-gpu, yes-colloid, yes-class2, yes-kspace, yes-misc, yes-molecule), gromacs (with -DGMX_BUILD_OWN_FFTW=ON). All MPI enabled with OpenMPI. Latest Nvidia CUDA drivers. Some details if you need them at this web page: https://dokuwiki.wesleyan.edu/doku.php?id=cluster:172
  
 +
 +DL software list: PyTorch, Caffe, TensorFlow.\\
 +Wes to install and configure scheduler client and queue.\\
 +Wes to provide two gigabit ethernet switches.\\
  
 +
 Compute nodes should have 2 ethernet ports, single power ok but prefer redundant, dual CPUs with an optimized memory configuration around 96-128 GB. Starting IP addresses: nic1 192.168.102.89, nic2 10.10.102.89, ipmi 192.168.103.89, netmask 255.255.0.0 for all.
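
To keep node addressing unambiguous for whichever vendor wins, a small sketch that enumerates per-node addresses from the starting values above; the node count and node naming are placeholders, not part of the spec:

<code python>
# Enumerate per-node IPs from the starting addresses above
# (netmask 255.255.0.0 for all). Node count/names are illustrative.
from ipaddress import IPv4Address

start = {"nic1": "192.168.102.89", "nic2": "10.10.102.89", "ipmi": "192.168.103.89"}
nodes = 12   # e.g. one of the 12-node quotes

for offset in range(nodes):
    addrs = {k: str(IPv4Address(v) + offset) for k, v in start.items()}
    print(f"node{offset + 1:02d}", addrs)
</code>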
  
 +
 Wes will provide a 208V powered rack with 7K BTU cooling AC. Standard 42U rack (rails at 30", up to 37" usable). We also have plenty of shelves to simply hold the servers if needed. Rack contains two PDUs (24A) supplying 2x30 C13 outlets. [[https://www.rackmountsolutions.net/rackmount-solutions-cruxial-cool-42u-7kbtu-air-conditioned-server-cabinet/|External Link]]
  