
Turing/Volta/Pascal

AWS deploys T4

Look at this: the smallest of the T4-equipped Elastic Compute Cloud (EC2) instances is g4dn.xlarge, giving access to 4 vCPUs, 16 GiB memory and 1x T4 GPU. The largest is g4dn.16xlarge, giving access to 64 vCPUs, 256 GiB memory and 1x T4 GPU. The smallest is priced at $0.526/hr, and running that instance 24/7 for a year costs $4,607.76 … meaning … option #7 below, with 26 GPUs, would cost you a whopping $119,802. Annually! That's the low-water mark.

The high-water mark? The largest instance is priced at $4.352/hr and would cost you nearly one million dollars per year to match option #7.

Rival cloud vendor Google also offers Nvidia T4 GPUs in its cloud; Google announced global availability back in April. Google Cloud’s T4 GPU availability includes three regions each in the U.S. and Asia and one each in South America and Europe. That page mentions a price of “as low as $0.29 per hour per GPU”, which translates to roughly $66K per year to match option #7 below. Still. Insane.
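For the record, here is the back-of-the-envelope math behind those numbers in one place (a minimal sketch; the hourly rates are the on-demand prices quoted above and the 26-GPU count is option #7 from the table below):

```python
# Back-of-the-envelope cloud costs, assuming 24/7 on-demand usage all year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_cost(rate_per_hour, instances=1):
    """Cost of running `instances` instances nonstop for one year."""
    return rate_per_hour * HOURS_PER_YEAR * instances

print(f"g4dn.xlarge, 1 instance:   ${annual_cost(0.526):,.2f}")      # ~$4,608
print(f"g4dn.xlarge x 26 (opt #7): ${annual_cost(0.526, 26):,.2f}")  # ~$119,802
print(f"g4dn.16xlarge x 26:        ${annual_cost(4.352, 26):,.2f}")  # ~$991,212
print(f"GCP T4 @ $0.29/hr x 26:    ${annual_cost(0.29, 26):,.2f}")   # ~$66,050
```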

2019 GPU Expansion

More focus…

With the Deep Learning Ready Docker containers… NGC Docker Containers.

The SUPER model quote above is what we selected
Henk 2020/01/03 08:22

Focus on RTX2080 model…

https://ngc.nvidia.com/catalog/all?orderBy=modifiedDESC&query=&quickFilter=all&filters=

OK, we'll try this year. Here are some informational pages.

Criteria for selection (points of discussion raised at the last meeting, 08/27/2019):

  1. Continue with the current workload, just more of it (RTX2080ti/RTX4000)
  2. Do the above, and enable a beginner-level intro into Deep Learning (T4)
  3. Do the above, but invest for future expansion into complex Deep Learning (RTX6000)

Pick your option and put it in the shopping cart 8-)
The table is best read from the bottom up to assess differences; a quick sanity check of the aggregate GPU numbers follows it.

| Options  | #1        | #2      | #3        | #4   | #5      | #6      | #7   | #8      | #9   | #10     | Notes      |
| GPU      | rtx2080ti | rtx6000 | rtx2080ti | t4   | rtx6000 | rtx4000 | t4   | rtx6000 | t4   | rtx4000 | model      |
| Nodes    | 6         | 4       | 9         | 7    | 5       | 17      | 13   | 8       | 8    | 6       | total      |
| Cpus     | 12        | 8       | 18        | 14   | 10      | 34      | 26   | 16      | 16   | 12      | total      |
| Cores    | 96        | 64      | 180       | 140  | 100     | 272     | 208  | 192     | 128  | 72      | physical   |
| Tflops   | 3.2       | 2.2     | 13.8      | 10.7 | 7.7     | 9.2     | 7    | 6.8     | 4.3  | 2.5     | cpu dpfp   |
| Gpus     | 48        | 16      | 36        | 28   | 20      | 34      | 26   | 16      | 28   | 60      | total      |
| Cores    | 209       | 74      | 157       | 72   | 92      | 75      | 67   | 74      | 72   | 138     | cuda K     |
| Cores    | 26        | 9       | 20        | 8.9  | 11.5    | 10      | 8    | 9       | 9    | 17      | tensor K   |
| Tflops   | 21        | 13      | 16        | 7    | 10      | 7.5     | 6.5  | 13      | 7    | 13      | gpu dpfp   |
| Tflops   | 682       | 261     | 511       | 227  | 326     | 241     | 211  | 261     | 227  | 426     | gpu spfp   |
| $/TFlop  | 138       | 348     | 188       | 423  | 295     | 402     | 466  | 361     | 433  | 232     | gpu dp+sp  |

Per Node
| Chassis  | 2U(12) | 2U(8)  | 2U(18) | 2U(14) | 2U(10) | 1U(17) | 1U(13) | 4U(32) | 1U(8)  | 4U(24) | rails?    |
| CPU      | 2      | 2      | 2      | 2      | 2      | 2      | 2      | 2      | 2      | 2      | total     |
|          | 4208   | 4208   | 5115   | 5115   | 5115   | 4208   | 4208   | 4214   | 4208   | 4208   | model     |
|          | silver | silver | gold   | gold   | gold   | silver | silver | silver | silver | silver | type      |
|          | 2×8    | 2×8    | 2×10   | 2×10   | 2×10   | 2×8    | 2×8    | 2×12   | 2×8    | 2×8    | physical  |
|          | 2.1    | 2.1    | 2.4    | 2.4    | 2.4    | 2.1    | 2.1    | 2.2    | 2.1    | 2.1    | GHz       |
|          | 85     | 85     | 85     | 85     | 85     | 85     | 85     | 85     | 85     | 85     | Watts     |
| DDR4     | 192    | 192    | 192    | 192    | 192    | 192    | 192    | 192    | 192    | 192    | GB mem    |
|          | 2933   | 2933   | 2266   | 2666   | 2666   | 2666   | 2666   | 2933   | 2933   | 2666   | MHz       |
| Drives   | 2×960  | 2×960  | 960    | 960    | 960    | 240    | 240    | 240    | 240    | 240    | GB        |
|          | 2.5    | 2.5    | 2.5    | 2.5    | 2.5    | 2.5    | 2.5    | 2.5    | 2.5    | 2.5    | SSD/HDD   |
| GPU      | 8      | 4      | 4      | 4      | 4      | 2      | 2      | 2      | 4      | 10     | total     |
|          | RTX    | RTX    | RTX    | T      | RTX    | RTX    | T      | RTX    | T      | RTX    | arch      |
|          | 2080ti | 6000   | 2080ti | 4      | 6000   | 4000   | 4      | 6000   | 4      | 4000   | model     |
|          | 11     | 24     | 11     | 16     | 24     | 8      | 16     | 24     | 16     | 8      | GB mem    |
|          | 250    | 295    | 250    | 70     | 295    | 160    | 70     | 295    | 70     | 160    | Watts     |
| Power    | 2200   | 1600   | 1600   | 1600   | 1600   | 1600   | 1600   | 2200   | 1600   | 2000   | Watts     |
|          | 1+1    | 1+1    | 1+1    | 1+1    | 1+1    | 1+1    | 1+1    | 1+1    | 1+1    | 2+2    | redundant |
| CentOS7  | n+n    | n+n    | y+?    | y+?    | y+?    | y+y    | y+y    | y+y    | n+n    | n+n    | +cuda?    |
| Nics     | 2      | 2      | 2      | 2      | 2      | 2      | 2      | 2      | 2      | 2      | gigabit   |
| Warranty | 3      | 3      | 3      | 3      | 3      | 3      | 3      | 3      | 3      | 3      | standard  |
|          | -3     | -6     | -1     | -1     | -5.5   | 0      | +1.6   | 0      | +1.5   | -1     | Δ         |
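As a quick sanity check, the "gpu spfp" row above appears to be total GPU count times per-card peak FP32 throughput. A minimal sketch, where the per-card TFLOPS values are assumptions back-solved from the table (they roughly match Nvidia's public spec sheets):

```python
# Sanity check of the "gpu spfp" row: total GPUs x per-card peak FP32 TFLOPS.
# Per-card numbers are assumptions, not figures from the vendor quotes.
SPFP_TFLOPS = {"rtx2080ti": 14.2, "rtx6000": 16.3, "t4": 8.1, "rtx4000": 7.1}

options = {   # option number: (GPU model, total GPUs from the table)
    1: ("rtx2080ti", 48), 2: ("rtx6000", 16), 3: ("rtx2080ti", 36),
    4: ("t4", 28),        5: ("rtx6000", 20), 6: ("rtx4000", 34),
    7: ("t4", 26),        8: ("rtx6000", 16), 9: ("t4", 28),
    10: ("rtx4000", 60),
}

for opt, (model, gpus) in options.items():
    total = gpus * SPFP_TFLOPS[model]
    print(f"#{opt}: {gpus} x {model} = {total:.0f} TFLOPS FP32")
```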

On the question of active versus passive cooling:

Exxactcorp: For the GPU discussion, 2 to 4 GPUs per node is fine. The T4 GPU is 100% fine, and the passive heatsink is better, not worse. The system needs to be one that supports passive Tesla cards, and the chassis fans will simply ramp up to cool the card properly, as in any passive Tesla situation. Titan RTX GPUs are what you should be worried about, and I would be hesitant to quote them. They are *NOT GOOD* for multi-GPU systems.

Microway: For this mix of workloads, two to four GPUs per node is a good balance. Passive GPUs are *better* for HPC usage. All Tesla GPUs for the last 5? years have been passive. I'd be happy to help allay any concerns you may have there. The short version is that the GPU and the server platform communicate about the GPU's temperature. The server adjusts fan speeds appropriately and is able to move far more air than a built-in fan ever would.

ConRes: Regarding the question about active vs. passive cooling on GPUs, the T4, V100, and other passively cooled GPUs are intended for 100% utilization and can actually offer better cooling and higher density in a system than active GPU models.
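If the passive-cooling question still nags once a test card is in hand, a minimal sketch along these lines (assuming nvidia-smi is on the PATH) can log temperature, utilization and power draw while a job holds the card at 100%:

```python
#!/usr/bin/env python3
# Minimal sketch: poll nvidia-smi and log GPU temperature/utilization/power
# while a benchmark runs, to see how a passive T4 behaves at sustained load.
import subprocess
import time

QUERY = "timestamp,name,temperature.gpu,utilization.gpu,power.draw"

while True:   # Ctrl-C to stop
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip(), flush=True)
    time.sleep(30)   # one sample every 30 seconds
```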

Summary

Let's amend this request [to vendors]. I realize that with 100k we might only be talking about 4-6 boxes. I can handle all the software. It would be nice if all nodes came with CentOS7+CUDA installed. Not required, but let me know.
Henk 2019/09/03 08:55

We are embarking on expanding our GPU compute capacity. To that end we tested some of the new GPU models. During a recent users group meeting the desire was also expressed to enable our option to enter the deep learning (DL) field in the near future. We do not anticipate running Gaussian on these GPUs, so we are flexible regarding the mixed-precision mode models. The list of software, with rough usage estimates and precision modes, is: amber (single, 25%), lammps (mixed, 20%), gromacs (mixed, 50%) and python bio-sequencing models (mixed or double, < 5%).

We anticipate the best solution to be 2-4 GPUs per node, not an ultra-dense setup. The job usage pattern is mostly one job per GPU with exclusive access to the allocated GPU, although that pattern may change based on GPU memory footprint. We were zooming in on the RTX 6000 or TITAN GPU models but are open to suggestions. The T4 looks intriguing, but the passive heat sink bothers us (does that work under near-constant 100% utilization rates?).

We do not have proven imaging functionality with CentOS7, Warewulf and UEFI booting, so all nodes should be imaged. Software to install is the latest versions of amber (Wes to provide proof of purchase), lammps (with packages yes-rigid, yes-gpu, yes-colloid, yes-class2, yes-kspace, yes-misc, yes-molecule) and gromacs (with -DGMX_BUILD_OWN_FFTW=ON), all MPI-enabled with OpenMPI, plus the latest Nvidia CUDA drivers. Some details, if you need them, are at this web page: https://dokuwiki.wesleyan.edu/doku.php?id=cluster:172

DL software list: Pytorch, Caffe, Tensorflow.
Wes to install and configure scheduler client and queue.
Wes to provide two gigabit ethernet switches.

Compute nodes should have 2 ethernet ports; single power is OK but redundant is preferred; dual CPUs with an optimized memory configuration around 96-128 GB. Start of IP address ranges: nic1 192.168.102.89, nic2 10.10.102.89, ipmi 192.168.103.89, netmask 255.255.0.0 for all.

Wes will provide a 208V-powered rack with 7K BTU of AC cooling. Standard 42U rack (rails at 30″, up to 37″ usable). We also have plenty of shelves to simply hold the servers if needed. The rack contains two PDUs (24A) supplying 2×30 C13 outlets.
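For planning purposes, a minimal sketch of how the per-node addressing would lay out from the start addresses above, assuming consecutive assignment (the node count is a placeholder):

```python
# Sketch: enumerate per-node addresses from the start IPs above,
# assuming consecutive assignment (netmask 255.255.0.0 for all).
import ipaddress

NODES = 8                                   # placeholder node count
START = {
    "nic1": ipaddress.ip_address("192.168.102.89"),
    "nic2": ipaddress.ip_address("10.10.102.89"),
    "ipmi": ipaddress.ip_address("192.168.103.89"),
}

for i in range(NODES):
    addrs = ", ".join(f"{name} {ip + i}" for name, ip in START.items())
    print(f"node{i + 1:02d}: {addrs}")
```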

