  
  
==== P100 vs RTX 6000 & T4 ====

The specifications of these GPU models are detailed on this page: [[cluster:181|2019 GPU Models]]

This page mirrors the work done in 2018 on this page: [[cluster:175|P100 vs GTX & K20]]
 + 
 +Credits: This work was made possible, in part, through HPC time donated by Microway, Inc. We gratefully acknowledge Microway for providing access to their GPU-accelerated compute cluster. 
 +[[http://www.microway.com|Microway]] 

First though... mixed-precision calculations are on the rise, driven by deep learning. Obviously the researcher needs to evaluate whether veering away from double-precision calculations is scientifically sound. [[https://www.hpcwire.com/2019/08/05/llnl-purdue-researchers-harness-gpu-mixed-precision-for-accuracy-performance-tradeoff/|GPUMixer: harness GPU mixed precision]]
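To make the accuracy concern concrete, here is a minimal sketch (plain NumPy, separate from the Lammps runs on this page) of a value that single precision silently drops but double precision keeps:

<code python>
import numpy as np

# float32 resolves roughly 7 decimal digits, float64 roughly 16.
# An increment of 1e-8 is below float32's resolution near 1.0,
# so single precision rounds it away entirely.
x32 = np.float32(1.0) + np.float32(1e-8)
x64 = np.float64(1.0) + np.float64(1e-8)

print(x32 == np.float32(1.0))  # True: the increment vanished
print(x64 == np.float64(1.0))  # False: double precision keeps it
</code>

In a long simulation such rounding errors can accumulate over millions of timesteps, which is exactly the tradeoff the researcher has to judge.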

==== the DPP ====

The Double Precision Problem.

[[https://www.microway.com/knowledge-center-articles/comparison-of-nvidia-geforce-gpus-and-nvidia-tesla-gpus/|Comparison of NVIDIA GeForce GPUs and NVIDIA Tesla GPUs]]

[[https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units|List of Nvidia Graphics Processing Units]]

"Every GPU with SM 1.3 (Tesla/GTX2xx) or better has hardware double-precision support. Starting with the Fermi architecture, Quadro and Tesla variants have better double-precision support than consumer GeForce models." So at first I was utterly confused by this outcome. The P100 is best at double precision (FP64), the RTX 6000 is modest, and Nvidia publishes no FP64 specs at all for the T4 and certain RTX models. Yet running a colloid example in Lammps compiled for these GPUs with DOUBLE_DOUBLE, all three models obtain the same result after 500,000 loops.

The explanation was found here: [[https://www.microway.com/hpc-tech-tips/nvidia-turing-tesla-t4-hpc-performance-benchmarks/|T4 benchmarks fp64 and fp32]]. The T4 can do double precision if needed, but its strength is mixed and single precision.
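The spec sheets tell the same story. A small sketch comparing peak FP64 throughput to FP32; the TFLOPS figures are approximate numbers as I read them off the linked Wikipedia list, so treat them as assumptions and verify against the vendor datasheets:

<code python>
# Approximate peak throughput in TFLOPS (assumed figures -- check the
# linked Wikipedia list and vendor datasheets before relying on them).
specs = {
    "Tesla P100 (PCIe)": {"fp32": 9.3,  "fp64": 4.7},
    "Quadro RTX 6000":   {"fp32": 16.3, "fp64": 0.51},
    "Tesla T4":          {"fp32": 8.1,  "fp64": 0.25},
}

for name, s in specs.items():
    ratio = s["fp32"] / s["fp64"]
    print(f"{name}: FP64 is ~1/{ratio:.0f} of FP32 peak")
</code>

The P100 keeps the dedicated FP64 units of the Pascal compute line (roughly 1:2), while the Turing-class RTX 6000 and T4 run FP64 at roughly 1:32 of FP32 — slow, but still numerically exact, which is why all three runs agree.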

<code>

p100-dd-1-1:Device 0: Tesla P100-PCIE-16GB, 56 CUs, 16/16 GB, 1.3 GHZ (Double Precision)
p100-dd-1-1:  500000    1.9935932  0.097293139    2.0905319    1.0497421    22963.374
p100-dd-1-1:Performance: 855254.719 tau/day, 1979.756 timesteps/s

rtx-dd-1-1:Device 0: Quadro RTX 6000, 72 CUs, 23/24 GB, 2.1 GHZ (Double Precision)
rtx-dd-1-1:  500000    1.9935932  0.097293139    2.0905319    1.0497421    22963.374
rtx-dd-1-1:Performance: 600048.822 tau/day, 1389.002 timesteps/s

t4-dd-1-1:Device 0: Tesla T4, 40 CUs, 15/15 GB, 1.6 GHZ (Double Precision)
t4-dd-1-1:  500000    1.9935932  0.097293139    2.0905319    1.0497421    22963.374
t4-dd-1-1:Performance: 518164.721 tau/day, 1199.455 timesteps/s

</code>
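Since all three runs produce identical thermodynamic output, the interesting number is relative speed. A small sketch normalizing the tau/day figures from the log lines above against the T4:

<code python>
# Relative speed of the three double-precision runs, normalized to the
# T4, using the tau/day figures from the Lammps log lines above.
log = """\
p100-dd-1-1:Performance: 855254.719 tau/day, 1979.756 timesteps/s
rtx-dd-1-1:Performance: 600048.822 tau/day, 1389.002 timesteps/s
t4-dd-1-1:Performance: 518164.721 tau/day, 1199.455 timesteps/s"""

perf = {}
for line in log.splitlines():
    host, rest = line.split(":Performance: ")
    perf[host] = float(rest.split(" tau/day")[0])

base = perf["t4-dd-1-1"]
for host, tau_day in sorted(perf.items(), key=lambda kv: -kv[1]):
    print(f"{host}: {tau_day / base:.2f}x the T4")
</code>

So even in full double precision the P100 is only about 1.65x faster than the T4 on this colloid example, and the RTX 6000 about 1.16x — far less than the raw FP64 spec ratios would suggest.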


\\
**[[cluster:0|Back]]**
cluster/182.txt · Last modified: 2019/12/13 13:33 by hmeij07