The specifications of these GPU models are detailed on this page: 2019 GPU Models
This page mimics the work done in 2018 on this page: P100 vs GTX & K20
Credits: This work was made possible, in part, through HPC time donated by Microway, Inc. We gratefully acknowledge Microway for providing access to their GPU-accelerated compute cluster.
First though…
The Double Precision Problem.
Comparison of Nvidia, GeForce GPUs and Nvidia Tesla GPUs
List of Nvidia Graphics Processing Units
“Every GPU with SM 1.3 (Tesla/GTX2xx) or better has hardware double-precision support. Starting with the Fermi architecture, Quadro and Tesla variants have better double-precision support than consumer GeForce models.” So I'm utterly confused by this outcome. The P100 is best at double precision (fp64), the RTX 6000 is modest, and the T4's spec sheet lists no fp64 figure at all. Yet running a colloid example in LAMMPS, compiled for these GPUs with DOUBLE_DOUBLE precision, all three models obtain the same result after 500,000 timesteps. It must have something to do with the tensor cores in the T4.
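For a rough sense of the fp64 gap on paper, the peak rates can be sketched from public spec-sheet numbers. The figures and ratios below are assumptions taken from vendor data sheets, not measurements from this cluster:

```python
# Rough fp64 peak estimates from public spec sheets (assumed values,
# not measured here): fp64 runs at 1/2 the fp32 rate on the P100
# (Pascal), but only 1/32 on Turing parts like the RTX 6000 and T4.
peak_fp32_tflops = {"P100": 9.3, "RTX 6000": 16.3, "T4": 8.1}
fp64_fraction = {"P100": 1 / 2, "RTX 6000": 1 / 32, "T4": 1 / 32}

for gpu, fp32 in peak_fp32_tflops.items():
    fp64 = fp32 * fp64_fraction[gpu]
    print(f"{gpu}: ~{fp64:.2f} TFLOPS fp64 peak")
```

Note that the peak fp64 rate only affects speed, never the answer: any SM 1.3+ GPU computes the same IEEE 754 doubles in hardware, which would explain why all three cards produce identical thermodynamic output and differ only in throughput.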
p100-dd-1-1:Device 0: Tesla P100-PCIE-16GB, 56 CUs, 16/16 GB, 1.3 GHZ (Double Precision)
p100-dd-1-1:  500000  1.9935932  0.097293139  2.0905319  1.0497421  22963.374
p100-dd-1-1:Performance: 855254.719 tau/day, 1979.756 timesteps/s
rtx-dd-1-1:Device 0: Quadro RTX 6000, 72 CUs, 23/24 GB, 2.1 GHZ (Double Precision)
rtx-dd-1-1:  500000  1.9935932  0.097293139  2.0905319  1.0497421  22963.374
rtx-dd-1-1:Performance: 600048.822 tau/day, 1389.002 timesteps/s
t4-dd-1-1:Device 0: Tesla T4, 40 CUs, 15/15 GB, 1.6 GHZ (Double Precision)
t4-dd-1-1:  500000  1.9935932  0.097293139  2.0905319  1.0497421  22963.374
t4-dd-1-1:Performance: 518164.721 tau/day, 1199.455 timesteps/s
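The two performance figures on each line are mutually consistent: tau/day divided by (86,400 s/day × dt) yields timesteps/s, which implies a timestep of dt = 0.005 tau. A quick sanity check (a sketch; the dt value is inferred from the reported numbers, not read from the input script):

```python
# Verify that tau/day and timesteps/s agree, assuming dt = 0.005 tau
# (inferred from the reported numbers, not from the input script).
DT = 0.005              # timestep in LJ tau units (assumed)
SECONDS_PER_DAY = 86400

runs = {
    "P100":     (855254.719, 1979.756),
    "RTX 6000": (600048.822, 1389.002),
    "T4":       (518164.721, 1199.455),
}

for gpu, (tau_per_day, steps_per_s) in runs.items():
    implied = tau_per_day / (DT * SECONDS_PER_DAY)
    print(f"{gpu}: {implied:.3f} steps/s (reported {steps_per_s})")
    assert abs(implied - steps_per_s) < 0.01
```

By this measure the RTX 6000 runs at roughly 70% of the P100's DOUBLE_DOUBLE throughput and the T4 at roughly 61%.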