2019 GPU Models

We do not do AI (yet). The usage pattern is mostly one job per GPU with exclusive access, so there is no NVLink requirement; plain PCIe connections are sufficient. The application list is Amber, Gromacs, Lammps, and some Python biosequencing packages. Our current per-GPU memory footprint is 8 GB, which seems sufficient.
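
To make the one-job-per-GPU pattern concrete, here is a minimal sketch, assuming a Linux host with Python 3 and nvidia-smi on the PATH. The launch_on_gpu helper and the echo placeholder command are hypothetical illustrations, not our actual job launcher; a real scheduler would handle this via its GPU resource plugin.

  import os
  import subprocess

  def gpus_available():
      # Ask nvidia-smi for the device indices it can see.
      out = subprocess.check_output(
          ["nvidia-smi", "--query-gpu=index", "--format=csv,noheader"], text=True
      )
      return [line.strip() for line in out.splitlines() if line.strip()]

  def launch_on_gpu(cmd, gpu_index):
      # CUDA_VISIBLE_DEVICES restricts the job to one device, which is
      # the exclusive, one-job-per-GPU access pattern described above.
      env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu_index)
      return subprocess.Popen(cmd, env=env)

  # Hypothetical usage: one placeholder job per visible GPU, run concurrently.
  procs = [launch_on_gpu(["echo", "job on GPU " + g], g) for g in gpus_available()]
  for p in procs:
      p.wait()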

Model        | RTX 2080 Ti | TITAN RTX    | RTX 4000    | RTX 6000     | RTX 8000    | P100              | V100          | T4          | Notes
Family       | Turing      | Turing       | Quadro      | Quadro       | Quadro      | Tesla             | Tesla         | Tesla       | product line
Cores        | 4352        | 4608         | 2304        | 4608         | 4608        | 3584              | 5120          | 2560        | parallel CUDA cores
Memory       | 11          | 24           | 8           | 24           | 48          | 12                | 32            | 16          | GB, GDDR6 (HBM2 on P100/V100)
Watts        | 250         | 280          | 250         | 295          | 295         | 250               | 250           | 70!         |
Tflops fp64  | -           | 0.5          | -           | 0.5          | -           | 4.7               | 7             | -           | double precision
Tflops fp32  | 13.5        | 16           | 7           | 16           | 16          | 9.3               | 14            | 8.1         | single precision
Avg Bench    | 197%        | 215%         | 120%        | 207%         | 219%        | 120%              | 150%          | ??          | user bench reporting
Price        | $1,199      | $2,499       | $900        | $4,000       | $5,500      | $4,250            | $9,538        | ??          | list price
$/fp32 Tflop | $89         | $156         | $129        | $250         | $344        | $457              | $681          | ??          | list price per fp32 Tflop
Notes        | small scale | medium scale | small scale | medium scale | large scale | versatile but EOL | most advanced | supercharge |
FP64?        | -           | some         | -           | some         | -           | yes               | yes           | -           | double-precision support
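
The $/fp32 row is just list price divided by single-precision Tflops. A minimal Python sketch reproducing it from the table values above (the T4 is omitted since its list price is unknown):

  # Price per single-precision Tflop, as in the $/fp32 row.
  # Values copied from the table: (list price USD, fp32 Tflops).
  cards = {
      "RTX 2080 Ti": (1199, 13.5),
      "TITAN RTX":   (2499, 16),
      "RTX 4000":    (900, 7),
      "RTX 6000":    (4000, 16),
      "RTX 8000":    (5500, 16),
      "P100":        (4250, 9.3),
      "V100":        (9538, 14),
  }
  for name, (price, tflops) in cards.items():
      print(f"{name}: ${price / tflops:,.0f} per fp32 Tflop")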

A lot of this information comes from this web site: Best GPU for deep learning

Bench statistics (the Nvidia GTX 1070 is the ~100% baseline) come from this web site: External Link

Most GPU models come in multiple memory configurations; the table shows the most common footprints.

This is a handy tool: GPU Server Catalog

Learn more about the T4

