===== 2019 GPU Models =====
We do not do AI (yet).

^ Model ^ RTX 2080 Ti ^ RTX TITAN ^ RTX 4000 ^ RTX 6000 ^ RTX 8000 ^ P100 ^ V100 ^ Notes ^
| Cores | 4352 | 4608 | 2304 | 4608 | 4608 | 3584 | 5120 | parallel cuda |
| Memory | 11 GB | 24 GB | 8 GB | 24 GB | 48 GB | 16 GB | 16/32 GB | |
| Watts | 250 | 280 | 250 | 295 | 295 | 250 | 250 | |
| Tflops (fp32) | 13.4 | 16.3 | 7.1 | 16.3 | 16.3 | 9.3 | 14 | peak single precision |
| Tflops (fp64) | 0.4 | 0.5 | 0.2 | 0.5 | 0.5 | 4.7 | 7 | peak double precision |
| Avg Bench | 197% | 215% | 120% | 207% | 219% | 120% | 150% | user bench reporting |
| Price | $1, | | | | | | | |
| $/Tflop | | | | | | | | |
| Notes | small scale | medium scale | small scale | medium scale | large scale | versatile | | |
| FP64? | - | some | - | some | - | yes | yes | double precision |
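The $/Tflop row is simply list price divided by peak throughput. A minimal sketch of that metric in Python (the prices below are hypothetical placeholders, not vendor quotes; the fp32 Tflops are approximate published peak figures):

```python
# Dollars per peak Tflop, the metric behind the "$/Tflop" row above.
# Prices here are hypothetical placeholders, NOT vendor quotes.
gpus = {
    # name: (assumed_price_usd, peak_fp32_tflops)
    "RTX 2080 Ti": (1200.0, 13.4),
    "RTX TITAN": (2500.0, 16.3),
}

def dollars_per_tflop(price_usd, tflops):
    """Price divided by peak throughput, rounded to whole dollars."""
    return round(price_usd / tflops)

for name, (price, tflops) in gpus.items():
    print(f"{name}: ${dollars_per_tflop(price, tflops)}/Tflop")
```

Plugging in real quotes once received gives a direct price/performance comparison across the models in the table.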
A lot of information comes from this web site [[https://
Bench statistics (Nvidia GTX 1070 is about 100% baseline) from this web site [[https://
Most GPU models come in multiple memory configurations,
+ | |||
+ | This is a handy tool [[https:// | ||
+ | |||
Learn more about the T4 ... the T4 can run in mixed mode (fp32/fp16) and can deliver 65 Tflops. Other modes are INT8 at 130 Tops and INT4 at 260 Tops. Now at 65 Tflops mixed precision the cost dives to $34/Tflop. Amazing. And the wattage is amazing too. See the next page for the fp64/fp32 mixed precision mode quandary... [[cluster:
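A quick sanity check of the arithmetic above (the ~$2,200 T4 price is an assumption implied by the $34/Tflop figure, not a quote):

```python
# T4 mixed-precision cost check: at 65 Tflops (fp32/fp16 mixed mode),
# $34/Tflop implies a card price near $2,200 (assumed, not a quote).
t4_price_usd = 2200.0    # assumed price in USD
t4_mixed_tflops = 65.0   # mixed-mode Tflops figure from the text
print(round(t4_price_usd / t4_mixed_tflops))  # -> 34
```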
+ | |||
  * [[https://
  * [[https://
  * [[http://
  * very interesting peak performance FP32 gpu chart (RTX TITAN and RTX 6000 on top)
  * [[https://

From Lammps developer: "

From Gromacs web site: "
+ | |||
+ | |||
+ | **Keep track of these** | ||
+ | |||
  - does Amber run on the T4, the web site lists "
  - Gaussian g16c01 AVX enabled linux binaries - no linda "
\\
**[[cluster: