cluster:181 [2019/08/07 14:30] hmeij07
===== 2019 GPU Models =====
We do not do AI (yet).

^ Model ^ RTX 2080 Ti ^ RTX TITAN ^ RTX 4000 ^ RTX 6000 ^ RTX 8000 ^ P100 ^ V100 ^ Notes ^
| Cores | 4352 | 4608 | 2304 | 4608 | 4608 | 3584 | 5120 | parallel cuda |
| Memory | | | | | | | | |
| Watts | 250 | 280 | 250 | 295 | 295 | 250 | 250 | |
| Tflops | | | | | | | | |
| Tflops | | | | | | | | |
| Avg Bench | 197% | 215% | 120% | 207% | 219% | 120% | 150% | user bench reporting |
| Price | $1, | | | | | | | |
| $/ | | | | | | | | |
| Notes | small scale | medium scale | small scale | medium scale | large scale | versatile but EOL | most advanced | |
| FP64? | - | some | - | some | - | yes | yes | double |
A lot of information comes from this web site [[https://
This is a handy tool [[https://

Learn more about the T4 ... the T4 can run in mixed mode (FP32/FP16) and can deliver 65 Tflops. Other modes are INT8 at 130 Tops and INT4 at 260 Tops. At 65 Tflops mixed precision the cost dives to $34/Tflop. Amazing. And the wattage is amazing too.

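The $/Tflop figure above can be sanity-checked with a quick sketch. The card price used here (~$2,210) is an assumption back-derived from the quoted $34/Tflop at 65 Tflops, not an official list price:

```python
# Sanity check of the cost-per-Tflop arithmetic for the T4 (illustrative).
# Assumption: price of ~$2,210, back-derived from the page's quoted
# $34/Tflop at 65 Tflops mixed precision -- not a vendor list price.
price_usd = 2210.0
tflops_mixed = 65.0   # T4 FP32/FP16 mixed-mode throughput from the text

dollars_per_tflop = price_usd / tflops_mixed
print(round(dollars_per_tflop))  # → 34
```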
  * [[https://
  * [[https://
  * [[http://
    * very interesting peak performance FP32 gpu chart (RTX TITAN and RTX 6000 on top)
  * [[https://

From a LAMMPS developer: "Using half precision in any form for force computations is not advisable."

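Why half precision hurts force summation can be seen in a small sketch (illustrative Python only, not LAMMPS code): accumulating many small contributions into a large total, as a force sum does, can lose the small terms entirely in fp16, while fp64 keeps them.

```python
import struct

def to_fp16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Illustrative sketch (not LAMMPS code): add 10,000 small increments
# of 0.01 onto a large accumulator of 1000.0.
acc64 = 1000.0            # double-precision accumulator
acc16 = to_fp16(1000.0)   # half-precision accumulator

for _ in range(10_000):
    acc64 += 0.01
    acc16 = to_fp16(acc16 + to_fp16(0.01))

print(acc64)  # ~1100.0, as expected
print(acc16)  # 1000.0 -- each 0.01 fell below fp16 resolution at 1000
```

At magnitude 1000 the spacing between adjacent fp16 values is 0.5, so every 0.01 increment rounds away to nothing; the fp16 total never moves.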
**Keep track of these**

  - Does Amber run on the T4? The web site lists "
  - Gaussian g16c01 AVX enabled linux binaries - no linda "

\\
**[[cluster: