===== 2019 GPU Models =====
  
We do not do AI (yet).  The GPU usage pattern is mostly one job per GPU for exclusive access, so there is no NVLink requirement; PCIe connections are sufficient.  The application list is Amber, Gromacs, Lammps, and some Python biosequencing packages.  Our current per-GPU memory footprint is 8 GB, which seems sufficient.
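The one-job-per-GPU pattern can be enforced from the job script by masking devices before launch. A minimal Python sketch (the helper name and the launch line are assumptions, not from this page):

```python
import os

def gpu_env(gpu_index):
    """Build an environment in which a job sees exactly one GPU.

    CUDA enumerates only the devices listed in CUDA_VISIBLE_DEVICES,
    so each job gets exclusive access to its assigned card.
    """
    env = dict(os.environ)  # copy, don't mutate the caller's environment
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return env

# e.g. an Amber run pinned to GPU 2 (illustrative, not a tested command):
# subprocess.run(["pmemd.cuda", "-O", "-i", "md.in"], env=gpu_env(2))
```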
  
^          Quadro  ^^^^^  Tesla  ^^  Turing  ^    ^
|  Cores  |  4352  |  4608  |  2304  |  4608  |  4608  |  3584  |  5120  |  2560  |parallel cuda|
| Memory  |  11  |  24  |  8  |  24  |  48  |  12  |  32  |  16  |GB gddr6|
|  Watts  |  250  |  280  |  250  |  295  |  295  |  250  |  250  |  70 !  |    |
|  Tflops  |  -  |  0.5  |  -  |  0.5  |  -  |  4.7  |  7  |  -  |double fp64|
|  Tflops  |  13.5  |  16  |  7  |  16  |  16  |  9.3  |  14  |  8.1  |single fp32|
This is a handy tool [[https://www.nvidia.com/en-us/data-center/tesla/tesla-qualified-servers-catalog/|GPU Server Catalog]]
  
Learn more about the T4 ... the T4 can run in mixed mode (fp32/fp16) and deliver 65 Tflops.  Other modes are INT8 at 130 TOPS and INT4 at 260 TOPS.  At 65 Tflops mixed precision the cost dives to $34/Tflop.  Amazing.  And the wattage is amazing too.
  
  * [[https://www.nvidia.com/en-us/data-center/tesla-t4/|T4]]
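The $34/Tflop figure is easy to back out: it implies a T4 price of roughly $2,210 (an assumed price for illustration; the page does not quote one):

```python
# Back-of-envelope check of the $/Tflop figure for the T4.
T4_PRICE_USD = 2210      # assumed street price, not stated on this page
T4_MIXED_TFLOPS = 65     # fp32/fp16 mixed-mode throughput, from the page

cost_per_tflop = T4_PRICE_USD / T4_MIXED_TFLOPS
print(f"${cost_per_tflop:.0f}/Tflop")  # → $34/Tflop
```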
cluster/181.txt ยท Last modified: 2019/08/13 12:15 by hmeij07