===== 2019 GPU Models =====

      * very interesting peak performance FP32 gpu chart (RTX TITAN and RTX 6000 on top)
    * [[https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#framework|Training Guide for Mixed Precision]]

From a LAMMPS developer: "Computing forces in all single precision is a significant approximation and mostly works ok in homogeneous systems, where there is a lot of error cancellation. Using half precision in any form for force computations is not advisable."
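
A minimal CUDA sketch of what that quote is getting at (this is not LAMMPS code; the kernel, counts, and values are made up for illustration): sum the same small per-pair contribution many times in single and in half precision. The half precision sum stalls once the accumulator's spacing exceeds the size of the contribution, while the single precision sum stays close to the exact value.

<code cpp>
// Sketch only: accumulate n small "force" contributions in float32 and
// float16 and compare both sums against the exact value.
#include <cstdio>
#include <cuda_fp16.h>

__global__ void accumulate(int n, float contrib, float *out32, float *out16)
{
    float  acc32 = 0.0f;                  // single precision accumulator
    __half acc16 = __float2half(0.0f);    // half precision accumulator
    for (int i = 0; i < n; ++i) {
        acc32 += contrib;
        acc16 = __hadd(acc16, __float2half(contrib));
    }
    *out32 = acc32;
    *out16 = __half2float(acc16);
}

int main()
{
    const int   n       = 100000;   // number of pairwise contributions
    const float contrib = 1.0e-3f;  // small per-pair force term
    float *d, h[2];
    cudaMalloc(&d, 2 * sizeof(float));
    accumulate<<<1, 1>>>(n, contrib, d, d + 1);   // one thread, no races
    cudaMemcpy(h, d, 2 * sizeof(float), cudaMemcpyDeviceToHost);
    printf("exact : %g\n", (double)n * contrib);  // 100
    printf("fp32  : %g\n", h[0]);                 // close to 100
    printf("fp16  : %g\n", h[1]);                 // stalls well below 100
    cudaFree(d);
    return 0;
}
</code>

Build with something like ''nvcc -arch=sm_75 halfsum.cu'' (the file name is arbitrary; half arithmetic in device code needs sm_53 or newer).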
  
Keep track of this: does Amber run on the T4? The Amber web site lists "Turing (SM_75) based cards require CUDA 9.2 or later." but does not list the T4 itself (too new?).
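
As a first check on a node (this is not Amber code, just an assumption about what to look for), a small standalone program can print each card's compute capability and the installed CUDA runtime, which is what the "SM_75 needs CUDA 9.2 or later" note boils down to.

<code cpp>
// Sketch only: list each GPU's compute capability and the CUDA runtime,
// to check the "Turing (SM_75) needs CUDA 9.2 or later" requirement.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int runtime = 0;
    cudaRuntimeGetVersion(&runtime);              // e.g. 9020 means CUDA 9.2
    printf("CUDA runtime: %d.%d\n", runtime / 1000, (runtime % 1000) / 10);

    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    for (int d = 0; d < ndev; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s  SM_%d%d\n", d, prop.name, prop.major, prop.minor);
        if (prop.major == 7 && prop.minor == 5 && runtime < 9020)
            printf("  -> Turing card, but runtime is older than CUDA 9.2\n");
    }
    return 0;
}
</code>

Whether Amber itself supports the T4 still depends on how it was built, so this only rules out the obvious runtime mismatch.
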
cluster/181.txt · Last modified: 2019/08/13 12:15 by hmeij07