cluster:181 [2019/08/08 13:25] hmeij07 [2019 GPU Models]
  * [[https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#framework|Training Guide for Mixed Precision]]
From a LAMMPS developer: "Computing forces in all single precision is a significant approximation and mostly works ok in homogeneous systems, where there is a lot of error cancellation. Using half precision in any form for force computations is not advisable."
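A toy numpy sketch (not LAMMPS code; the dtypes and random data are illustrative assumptions) of the point above: accumulating many zero-mean "force" contributions in half precision loses far more accuracy than doing the same sum in single precision, even when errors partly cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean contributions, mimicking a homogeneous system where
# rounding errors partly cancel.
forces = rng.normal(0.0, 1.0, size=100_000)

# Double-precision reference sum.
ref = forces.astype(np.float64).sum(dtype=np.float64)

# Same data and sum, but stored and accumulated in lower precision.
sum32 = forces.astype(np.float32).sum(dtype=np.float32)  # all single
sum16 = forces.astype(np.float16).sum(dtype=np.float16)  # all half

err32 = abs(float(sum32) - ref)
err16 = abs(float(sum16) - ref)
# err16 comes out much larger than err32: half precision carries only
# ~3 decimal digits per value, so both representation and accumulation
# error blow up.
```

This only probes a plain reduction; real force kernels involve distances and pair interactions, but the accumulation-error effect is the same.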
From the GROMACS web site: "GROMACS simulations are normally run in “mixed” floating-point precision, which is suited for the use of single precision in FFTW. The default FFTW package is normally in double precision."
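A minimal sketch of the "mixed" precision idea the GROMACS quote refers to (this is not GROMACS code; the data and sizes are made up): keep the bulk data and arithmetic in single precision, but accumulate in double precision, which recovers most of the accuracy of an all-double run.

```python
import numpy as np

# Bulk data stored in single precision, as in a mixed-precision run.
vals = np.random.default_rng(1).normal(size=50_000).astype(np.float32)

# All-double reference: promote the inputs, sum in double.
ref = vals.astype(np.float64).sum()

naive = vals.sum(dtype=np.float32)  # single-precision accumulator
mixed = vals.sum(dtype=np.float64)  # single data, double accumulator

err_naive = abs(float(naive) - ref)
err_mixed = abs(float(mixed) - ref)
# The double accumulator removes the accumulation error; what remains
# is only the one-time float32 representation error of the inputs.
```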
**Keep track of these**