  * [[cluster:182|P100 vs RTX 6000 & T4]] page
  
^  Option  ^  #1  ^  #2  ^  #  ^  #  ^  #  ^  #  ^  #  ^  #  ^    |
|  Nodes  |    |    |    |    |    |    |    |    |  U  |
|  Cpus  |    |    |    |    |    |    |    |    |    |
|  Tflops  |    |    |    |    |    |    |    |    |  gpu spfp  |
  
**Exxactcorp**: For the GPU discussion, 2 to 4 GPUs per node is fine. The T4 GPU is 100% fine, and the passive heatsink is better, not worse. The system needs to be one that supports passive Tesla cards; the chassis fans will simply ramp up to cool the card properly, as in any passive Tesla situation. Titan RTX GPUs are what you should be worried about, and I would be hesitant to quote them. They are *NOT GOOD* for multi-GPU systems.

**Microway**: For this mix of workloads two to four GPUs per node is a good balance. Passive GPUs are *better* for HPC usage. All Tesla GPUs for the last 5? years have been passive. I'd be happy to help allay any concerns you may have there. The short version is that the GPU and the server platform communicate about the GPU's temperature. The server adjusts fan speeds appropriately and is able to move far more air than a built-in fan ever could.

**ConRes**: Regarding the question about active vs. passive cooling on GPUs, the T4, V100, and other passively cooled GPUs are intended for 100% utilization and can actually offer better cooling and higher density in a system than active GPU models.
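
A rough sketch (not from any vendor) of how one might spot-check what the vendors describe above: polling per-GPU temperature and fan state on a node. It assumes the NVIDIA driver and ''nvidia-smi'' are installed; on passively cooled cards (T4, V100) the fan speed reports as N/A because the chassis fans, not the card, move the air.

<code python>
#!/usr/bin/env python3
# Sketch: poll per-GPU temperature and fan state via nvidia-smi.
# Assumes the NVIDIA driver and nvidia-smi are present on the node.
# Passive Tesla/T4 cards report fan.speed as "[N/A]" since the chassis
# fans handle the airflow, as described in the vendor comments above.
import subprocess

def gpu_status():
    """Return a list of (index, name, temperature_C, fan_speed) tuples."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,name,temperature.gpu,fan.speed",
         "--format=csv,noheader"],
        text=True)
    return [tuple(field.strip() for field in line.split(","))
            for line in out.strip().splitlines()]

if __name__ == "__main__":
    for idx, name, temp, fan in gpu_status():
        print(f"GPU {idx} ({name}): {temp} C, fan {fan}")
</code>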
  
  
==== Summary ====