==== 2019 GPU Expansion ====
  * [[cluster:
^ Quote ^  ^
| Nodes |  |
| Cpus |  |
| Tflops |  |
**Exxactcorp**: For the GPU discussion, 2 to 4 GPUs per node is fine. The T4 GPU is 100% fine, and the passive heatsink is better, not worse. The system needs to be one that supports passive Tesla cards; the chassis fans will simply ramp up to cool the card properly, as in any passive Tesla situation.

**Microway**: For this mix of workloads, two to four GPUs per node is a good balance. Passive GPUs are *better* for HPC usage. All Tesla GPUs for the last 5 or so years have been passive. I'd be happy to help allay any concerns you may have there. The short version is that the GPU and the server platform communicate about the GPU's temperature. The server adjusts fan speeds appropriately and is able to move far more air than a built-in fan ever could.

**ConRes**: Regarding the question of active vs. passive cooling on GPUs, the T4, V100, and other passively cooled GPUs are intended for 100% utilization and can actually offer better cooling and higher density in a system than active GPU models.
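All three vendors describe the same mechanism: the server platform reads the passive card's temperature and ramps the chassis fans. A minimal burn-in sketch for watching that behavior, assuming only that ''nvidia-smi'' is on the node's PATH (the sample count and polling interval are arbitrary):

<code python>
import subprocess
import time

# Poll temperature and utilization for every GPU in the node;
# these query fields are standard nvidia-smi options.
CMD = ["nvidia-smi",
       "--query-gpu=index,name,temperature.gpu,utilization.gpu",
       "--format=csv,noheader"]

for _ in range(10):                      # ten samples, 30 s apart
    for line in subprocess.check_output(CMD, text=True).splitlines():
        print(line)                      # e.g. "0, Tesla T4, 62, 98 %"
    time.sleep(30)
</code>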
==== Summary ====
We are embarking on expanding our GPU compute capacity. To that end we tested some of the new GPU models. During a recent users group meeting the desire was also expressed to keep open the option of entering the deep learning (DL) field in the near future. We do not anticipate running Gaussian on these GPUs, so we are flexible regarding mixed precision mode models. The list of software, with rough usage estimates and precision modes, is: amber (single, 25%), lammps (mixed, 20%), gromacs (mixed, 50%) and python bio-sequencing models (mixed or double, < 5%).
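As an illustration of the precision question, a short PyTorch sketch (PyTorch is on the DL list below) that reports each card's CUDA compute capability; the comment on which cards suit which precision is our reading, not vendor guidance:

<code python>
import torch

# Print name and CUDA compute capability for each visible GPU.
# Cards like the T4 favor the single/mixed precision codes above
# (amber, lammps, gromacs); heavy double precision work favors
# V100-class cards.
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} "
          f"(compute capability {major}.{minor})")
</code>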
We anticipate the best solution to be 2-4 GPUs per node, not an ultra-dense setup.
We do not have proven imaging functionality with CentOS 7, Warewulf, and UEFI booting, so all nodes should be imaged. Software to install is the latest versions of amber (Wes to provide proof of purchase), lammps (with packages yes-rigid, yes-gpu, yes-colloid, ...).
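A possible acceptance check for the lammps build, assuming the binary is named ''lmp_mpi'' and that its ''-h'' output lists the installed packages (both are assumptions about the local install):

<code python>
import subprocess

# Packages requested above; extend once the full list is known.
REQUIRED = {"RIGID", "GPU", "COLLOID"}

# "lmp_mpi -h" prints, among other things, the installed packages.
help_text = subprocess.check_output(["lmp_mpi", "-h"], text=True)
missing = {p for p in REQUIRED if p not in help_text}
print("missing lammps packages:", ", ".join(sorted(missing)) or "none")
</code>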
DL software list: PyTorch, Caffe, TensorFlow. \\
Wes to install and configure scheduler client and queue. \\
Wes to provide two gigabit ethernet switches. \\
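A one-screen smoke test we could run once the DL stack is installed, verifying that PyTorch sees the GPUs and can run a kernel (a generic sketch, not a vendor burn-in procedure):

<code python>
import torch

# Fail loudly if no CUDA device is visible to PyTorch.
assert torch.cuda.is_available(), "no CUDA device visible"

# One large matrix multiply on the first GPU as a minimal kernel test.
x = torch.randn(4096, 4096, device="cuda")
y = x @ x
torch.cuda.synchronize()
print("GPU matmul ok on", torch.cuda.get_device_name(0))
</code>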
Compute nodes should have 2 ethernet ports; single power supply is OK but redundant is preferred; dual CPUs with an optimized memory configuration of around 96-128 GB. Start IP address ranges: nic1 192.168.102.89, ...
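For the node addressing, a sketch that generates sequential nic1 addresses from the start address given above; the node count is a placeholder since the quotes are not final:

<code python>
import ipaddress

NIC1_START = ipaddress.IPv4Address("192.168.102.89")  # from above
N_NODES = 4                                           # placeholder count

# Hand out consecutive addresses, one per new compute node.
for i in range(N_NODES):
    print(f"node {i + 1}: nic1 {NIC1_START + i}")
</code>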
Wes will provide a 208V powered rack with 7K BTU of AC cooling. Standard 42U rack (rails at 30", up to 37" usable). We also have plenty of shelves to simply hold the servers if needed. The rack contains two PDUs (24A) supplying 2x30 C13 outlets. [[https://
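A back-of-envelope check that the quoted nodes fit the rack's power and cooling, with an assumed per-node draw (the PDU and AC figures are from the rack description above):

<code python>
VOLTS = 208
PDU_AMPS = 24
DERATE = 0.8            # common 80% continuous-load derating
NODE_WATTS = 1600       # assumed draw for a 2-4 GPU node under load

pdu_w = VOLTS * PDU_AMPS * DERATE      # ~3994 W usable per PDU
rack_w = 2 * pdu_w                     # two PDUs in the rack
cooling_w = 7000 / 3.412               # 7K BTU/hr AC ~ 2052 W of heat

print(f"power-limited  : {rack_w:.0f} W -> {int(rack_w // NODE_WATTS)} nodes")
print(f"cooling-limited: {cooling_w:.0f} W -> {int(cooling_w // NODE_WATTS)} node(s)")
</code>

On these assumptions the AC, not the PDUs, would be the binding constraint, which is worth confirming with the vendors.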