  * Fall back option to v2.2 (definitely free of infringement, ...)
  * Move forward option, adopt SLURM (LLNL developers, major disruption)
    * If we adopt SLURM, should we transition to the OpenHPC Warewulf/Slurm stack? (see the sketch below this list)
      * http://
      * new login node and a couple of compute nodes to start?
  * New HPC Advisory Group Member
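
A minimal sketch of what user job scripts would look like after a SLURM migration, to gauge the disruption; the partition name ''test'' is hypothetical since nothing has been configured yet.

<code bash>
#!/bin/bash
# minimal Slurm batch script sketch -- existing job scripts would need
# a rewrite along these lines; partition name "test" is hypothetical
#SBATCH --job-name=hello
#SBATCH --partition=test
#SBATCH --ntasks=1
#SBATCH --time=00:05:00
#SBATCH --output=hello_%j.out

echo "running on $(hostname)"
</code>

Submission and monitoring would be ''sbatch'' and ''squeue'' instead of the current scheduler's commands.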
  * Tidbits
    * Bought deep U42 rack with AC cooling onboard and two PDUs
    * Pushed Angstrom rack (bss24) out of our area, ready to recycle that (Done. 06/20/2018)
    * Currently we have two U42 racks empty with power
    * Cooling needs to be provided with any new major purchases (provost, ITS, HPC?)
  * All Infiniband ports are in use
===== Notes =====

  * First make a page comparing CPU vs GPU usage, which may influence the future purchase [[cluster:
  * $100k quote, 3 to 5 vendors, data points mid-2018
  * One node (or all) should have configured on it: amber, gromacs, lammps, namd (latest versions)
  * Nvidia latest version, optimal cpu:gpu ratio configs (see the first sketch below this list)
    * Amber 1:1 (may be 1:2 in future releases) - amber certified GPU!
    * Gromacs 10:1 (could ramp up to claiming all resources per node)
    * Namd 13:1 (could ramp up to claiming all resources per node)
    * Lammps 2-4:1
  * 128g with enough CPU slots to take over ''
  * Anticipated target (also to manage heat exchange); a config sketch follows below the list
    * 2x 10-core Xeon CPUs (~100gb left) with 2x gtx1080ti GPUs (25gb memory required)
    * as many as fit the budget, but no more than 15 rack-wise
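
A sketch of how the cpu:gpu ratios above could map onto Slurm resource requests, assuming GPUs are scheduled as a generic resource (gres); the partition name ''mwgpu'' and the commented mdrun line are assumptions, not tested settings.

<code bash>
#!/bin/bash
# hypothetical Gromacs-style request at the 10:1 cpu:gpu ratio noted above
#SBATCH --partition=mwgpu    # made-up partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=10   # 10 CPU cores ...
#SBATCH --gres=gpu:1         # ... per one GPU, i.e. the 10:1 ratio

# illustrative only, actual flags depend on the Gromacs build:
# gmx mdrun -ntomp 10 -gpu_id 0
echo "would run gromacs on 10 cores and 1 GPU here"
</code>

Amber at 1:1 would simply be ''--cpus-per-task=1 --gres=gpu:1''; Namd at 13:1 would bump the core count.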
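And a sketch of how the anticipated target node could be described to the scheduler; node name ''gpu01'', partition name ''mwgpu'', and the device files are assumptions for illustration only.

<code>
# gres.conf on the node, one line per GPU device file (assumed paths)
Name=gpu Type=gtx1080ti File=/dev/nvidia0
Name=gpu Type=gtx1080ti File=/dev/nvidia1

# slurm.conf on the controller: 2x10 cores, 128g, two gtx1080ti
NodeName=gpu01 CPUs=20 RealMemory=128000 Gres=gpu:gtx1080ti:2 State=UNKNOWN
PartitionName=mwgpu Nodes=gpu01 Default=NO MaxTime=INFINITE State=UP
</code>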
\\
**[[cluster: