cluster:184 [2019/09/12 12:19] hmeij07
cluster:184 [2019/09/27 12:38] hmeij07 [AWS deploys T4]

\\
**[[cluster:

==== AWS deploys T4 ====

  * https://
Look at this: the smallest Elastic Cloud Compute instances are **g4dn.xlarge**, yielding access to 4 vCPUs and 1x T4 GPU. The largest is **g4dn.16xlarge**, yielding access to 64 vCPUs and 1x T4 GPU. The smallest is priced at $0.526/hr, so running that card 24/7 for a year costs $4,607.76 ... meaning ... option #7 below with 26 GPUs would cost you a whopping $119,802. Annually! That's the low tide water mark.

The high tide water mark? The largest instance is priced at $4.352/hr and would cost you nearly one million dollars per year if you matched option #7.
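The arithmetic behind those two water marks can be sketched as a few lines of Python. This is only an illustration of the calculation above; it assumes a flat on-demand rate, 24/7 usage over an 8,760-hour (non-leap) year, and one instance per GPU — the function name `annual_cost` is ours, not anything from AWS.

```python
# Annual on-demand cost sketch, assuming 24/7 usage for a non-leap year.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(price_per_hour, instances=1):
    """Cost of running `instances` instances around the clock for a year."""
    return price_per_hour * HOURS_PER_YEAR * instances

# g4dn.xlarge at $0.526/hr: one card for a year, then 26 cards (option #7)
print(round(annual_cost(0.526), 2))       # 4607.76
print(round(annual_cost(0.526, 26), 2))   # 119801.76

# g4dn.16xlarge at $4.352/hr, matching option #7's 26 GPUs
print(round(annual_cost(4.352, 26), 2))   # 991211.52
```

Rounding to whole dollars recovers the figures quoted above: roughly $119,802/yr at the low end and near one million at the high end.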
| Cpus | 12 | 8 | 18 | 14 | 10 | 34 | 26 | 16 | 16 | 12 | total|
| Cores | 96 | 64 | 180 | 140 | 100 | 272 | 208 | 192 | 128 | 72 | physical|
| Tflops |
| Gpus | 48 | 16 | 36 | 28 | 20 | 34 | 26 | 16 | 28 | 60 | total|
| Cores | 209 | 74 | 157 | 72 | 92 | 75 | 67 | 74 | 72 | 138 | cuda K|
| CPU | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | total|
| | 4208 | 4208 | 5115 | 5115 | 5115 | 4208 | 4208 | 4214 | 4208 | 4208 | model|
| | silver |
| | 2x8 | 2x8 | 2x10 | 2x10 | 2x10 | 2x8 | 2x8 | 2x12 | 2x8 | 2x8 | physical|
| | 2.1 | 2.1 | 2.4 | 2.4 | 2.4 | 2.1 | 2.1 | 2.2 | 2.1 | 2.1 | GHz|
  * #1/#2 All GPU warranty requests will be filled by the GPU maker.
  * #7 Up to 4 GPUs per node. Fills the rack leaving 1U open between nodes, count=15.
  * #8 Fills the intended rack with AC in rack. GPU Tower/4U rack mount.
  * #8 Includes NVLink connector (bridge kit). Up to 4 GPUs per node.
  * Tariffs may affect all quotes when executed.
  * S&H included (or estimated).
  * More than 4-6 nodes would be lots of work if Warewulf/

On the question of active versus passive cooling: