cluster:184 [2019/09/12 14:00] hmeij07
cluster:184 [2019/09/27 12:38] hmeij07 [AWS deploys T4]

==== AWS deploys T4 ====

  * https://
Look at this: the smallest of the new Elastic Compute Cloud instances is **g4dn.xlarge**, yielding access to 4 vCPUs and 1x T4 GPU. The largest is **g4dn.16xlarge**, yielding access to 64 vCPUs and 1x T4 GPU. The smallest is priced at $0.526/hr, so running that card 24/7 for a year costs $4,607.76 ... meaning ... matching option #7 below with 26 GPUs would cost you a whopping $119,802. Annually! That's the low tide water mark.

The high tide water mark? The largest instance is priced at $4.352/hr and would cost you near one million dollars per year if you matched option #7 with it.
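
The arithmetic above can be sketched as follows (the hourly rates and the 26-GPU count come from the text; `annual_cost` is just a helper name made up for this sketch):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours; ignores leap years and any reserved/spot discounts

def annual_cost(hourly_rate, instances=1):
    """Cost of running `instances` single-GPU instances 24/7 for a year."""
    return hourly_rate * HOURS_PER_YEAR * instances

low_one   = annual_cost(0.526)      # one g4dn.xlarge, 1x T4
low_fleet = annual_cost(0.526, 26)  # match option #7's 26 GPUs
high_fleet = annual_cost(4.352, 26) # same GPU count on g4dn.16xlarge instances

print(f"g4dn.xlarge, one year:   ${low_one:,.2f}")
print(f"26 GPUs, low tide mark:  ${low_fleet:,.2f}")
print(f"26 GPUs, high tide mark: ${high_fleet:,.2f}")
```

The "near one million dollars" figure works out to $991,211.52 at 26 instances, consistent with the text.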
  * #1/#2 All GPU warranty requests will be filled by GPU maker.
  * #7 up to 4 GPUs per node. Filling rack, leaving 1U open between nodes, count=15.
  * #8 fills intended rack with AC in rack. GPU Tower/4U rack mount.
  * #8 includes NVLink connector (bridge kit). Up to 4 GPUs per node.