cluster:184 [2019/09/27 12:45] hmeij07 [AWS deploys T4]
cluster:184 [2019/11/18 15:08] hmeij07 [AWS deploys T4]
\\
**[[cluster:
+ | |||
+ | ==== Turing/ | ||
+ | |||
+ | * https:// | ||
==== AWS deploys T4 ====
* https://
Look at this: the smallest of these Elastic Compute Cloud instances is **g4dn.xlarge**, yielding access to 4 vCPUs, 16GiB memory and 1x T4 GPU. The largest is **g4dn.16xlarge**, yielding access to 64 vCPUs and 1x T4 GPU. The smallest is priced at $0.526/hr; running that card 24/7 for a year costs $4,607.76, meaning option #7 below with 26 GPUs would cost you a whopping $119,802. Annually! That's the low tide water mark.
The high tide water mark? The largest instance is priced at $4.352/hr and would cost you nearly one million dollars per year if you matched option #7's 26 GPUs.
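The arithmetic behind those low and high tide marks can be sketched as follows; the hourly rates are the on-demand prices quoted above, and "option #7" is the 26-GPU configuration from the table below:

```python
# Annual cost of running GPU instances 24/7, using the rates quoted in the text.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(price_per_hour, gpus=1):
    """Cost of running one instance per GPU, 24/7, for one year."""
    return price_per_hour * HOURS_PER_YEAR * gpus

# Low tide: g4dn.xlarge at $0.526/hr
print(f"1x T4, one year:    ${annual_cost(0.526):,.2f}")      # $4,607.76
print(f"option #7, 26 GPUs: ${annual_cost(0.526, 26):,.2f}")  # $119,801.76

# High tide: g4dn.16xlarge at $4.352/hr, matching option #7
print(f"high tide:          ${annual_cost(4.352, 26):,.2f}")  # $991,211.52
```

So the ~$119,802 figure is simply $0.526 x 8760 hours x 26 GPUs, and the "near one million dollars" figure is the same calculation at $4.352/hr.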
| Gpus   | 48  | 16 | 36  | 28 | 20 | 34 | 26 | 16 | 28 | 60  | total  |
| Cores  | 209 | 74 | 157 | 72 | 92 | 75 | 67 | 74 | 72 | 138 | cuda K |
| Cores  | 26  | 9  | 20  |
| Tflops |
| Tflops |