cluster:167 [2018/06/28 11:46] hmeij07 [CPU vs GPU]
So the question was raised: what does our usage look like across CPU and GPU devices? I have no idea what the appropriate metrics would be, but let's start by comparing the hardware deployed. We'll also need to make some assumptions:
  * Data is for the period of June 2018 (the first 25 days)
  * Maybe build a monthly script if this turns out to be usable info
  * That period spans 600 hours (25 days x 24 hours)
  * Assume 99% utilization of a cpu core or gpu device
  * Available time is measured per physical cpu core but by gpu device
  * There is no good/bad metric
  * Never collated such data before
  * The GPU usage is based on detecting gpu reservations (the ''gpu='' flag)
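The "available hours" arithmetic behind these assumptions can be sketched as follows. This is a minimal illustration in Python, not the actual collection script; the function name is my own, and the device counts are taken from the June 2018 table below.

```python
# Hypothetical sketch of the "Avail Hours" arithmetic: available time is
# counted per physical cpu core and per gpu device over the whole period.
def avail_hours(units, days):
    """Total available hours for a pool of cpu cores or gpu devices."""
    return units * days * 24

# June 2018 period: 25 days, 1,192 physical cpu cores, 24 gpu devices
print(avail_hours(1192, 25))  # 715200 cpu hours
print(avail_hours(24, 25))    # 14400 gpu hours
```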
^ Metric ^ CPU ^ Ratio ^ GPU ^ Notes June 2018 ^
| Device Count | 72 | 3:1 | 24 | cpu all intel, gpu all nvidia |
| Core Count | 1,192 | 1:54 | 64,300 | physical only |
| Memory | 7,408 | 51:1 | 144 | GB |
| Teraflops | 38 | 1.5:1 | 25 | double precision, floating point, theoretical |
| Job Count | 2,834 | 3:1 | 1,045 | scheduled jobs regardless of exit status |
| Avail Hours | 715,200 | 50:1 | 14,400 | total cpu cores, total gpus |
| Job Hours | 221,... | | | |
| Job Hours % | 31 | 6:1 | 5 | as a percentage |
| Avail Hours2 | 561,600 | 39:1 | 14,400 | hp12 cores excluded |
| Job Hours % | 39 | 8:1 | 5 | more realistic ... hp12 rarely used in June18 |

The logs showing gpu %util confirm the extremely low GPU usage. When concatenating the four gpu %util values into a string, since 01Jan2017, the all-idle string ''...'' dominates the samples.
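A minimal sketch of that idle check, assuming each log sample holds four per-gpu %util readings; the function and sample format are my own, not the actual log parser:

```python
# Hypothetical sketch: join the four gpu %util readings of one sample into
# a string and flag fully idle samples ("0,0,0,0").
def idle_sample(utils):
    """utils: the four gpu %util integers from one sampling interval."""
    return ",".join(str(u) for u in utils) == "0,0,0,0"

print(idle_sample([0, 0, 0, 0]))   # True  - all four gpus idle
print(idle_sample([0, 37, 0, 0]))  # False - one gpu busy
```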
+ | |||
+ | So were these 25 days in June 2018 an oddity? March is Honors' | ||
+ | |||
^ Total Monthly CPU+GPU Hours ^^^^^^^^^^^
^ Jul17 ^ Aug17 ^ Sep17 ^ Oct17 ^ Nov17 ^ Dec17 ^ Jan18 ^ Feb18 ^ Mar18 ^ Apr18 ^ May18 ^
| 313,... | | | | | | | | | | |

^ Metric ^ CPU ^ Ratio ^ GPU ^ Notes July 2017 ^
| Device Count | 72 | | ... | |
| Core Count | 1,192 | 1:42 | 50,000 | physical only |
| Memory | 7,408 | 74:1 | 100 | GB |
| Teraflops | 38 | 1.7:1 | 23 | double precision, floating point, theoretical |
| Job Count | ... | | | |
| Avail Hours | 886,848 | | | total cpu cores, total gpus |
| Job Hours | 260,... | | | |
| Job Hours % | 30 | 1:1 | 26 | as a percentage |
| Avail Hours2 | 696,384 | | | hp12 cores excluded |
| Job Hours % | 37 | 1.5:1 | 26 | more realistic ... hp12 rarely used |

  * Some noise in this data from the inability to match the start and end of a job (~15% of records)
  * The assumption that ''...''
  * 939591860
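The start/end matching that produces this ~15% noise could look roughly like the following sketch; the record format and function name are assumptions, not the actual accounting code:

```python
# Hypothetical sketch: pair scheduler start/end records by job id to sum
# job hours; records with no matching partner are counted as noise.
def sum_job_hours(records):
    """records: iterable of (jobid, event, epoch_secs), event 'start'/'end'."""
    starts, hours, unmatched = {}, 0.0, 0
    for jobid, event, t in records:
        if event == "start":
            starts[jobid] = t
        elif jobid in starts:
            hours += (t - starts.pop(jobid)) / 3600.0
        else:
            unmatched += 1                 # end record with no start
    return hours, unmatched + len(starts)  # leftover starts are noise too

print(sum_job_hours([(1, "start", 0), (1, "end", 7200), (2, "end", 50)]))
# (2.0, 1)  - job 1 ran 2 hours, job 2's end record is unmatched noise
```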
**[[cluster:...]]**