Update — Henk 2021/02/12 14:27
For CUDA_ARCH (or nvcc -arch) versions, check the Matching CUDA arch and CUDA gencode for various NVIDIA architectures web page: “When you compile CUDA code, you should always compile only one ‘-arch‘ flag that matches your most used GPU cards. This will enable faster runtime, because code generation will occur during compilation.” All Turing GPU models (RTX 2080, RTX 5000 and RTX 6000) use CUDA_ARCH sm_75. The first is consumer grade, the latter two are enterprise grade; see the performance differences below. The consumer-grade RTX 3060 Ti is CUDA_ARCH sm_86 (Ampere).
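As a quick sanity check on the arch flags, here is a minimal CUDA sketch (hypothetical, not taken from the page above) that reports which architecture the device code was actually built for; the compile lines assume the sm_75/sm_86 values quoted above.

```cuda
// arch_check.cu -- prints the architecture the device code was compiled for.
// Hypothetical example; compile with the arch that matches the card, e.g.:
//   nvcc -arch=sm_75 arch_check.cu -o arch_check   (Turing: RTX 2080 / RTX 5000 / RTX 6000)
//   nvcc -arch=sm_86 arch_check.cu -o arch_check   (Ampere: RTX 3060 Ti)
#include <cstdio>

__global__ void report_arch()
{
#ifdef __CUDA_ARCH__
    // __CUDA_ARCH__ is 750 for sm_75, 860 for sm_86, etc.
    printf("device code compiled for sm_%d\n", __CUDA_ARCH__ / 10);
#endif
}

int main()
{
    report_arch<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
```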
A detailed review and comparison of GeForce GPUs, including the Quadro RTX 5000 and RTX 2080 (Ti and S), can be found at the NVIDIA Quadro RTX 5000 Review The Balanced Quadro GPU website. Deep-learning-oriented performance results covering most of the applicable precision modes (INT8, FP16, FP32) are on page 6.
| | VendorB1 | Notes | VendorA1 | VendorA2 |
|---|---|---|---|---|
| | Head Node | incl switches | Head Node | Head Node |
| Rack | 1U | | 1U | same |
| Power | 1+1 | 208V | 1+1 | same |
| Nic | 2x1G+4x10G | +PCI | 4x10G | same |
| Rails | 25 | | 25-33 | same |
| CPU | 2x6226R | Gold | 2x5222 | same |
| cores | 2x16 | Physical | 2x4 | same |
| ghz | 2.9 | | 3.8 | same |
| ddr4 | 192 | gb | 96 | same |
| hdd | 2x480G | ssd (raid1) | 2x960 | same |
| centos | 8 | yes | 8 | same |
| OpenHPC | yes | “best effort” | no | same |
| | GPU Compute Node | | GPU Compute Node | GPU Compute Node |
| Rack | 2U | | 4U | same |
| Power | 1 | 208V | 1+1 | same |
| Nic | 2x1G+2x10G | +PCI | 2x10G | same |
| Rails | ? | | 26-36 | same |
| CPU | 2x4214R | Silver | 2x4214R | same |
| cores | 2x12 | Physical | 2x12 | same |
| ghz | 2.4 | | 2.4 | same |
| ddr4 | 192 | gb | 192 | same |
| hdd | 480G | ssd, sata | 2T | same |
| centos | 8 | with gpu drivers, toolkit | 8 | same |
| GPU | 4x(RTX 5000) | active cooling | 4x(RTX 5000) | 4x(RTX 6000) |
| gddr6 | 16 | gb | 16 | 24 |
| Switch | 1x(8+1) | ← add self spare! | 2x(16+2) | same |
| S&H | tbd | | tbd | tbd |
| Δ | -5 | target budget $k | -2.8 | +1.5 |
From NVIDIA's GeForce forums web site:

| | Quadro RTX 5000 | RTX 2080 |
|---|---|---|
| GDDR6 (effective) | 14000 MHz | 14000 MHz |
| Memory | 16 GB | 8 GB |
| ROPs | 64 | 64 |
| TMUs | 192 | 184 |
| Shaders | 3072 | 2944 |
| Base / average boost clock | 1350 / 1730 MHz | 1515 / 1710 MHz |
| Tensor cores | 384 | 368 |
| RT cores | 48 | 46 |
| Pixel rate | 110.7 GPixel/s | 109.4 GPixel/s |
| Texture rate | 332.2 GTexel/s | 314.6 GTexel/s |
| FP16 (half) performance | 166.1 GFLOPS (1:64) | 157.3 GFLOPS (1:64) |
| FP32 (float) performance | 10,629 GFLOPS | 10,068 GFLOPS |
| FP64 (double) performance | 332.2 GFLOPS (1:32) | 314.6 GFLOPS (1:32) |
The next step in the evolution of our HPCC platform involves a new primary login node (from cottontail to cottontail2, to be purchased in early 2021) with a migration to the OpenHPC platform and the Slurm scheduler. The proposals below are for one head node plus two compute nodes for a test-and-learn setup, with vastly different compute nodes so that Slurm resource discovery and allocation can be tested, along with the scheduler's Fairshare policy. It is also a chance to test out the A100 GPU.
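To make the resource-discovery and Fairshare testing concrete, a minimal sketch of what the Slurm side might look like is below (node names, memory sizes and priority weights are assumptions for illustration, not vendor-supplied values):

```
# gres.conf (sketch) -- tell Slurm which GPU devices each test node exposes
NodeName=gputest01 Name=gpu Type=rtx5000 File=/dev/nvidia[0-3]
NodeName=gputest02 Name=gpu Type=a100 File=/dev/nvidia0

# slurm.conf (sketch) -- two deliberately different compute nodes plus fairshare
GresTypes=gpu
NodeName=gputest01 CPUs=24 RealMemory=191000 Gres=gpu:rtx5000:4 State=UNKNOWN
NodeName=gputest02 CPUs=20 RealMemory=191000 Gres=gpu:a100:1 State=UNKNOWN
PartitionName=test Nodes=gputest0[1-2] Default=YES MaxTime=INFINITE State=UP

# Fairshare comes from the multifactor priority plugin and needs slurmdbd accounting
PriorityType=priority/multifactor
PriorityWeightFairshare=100000
PriorityDecayHalfLife=7-0
AccountingStorageType=accounting_storage/slurmdbd
```

Submitting against the two different Gres types (e.g. --gres=gpu:a100:1) would exercise exactly the discovery, allocation and fairshare behavior we want to test.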
We are switching to an RJ45 10GBase-T network in this migration, and adopting CentOS 8 (possibly the Stream version as events unfold … CentOS Stream or Rocky Linux).
Whoooo! Check this out https://almalinux.org/
We are also sticking to a single private network for scheduler and home directory traffic, at 10G, for each node in the new environment. The second 10G interface (onboot=no) could be brought up for future use in some scenario, perhaps with a second switch for network redundancy. Keeping the 192.168.x.x private network for openlava/warewulf6 traffic and the 10.10.x.x private network for slurm/warewulf8 traffic avoids conflicts.
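For reference, the dormant second 10G port would just carry an ifcfg file with ONBOOT=no on CentOS 8; a minimal sketch (the interface name is hypothetical, and no address is assigned until we decide what the port is for):

```
# /etc/sysconfig/network-scripts/ifcfg-ens2f1 (sketch; interface name is hypothetical)
DEVICE=ens2f1
NAME=ens2f1
TYPE=Ethernet
BOOTPROTO=none
# leave the port down; flip to yes (and add IPADDR/PREFIX) if it is ever put to use
ONBOOT=no
```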
The storage network is on 1G; we wonder if we could upgrade it later as the 10G network grows (the options were 6x1G or 4x10G). Or we move to 10G by adding a replication partner in three years and switching roles between the TrueNAS/ZFS units. (In the meantime, LACP could aggregate the 6x1G into 3x2G bonds.)
Lots of old compute nodes will remain on the 1G network. Maybe the newest hardware (nodes n79-n90 with RTX2080S GPUs) could be upgraded to 10G using PCI cards?
| | VendorA | VendorB | VendorC | Notes |
|---|---|---|---|---|
| Head Node | | | | |
| Rack | 1U | 1U | 1U | |
| Power | 1+1 | 1+1 | 1+1 | 208V |
| Nic | 4x10GB | 2x1G,2x10G | 4x10G | B:4x10G on PCI? |
| Rails | 26-33 | 25 | ? | |
| CPU | 2x5222 | 2x6226R | 2x5222 | Gold, Gold, Gold |
| cores | 2x4 | 2x16 | 2x4 | Physical |
| ghz | 3.8 | 2.9 | 3.8 | |
| ddr4 | 96 | 192 | 96 | gb |
| hdd | 2x960G | 2x480G | 2x480 | ssd, ssd, ssd (raid1) |
| centos | 8 | 8 | no | |
| OpenHPC | no | yes | no | y=“best effort” |
| CPU Compute Node | | | | |
| Rack | 1U | 2U | 1U | |
| Power | 1+1 | 1 | 1+1 | 208V |
| Nic | 2x10G | 2x1G,2x10G | 2x10G | B:4x10G on PCI? |
| Rails | 26-33 | ? | ? | |
| CPU | 2x6226R | 2x6226R | 2x6226R | Gold, Gold, Gold |
| cores | 2x16 | 2x16 | 2x16 | Physical |
| ghz | 2.9 | 2.9 | 2.9 | |
| ddr4 | 192 | 192 | 192 | gb |
| hdd | 2T | 480G | 2x2T | sata, ssd, sata |
| centos | 8 | 8 | no | |
| CPU-GPU Compute Node | | | | |
| Rack | 4U | 2U | 1U | |
| Power | 1+1 | 1 | 1+1 | 208V |
| Nic | 2x10G | 2x1G,2x10G | 2x10G | B:4x10G on PCI? |
| Rails | 26-36 | ? | ? | |
| CPU | 2x4210R | 2x4214R | 2x4210R | Silver, Silver, Silver |
| cores | 2x10 | 2x12 | 2x10 | Physical |
| ghz | 2.4 | 2.4 | 2.4 | |
| ddr4 | 192 | 192 | 192 | gb |
| hdd | 2T | 480G | 2x2T | sata, ssd, sata |
| centos | 8 | 8 | 8 | with gpu drivers, toolkit |
| GPU | 1xA100 | 1xA100 | 1xA100 | can hold 4, passive |
| hbm2 | 40 | 40 | 40 | gb memory |
| mig | yes | yes | yes | up to 7 vgpus |
| sdk | ? | - | - | |
| ngc | ? | - | - | |
| Switch | add! | 8+1 | 16+2 | NEED 2 OF THEM? |
| S&H | incl | tbd | tbd | |
| Δ | +2.4 | +4.4 | +1.6 | target budget $k |
GFLOPS = #chassis * #nodes/chassis * #sockets/node * #cores/socket * GHz/core * FLOPs/cycle
Note that using the clock rate in GHz yields theoretical peak performance in GFLOPS. Divide GFLOPS by 1000 to get TeraFLOPS (TFLOPS).
http://en.community.dell.com/techcenter/high-performance-computing/w/wiki/2329
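As a rough worked example (assuming the 6226R's two AVX-512 FMA units, i.e. 32 double-precision FLOPs per cycle, and ignoring AVX clock throttling), one of the quoted CPU compute nodes comes out to:

GFLOPS = 1 * 1 * 2 * 16 * 2.9 * 32 = 2969.6, or roughly 3 TFLOPS per node.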