  * 10 nodes with dual 12-core chips (Xeon Silver 4410Y CPU @ 3.9 GHz), Emerald Rapids Microway servers with a memory footprint of 256 GB each (2,560 GB total, about 90 teraflops dpfp). These nodes hold four RTX4070Ti-Super GPUs each. Known as the "desktop" racks. Queue ''mwgpu256'', nodes n108-n117.
  
  * 1 node with dual 12-core chips (Xeon Silver 4520, 2.40 GHz) in an ASUS ESC4000-E11 2U rack server with a memory footprint of 512 GB (about 4 teraflops dpfp). This node holds four NVIDIA RTX PRO™ 6000 Blackwell Max-Q Workstation Edition GPUs (96 GB memory footprint each) providing 440 teraflops (mixed mode). Known as the "Blackwell" GPU server, node n91, queue ''exx512'', 48 job slots. [[https://www.techpowerup.com/gpu-specs/rtx-pro-6000-blackwell-max-q.c4273]]
  
All queues are available for job submissions via the ''cottontail2'' login node. Some nodes are on Infiniband switches for parallel computational jobs (queues: mw256fd, hp12, mw256). Our total job slot count is roughly 2,672 with a physical core count of 1,336. Our total teraflops compute capacity is about 92 cpu side and 2,902 gpu side (mixed mode). Our total memory footprint is about 1,584 GB gpu side and 13,524 GB cpu side.
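
As a quick illustration, below is a minimal sketch of a Slurm batch script submitted from ''cottontail2''. It assumes the Slurm partition name matches the queue name listed on this page (here ''mwgpu256''); the GPU gres string, resource amounts, and the program name ''my_program'' are placeholders and may differ on this cluster.

<code bash>
#!/bin/bash
# Minimal Slurm job sketch for the mwgpu256 queue (assumed partition name).
#SBATCH --job-name=test_job
#SBATCH --partition=mwgpu256       # queue/partition from the table below
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G                  # well under the 256 GB per node
#SBATCH --gres=gpu:1               # request one GPU (gres name is an assumption)
#SBATCH --time=24:00:00
#SBATCH --output=%x_%j.out

# Report where the job landed and which GPU was assigned.
hostname
nvidia-smi

# Replace with your actual application command.
srun ./my_program
</code>

Submit with ''sbatch myjob.sh'' and monitor with ''squeue -u $USER''.
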
^ Queue ^ Nr of nodes ^ Memory (GB) ^ Job slots ^ Interconnect ^ Hosts ^ Type ^
|  mw256  |    |  256  |  672  | EDR infiniband  | n102-n102 |  CPU  |
|  mwgpu256  |  10  |  256  |  480  | gigabit ethernet  | n108-n117 |  GPU & CPU  |
|  exx512  |    |  512  |  48  | gigabit ethernet  | n91 |  GPU & CPU  |
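
To see how these queues look on the scheduler side, the standard Slurm query commands can be used from ''cottontail2''. This again assumes partition names match the queue names above; output is not shown here since limits and node states vary.

<code bash>
# List all Slurm partitions (queues) and their node states.
sinfo

# Node-level detail for one partition, e.g. the new exx512 queue.
sinfo -p exx512 --Node --long

# Configured limits (max time, job slots) for a partition.
scontrol show partition exx512
</code>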
  
Some guidelines for appropriate queue usage with detailed page links:
  * mwgpu256, contains 40 RTX4070Ti-Super GPUs
    * same setup as exx96 queue
  * exx512, contains 4 NVIDIA RTX PRO™ 6000 Blackwell Max-Q Workstation Edition GPUs
    * beware of preemption events, checkpoint your jobs! (see the sketch after this list)
    * priority access for Antonio's lab until 10/7/2028
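
Because jobs on ''exx512'' can be preempted, it helps to ask Slurm for a warning signal and write a checkpoint before the job is killed. The sketch below is a generic pattern, not site policy: the signal lead time, requeue behavior, gres string, and the checkpoint command are assumptions that depend on how preemption is configured here and on your application.

<code bash>
#!/bin/bash
# Sketch of a preemption-aware job for the exx512 queue (assumed partition name).
#SBATCH --partition=exx512
#SBATCH --gres=gpu:1               # gres name is an assumption
#SBATCH --time=48:00:00
#SBATCH --requeue                  # allow Slurm to requeue the job after preemption
#SBATCH --signal=B:USR1@300        # send SIGUSR1 to the batch shell 300s before the kill

# On SIGUSR1, write a checkpoint so the requeued job can resume.
checkpoint_and_exit() {
    echo "Caught preemption warning, checkpointing..."
    # Replace with your application's checkpoint mechanism.
    touch checkpoint.done
    exit 0
}
trap checkpoint_and_exit USR1

# Run the application in the background so the trap can fire while we wait.
srun ./my_program &
wait
</code>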
  
**NOTE**: we are migrating from Openlava to Slurm during summer 2022. All queues except ''hp12'' and ''mw256fd'' will be serviced by the ''cottontail2'' Slurm scheduler.
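
Until that migration is complete, the submission command depends on the queue: the two legacy queues are still driven by Openlava, everything else goes through Slurm. A rough illustration, assuming standard Slurm and Openlava/LSF client commands and placeholder script names:

<code bash>
# Slurm-scheduled queues (submit from cottontail2):
sbatch --partition=exx512 myjob.sh
squeue -u $USER

# Legacy Openlava queues hp12 and mw256fd (bsub reads the job script on stdin):
bsub -q hp12 < myjob.lsf
bjobs
</code>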