We're on the cusp of a new era!

Solutions other than the one described below:

  * Amax 4U/288 cores [[http://www.amax.com/hpc/product.asp?value=High%20Density%20/%20Performance]]
  * Microway 2U/144 cores [[http://www.microway.com/products/hpc-clusters/high-performance-computing-with-intel-xeon-hpc-clusters/]]
==== Ideas ====
  
  * 28 blades, 112 nodes, 4 nodes per blade (112 nodes x 8 cores = 896 cores total), each node with
    * 1x Atom C2750 8-core 2.4 GHz chip
    * up to 32 GB RAM (4 GB per core, way above what's needed)
    * 1x 2.5" disk
  * Virtual Media Over LAN (Virtual USB Floppy / CD and Drive Redirection)
    * With that many nodes, /home would probably not be mounted
    * So users would probably have to stage job data in /localscratch/JOBPID
    * ... via scp from a target host; a minimal staging sketch follows below
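
A minimal sketch of that staging step, assuming the scheduler exports the job id in a ''JOBPID'' environment variable (under Slurm the equivalent is ''SLURM_JOB_ID''); the host name, user, and data path are hypothetical placeholders:

<code python>
import os
import subprocess

# Hypothetical placeholders: adjust host, user, and path for the real site.
SOURCE_HOST = "login.example.org"        # the target host that holds the job data
JOB_ID = os.environ.get("JOBPID", "0")   # assumes the scheduler exports JOBPID
SCRATCH = "/localscratch/" + JOB_ID      # per-job scratch dir, as described above

# Create the per-job scratch directory on the local disk.
os.makedirs(SCRATCH, exist_ok=True)

# Pull the input data over scp, since /home is not mounted on these nodes.
subprocess.run(
    ["scp", "-r", SOURCE_HOST + ":/home/username/myjob/input", SCRATCH],
    check=True,
)
</code>

The reverse copy, results back to the target host, would run the same way at the end of the job.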
  
  
And then we need something that can handle tens of thousands of jobs if we acquire such a dense core platform.
  
Enter [[https://computing.llnl.gov/linux/slurm/|Slurm]], which, according to its web site, "can sustain a throughput rate of over 120,000 jobs per hour".
  
Now we're talking.
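
To see how a throughput like that gets used, here is a sketch of submitting 10,000 small jobs in one shot with a Slurm job array (''sbatch --array'' is a real Slurm feature; the script path and array size are made up for illustration):

<code python>
import subprocess

# One sbatch call expands server-side into 10,000 array tasks,
# which is how very high job counts stay manageable for users.
result = subprocess.run(
    ["sbatch", "--array=1-10000", "/home/username/myjob/job.sh"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "Submitted batch job 12345"
</code>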
Notes on Slurm are at [[cluster:134|High Core Count - Low Memory Footprint]].
  
==== Problem ====