==== Newest Configuration ====
  
The Academic High Performance Compute Cluster consists of two login nodes (greentail and swallowtail, both Dell PowerEdge 2050s). The old login node petaltail (Dell PowerEdge 2950) can be used for testing code; it does not matter if it crashes, since its primary duty is backing up to a physical tape library.

Three types of compute nodes are available via the Lava scheduler (they can be inspected with the scheduler commands sketched after this list):
  
  * 36 nodes with dual quad-core (Xeon 5620, 2.4 GHz) sockets in HP blades (SL2x170z G6) with memory footprints of 12 GB each, all on infiniband (QDR) interconnects. 288 job slots. Total memory footprint of these nodes is 384 GB. This cluster has been measured at 1.5 teraflops (using Linpack).
  * 32 nodes with dual quad-core (Xeon 5345, 2.3 GHz) sockets in Dell PowerEdge 1950 rack servers with memory footprints ranging from 8 GB to 16 GB. 256 job slots. Total memory footprint of these nodes is 340 GB. Only 16 nodes are on infiniband (SDR) interconnects; the rest are on gigabit ethernet switches. This cluster has been measured at 665 gigaflops (using Linpack).
  * 45 nodes with dual single-core AMD Opteron Model 250 (2.4 GHz) sockets with memory footprints of 24 GB each. 90 job slots. Total memory footprint of the cluster is 1.1 TB. This cluster has an estimated capacity of 500-700 gigaflops.
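For orientation, the queues and hosts backing these node classes can be listed with the standard Lava/LSF query commands from either login node. This is a minimal sketch assuming the stock command set; only "bss24" is named on this page, so other queue names have to be looked up this way.

<code bash>
# list all queues the scheduler knows about, with their job slot counts
bqueues

# list the compute hosts and their current slot/load status
bhosts

# detailed view of the by-request "bss24" queue mentioned below
bqueues -l bss24
</code>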
  
All queues are available for job submission via the login nodes greentail and swallowtail; both nodes service all queues. The total number of job slots is now 634, of which 380 are on infiniband switches for parallel computational jobs. In addition, queue "bss24" consists of 90 job slots (45 nodes) and can provide access to 1.1 TB of memory; it is turned on by request (the nodes are power inefficient).
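Jobs are submitted to these queues with bsub from either login node. Below is a minimal sketch of a Lava/LSF batch script, assuming the stock bsub interface; the queue name, job name, slot count, and the program ./myprog are hypothetical placeholders rather than site defaults.

<code bash>
#!/bin/bash
# minimal Lava/LSF job script sketch -- queue and program names are placeholders
#BSUB -q some_queue          # target queue (for example the by-request "bss24" queue)
#BSUB -n 8                   # request 8 job slots
#BSUB -J test_job            # job name
#BSUB -o test_job.%J.out     # stdout file (%J expands to the job id)
#BSUB -e test_job.%J.err     # stderr file

# the program itself; ./myprog is a hypothetical executable
./myprog

# submit from greentail or swallowtail with:  bsub < this_script.sh
</code>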
  
The home directory file system is provided (via NFS or IPoIB) by the login node "greentail" from a direct-attached disk array. In total, 10 TB of /home disk space is accessible to users, plus 5 TB of scratch space at /sanscratch. In addition, every node provides a small /localscratch area (about 50 GB) on its local internal disk for jobs that need file locking. Backup services are provided via disk-to-disk snapshot copies on the same array.
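To make the storage layout concrete, a typical job keeps permanent files in /home, does its heavy I/O in a per-job directory under /sanscratch, and uses /localscratch only when file locking is needed. The sketch below assumes that pattern; the per-job directory naming (based on the scheduler's $LSB_JOBID variable) is illustrative, not a documented site convention.

<code bash>
# create a per-job working directory in the shared scratch space
# ($LSB_JOBID is set by the scheduler; the layout itself is illustrative)
MYSCRATCH=/sanscratch/$LSB_JOBID
mkdir -p $MYSCRATCH

# stage input from /home, run, and copy results back
cp ~/project/input.dat $MYSCRATCH/
cd $MYSCRATCH
./myprog input.dat > output.dat     # ./myprog is a hypothetical executable
cp output.dat ~/project/

# clean up the scratch directory when done
rm -rf $MYSCRATCH
</code>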
  
==== New Configuration ====