cluster:95 [DokuWiki]


Differences

This shows you the differences between two versions of the page.

cluster:95 [2011/10/07 15:31]
hmeij
cluster:95 [2012/11/08 17:46]
hmeij
Line 6:
 The Academic Compute Cluster comprises two login nodes (greentail and swallowtail). Three types of compute nodes are available via the Lava scheduler:
  
-  * 36 nodes with dual quad-core (Xeon 5620, 2.4 GHz) sockets in HP blades (SL2x170z G6) with memory footprints of 12 GB each, all on infiniband (QDR) interconnects. Total memory footprint of these nodes is 384 GB. This cluster has been measured at 1.5 teraflops (using Linpack)
+  * 36 nodes with dual quad-core (Xeon 5620, 2.4 GHz) sockets in HP blades (SL2x170z G6) with memory footprints of 12 GB each, all on infiniband (QDR) interconnects. 288 job slots. Total memory footprint of these nodes is 384 GB. This cluster has been measured at 1.5 teraflops (using Linpack)
-  * 32 nodes with dual quad-core (Xeon 5345, 2.3 GHz) sockets in Dell PowerEdge 1950 rack servers with memory footprints ranging from 8 GB to 16 GB. Total memory footprint of these nodes is 340 GB. Only 16 nodes are on infiniband (SDR) interconnects; the rest are on gigabit ethernet switches. This cluster has been measured at 665 gigaflops (using Linpack)
+  * 32 nodes with dual quad-core (Xeon 5345, 2.3 GHz) sockets in Dell PowerEdge 1950 rack servers with memory footprints ranging from 8 GB to 16 GB. 256 job slots. Total memory footprint of these nodes is 340 GB. Only 16 nodes are on infiniband (SDR) interconnects; the rest are on gigabit ethernet switches. This cluster has been measured at 665 gigaflops (using Linpack)
-  * 45 nodes with dual single-core AMD Opteron Model 250 (2.4 GHz) sockets and a memory footprint of 24 GB each. Total memory footprint of the cluster is 1.1 TB. This cluster has an estimated capacity of 500-700 gigaflops.
+  * 45 nodes with dual single-core AMD Opteron Model 250 (2.4 GHz) sockets and a memory footprint of 24 GB each. 90 job slots. Total memory footprint of the cluster is 1.1 TB. This cluster has an estimated capacity of 500-700 gigaflops.
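
The job-slot counts in the list above follow from sockets per node times cores per socket times node count; a quick Python sanity check of that arithmetic:

```python
# Job slots per hardware group = nodes * (sockets per node * cores per socket).
hp_blades = 36 * (2 * 4)   # dual quad-core Xeon 5620 blades
dell_1950 = 32 * (2 * 4)   # dual quad-core Xeon 5345 rack servers
bss24     = 45 * (2 * 1)   # dual single-core Opteron 250 nodes (queue "bss24")

print(hp_blades, dell_1950, bss24)     # 288 256 90
print(hp_blades + dell_1950 + bss24)   # 634
```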
  
-All queues are available for job submission via the login nodes greentail and swallowtail; both nodes service all queues. Our total job slot count is now 638, of which 384 are on infiniband switches for parallel computational jobs. In addition, queue "bss24" consists of an additional 90 job slots (45 nodes), which can provide access to 1 TB of memory; it is turned on by request (the nodes are power inefficient).
+All queues are available for job submission via the login nodes greentail and swallowtail; both nodes service all queues. Our total job slot count is now 634, of which 380 are on infiniband switches for parallel computational jobs. In addition, queue "bss24" consists of 90 job slots (45 nodes), which can provide access to 1.1 TB of memory; it is turned on by request (the nodes are power inefficient).
  
 Home directory file systems are provided (via NFS or IPoIB) by the login node "greentail" from a direct-attached disk array. In total, 10 TB of disk space is accessible to users. Backup services are provided via disk-to-disk snapshot copies on the same array.
cluster/95.txt · Last modified: 2013/07/24 11:00 by hmeij