\\
**[[cluster:0|Back]]**
===== Scratch Spaces =====
  
We have several locations for scratch space: some are local to the nodes, others are mounted across the network. Here is the current setup as of August 2019.
  
  * **/localscratch**
  
  * **/sanscratch**
    * 55 TB file system, mounted via IPoIB using NFS or over plain Ethernet
      * greentail52 is the file server
      * /sanscratch/username/ can be used for staging (this is not backed up!)
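As a rough sketch, a job might stage its files through the shared scratch space like this. The per-user directory layout follows the bullet above; the fallback to /tmp is only there so the sketch stays runnable on machines without /sanscratch, and the file names are illustrative:

```shell
#!/bin/sh
# Sketch: stage job files through shared scratch (not backed up!).
# On the cluster the base would be /sanscratch/$USER; fall back to a
# directory under /tmp so this sketch also runs off-cluster.
BASE=/sanscratch/${USER:-$(id -un)}
[ -d /sanscratch ] && [ -w /sanscratch ] || BASE="${TMPDIR:-/tmp}/sanscratch-demo"

STAGE_DIR="$BASE/job_$$"                     # one unique directory per job
mkdir -p "$STAGE_DIR"
echo "demo input" > "$STAGE_DIR/input.dat"   # stand-in for copying real inputs

# ... run the job against $STAGE_DIR here ...

# Scratch is not backed up: copy results back home first, then clean up.
rm -rf "$STAGE_DIR"
```

Using a unique per-job directory keeps concurrent jobs from clobbering each other's files on the shared file system.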
  * **/localscratch5tb**
    * 5 TB file system provided by local drives (3x2TB, Raid 0) on each node in the ''mw256fd'' queue
    * The list of nodes done: n38-n45, all are done (10sep15)

  * **/localscratch**
    * 2 TB file system on nodes in queue ''mw128'' (n60-n77)

  * **/localscratch**
    * ~800 GB file system on nodes in queue ''exx96'' (n79-n90), on NVMe SSD

  
48 TB of local scratch space will be made available in 6 TB chunks on the nodes in the ''mw256fd'' queue. That yields 5 TB of local scratch space per node using Raid 0 and the ''ext4'' file system, mounted at /localscratch5tb. Everybody may use it, but it has been put in place specifically for Gaussian jobs that generate massive RWF files (application scratch files).
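For Gaussian specifically, scratch placement is controlled by the standard ''GAUSS_SCRDIR'' environment variable. A minimal sketch of pointing it at /localscratch5tb from a job script follows; the per-job subdirectory naming is an assumption (not site policy), and the /tmp fallback only keeps the sketch runnable off-cluster:

```shell
#!/bin/sh
# Sketch: send Gaussian RWF/scratch files to the 5 TB local scratch.
# GAUSS_SCRDIR is Gaussian's standard scratch-directory variable.
BASE=/localscratch5tb
[ -d "$BASE" ] && [ -w "$BASE" ] || BASE="${TMPDIR:-/tmp}"   # off-cluster fallback

GAUSS_SCRDIR="$BASE/${USER:-$(id -un)}/g_job_$$"   # per-job subdir (illustrative)
export GAUSS_SCRDIR
mkdir -p "$GAUSS_SCRDIR"

# g16 < input.com > output.log   # run Gaussian here (on an mw256fd node)

rm -rf "$GAUSS_SCRDIR"           # RWF files can be huge; always clean up
```

Because the scratch is local to each node, the job must run on the same node where the directory was created, and cleaning up at the end keeps the 5 TB file system from filling with orphaned RWF files.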
cluster/142.1575914700.txt.gz ยท Last modified: 2019/12/09 13:05 by hmeij07