\\
**[[cluster:0|Back]]**

===== Scratch Spaces =====
  
We have different locations for scratch space. Some are local to the nodes, some are mounted across the network. Here is the current setup as of August 2019.
  
  * **/localscratch**
    * Local to each node, different sizes roughly around 50-80 GB
    * Warning: on nodes n46-n59 there is no hard disk, only a SataDOM (a USB device plugged directly into the system board, 16 GB in size, holding just the OS). Do not use /localscratch on these nodes (a quick free-space check is sketched below).
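Before pointing a job at /localscratch it can help to verify that the node actually has usable local disk. A minimal Python sketch of such a check follows; the 20 GB threshold and the fallback advice are illustrative assumptions, not site policy.

<code python>
import shutil
import sys

# Minimal sketch: skip /localscratch when a node has little or no local disk
# (e.g. the SataDOM nodes n46-n59, whose 16 GB device only holds the OS).
MIN_FREE_GB = 20  # illustrative threshold, not a site policy

try:
    free_gb = shutil.disk_usage("/localscratch").free / 1024**3
except FileNotFoundError:
    sys.exit("/localscratch does not exist on this node; use /sanscratch instead")

if free_gb < MIN_FREE_GB:
    sys.exit(f"/localscratch has only {free_gb:.1f} GB free; use /sanscratch instead")

print(f"/localscratch looks usable: {free_gb:.1f} GB free")
</code>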
  
  * **/sanscratch**
    * 55 TB file system mounted via NFS over IPoIB or plain Ethernet
      * greentail52 is the file server
      * /sanscratch/username/ can be used for staging (this is not backed up!)
      * /sanscratch/checkpoints/JOBPID is for checkpoint files (you need to create this directory in your job; see the sketch below)
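A minimal Python sketch of creating both /sanscratch areas from inside a job. The job id environment variable (''LSB_JOBID'' here) is an assumption; substitute whatever your scheduler actually exports.

<code python>
import os
from pathlib import Path

# Minimal sketch, assuming the scheduler exports the job id as LSB_JOBID;
# adjust the variable name to whatever your scheduler actually provides.
user = os.environ["USER"]
jobid = os.environ.get("LSB_JOBID", "interactive")

staging = Path("/sanscratch") / user                   # staging area, not backed up
checkpoints = Path("/sanscratch/checkpoints") / jobid  # you must create this yourself

staging.mkdir(parents=True, exist_ok=True)
checkpoints.mkdir(parents=True, exist_ok=True)

print(f"staging in {staging}, checkpoints in {checkpoints}")
</code>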
  
  
  * **/localscratch5tb**
    * 5 TB file system provided by local drives (3x2TB, Raid 0) on each node in the ''mw256fd'' queue
    * All nodes in this queue (n38-n45) have been converted (10sep15)

  * **/localscratch**
    * 2 TB file system on nodes in queue ''mw128'' (n60-n77)

  * **/localscratch**
    * ~800 GB NVMe SSD file system on nodes in queue ''exx96'' (n79-n90)
  
48 TB of local scratch space will be made available in 6 TB chunks on the nodes in the queue ''mw256fd''. That yields 5 TB of local scratch space per node using Raid 0 and file type ''ext4'', mounted at /localscratch5tb. Everybody may use this, but it has specifically been put in place for Gaussian jobs that generate massive RWF files (application scratch files); a per-job setup is sketched below.
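As an illustration, here is a minimal Python wrapper that gives a Gaussian run a private scratch directory on /localscratch5tb and removes it afterwards. ''GAUSS_SCRDIR'' is Gaussian's scratch-directory variable; the job id variable (''LSB_JOBID''), the executable name (''g16'') and the input file name are placeholders.

<code python>
import os
import shutil
import subprocess
from pathlib import Path

# Minimal sketch: put Gaussian's RWF (scratch) files on the 5 TB Raid 0 volume.
# LSB_JOBID, g16 and input.com are placeholders for this example.
jobid = os.environ.get("LSB_JOBID", str(os.getpid()))
scratch = Path("/localscratch5tb") / os.environ["USER"] / jobid
scratch.mkdir(parents=True, exist_ok=True)

env = dict(os.environ, GAUSS_SCRDIR=str(scratch))  # tell Gaussian where to write scratch
try:
    subprocess.run(["g16", "input.com"], env=env, check=True)
finally:
    shutil.rmtree(scratch, ignore_errors=True)  # clean up the large RWF files
</code>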