\\
**[[cluster:0|Back]]**

===== Scratch Spaces =====
  
We have different locations for scratch space. Some are local to the nodes, some are mounted across the network. Here is the current setup as of August 2019.
  
  * **/localscratch**
    * Local to each node, different sizes, roughly 50-80 GB
    * Warning: nodes n46-n59 have no hard disk, only a SataDOM (a USB device plugged directly into the system board, 16 GB in size, holding just the OS). Do not use /localscratch on these nodes.
  
  * **/sanscratch**
    * 55 TB file system mounted via NFS over IPoIB or plain Ethernet
      * greentail52 is the file server
      * /sanscratch/username/ can be used for staging (this is not backed up!)
      * /sanscratch/checkpoints/JOBPID is for checkpoint files (you need to create this directory in your job; see the sketch below)
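
A minimal job-script sketch of how the checkpoint directory could be created and removed. The scheduler variable ''$LSB_JOBID'' (an LSF-style job id) and the application command are assumptions for illustration, not prescribed by this page.

<code bash>
#!/bin/bash
# Sketch: per-job checkpoint directory on /sanscratch (not backed up)
# Assumes an LSF-style scheduler that sets $LSB_JOBID; substitute your scheduler's job id variable.
JOBPID=$LSB_JOBID
CKPT=/sanscratch/checkpoints/$JOBPID

mkdir -p "$CKPT"                     # you must create this directory yourself

# run your application, writing checkpoint files into $CKPT
# my_app --checkpoint-dir "$CKPT" input.dat

# copy anything worth keeping back home before the job ends, then clean up
# cp "$CKPT"/restart.* "$HOME"/project/
rm -rf "$CKPT"
</code>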
  
  
  * **/localscratch5tb**
    * 5 TB file system provided by local drives (3x2TB, Raid 0) on each node in the ''mw256fd'' queue
    * The list of nodes done: n38-n45, all are done (10sep15)

  * **/localscratch**
    * 2 TB file system on nodes in queue ''mw128'' (n60-n77)

  * **/localscratch**
    * ~800 GB file system on NVMe SSD on nodes in queue ''exx96'' (n79-n90)
  
48 TB of local scratch space will be made available in 6 TB chunks on the nodes in the queue ''mw256fd''. That yields 5 TB of local scratch space per node using Raid 0 and file type ''ext4'', mounted at /localscratch5tb. Everybody may use this but it has specifically been put in place for Gaussian jobs yielding massive RWF files (application scratch files).
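
A minimal sketch of how a Gaussian job might use this space: Gaussian reads the ''GAUSS_SCRDIR'' environment variable for its scratch location (where the RWF files are written). The directory naming, the ''g09'' command, and the input/output file names are placeholders.

<code bash>
#!/bin/bash
# Sketch: point Gaussian scratch (RWF) files at the node-local 5 TB file system
export GAUSS_SCRDIR=/localscratch5tb/$USER/$$   # per-job subdirectory, $$ = shell PID
mkdir -p "$GAUSS_SCRDIR"

g09 < input.com > output.log                    # or g16, depending on the installed version

rm -rf "$GAUSS_SCRDIR"                          # RWF files can be huge; clean up when done
</code>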
  
**Note: Everybody is welcome to store content in ''/localscratch5tb/username/'' for easy job access of large data files unless it interferes with jobs. However, be warned that a) it's local storage, b) it's Raid 0 (one disk failure and all data is lost), c) it's like /tmp, read and write permission for all (do ''chmod go-rwx /localscratch5tb/username'' for some protection; see the sketch below), and d) this file system is not backed up. In addition, ''/sanscratch/username/'' will also be allowed.**
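
A short sketch of creating and protecting a personal directory there; ''$USER'' expands to your own login name.

<code bash>
# Sketch: set up a personal directory on a node's /localscratch5tb and restrict access
mkdir -p /localscratch5tb/$USER
chmod go-rwx /localscratch5tb/$USER   # remove group/other read, write, and execute permission
</code>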
    
  