cluster:142 [2015/08/10 19:02] hmeij
cluster:142 [2020/02/27 13:59] (current) hmeij07
\\
**[[cluster:

===== Scratch Spaces =====
We have different locations for scratch space: some are local to the nodes, some are mounted across the network. Here is the current setup as of August
  * **/
    * Local to each node, different sizes, roughly 50-80 GB
    * Warning: on nodes n46-n59 there is no hard disk but a SataDOM (usb device
  * **/
    * 55 TB file system
    * greentail52 is the file server
    * /
    * /
  * **/
    * 5 TB file system provided by local drives (3x2TB, Raid 0) on each node in the ''
    * The list of nodes done: n38-n45, all are done (10sep15)

  * **/
    * 2 TB file system on nodes in queue ''

  * **/
    * ~800 GB file system on nodes in queue ''
48 TB of local scratch space will be made available in 6 TB chunks on the nodes in the queue ''
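Before staging large data into any of these areas, it can help to verify what is actually mounted on the node you land on and how much space is free. A minimal sketch; ''/tmp'' is only a stand-in path here, substitute the scratch mount point for your queue from the list above:

```shell
# Show size, used, and available space for a scratch location.
# /tmp is a placeholder -- replace it with your queue's scratch path.
df -h /tmp

# Roughly how much is already stored inside it (errors from
# unreadable files are discarded):
du -sh /tmp 2>/dev/null
```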

**Note: Everybody is welcome to store content in ''/

You need to change your working directory to the location the scheduler has made for you. Also save your output before the job terminates; the scheduler will remove that working directory. Here is the workflow...
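The steps above can be sketched as a job script. Everything in this sketch is an assumption for illustration: the environment variable names (''LSB_JOBID'', ''LS_SUBCWD'', in the style of an LSF-like scheduler) and the ''/tmp''-based scratch path are placeholders, so adapt them to the actual scheduler and scratch location on your cluster:

```shell
#!/bin/bash
# Hypothetical sketch of the scratch workflow: work inside the
# per-job scratch directory, then copy results home BEFORE the job
# ends, because the scheduler removes that directory at termination.

JOBID=${LSB_JOBID:-$$}        # job id from the scheduler; PID for a dry run
SUBMITDIR=${LS_SUBCWD:-$PWD}  # directory the job was submitted from
SCRATCH=/tmp/$JOBID           # placeholder; use your queue's scratch path

mkdir -p "$SCRATCH"           # the scheduler normally creates this for you

# 1. change into the per-job scratch directory and stage input there
cd "$SCRATCH"

# 2. run your application here, writing output into $SCRATCH
echo "results" > output.dat

# 3. save output back before the job terminates
cp -p output.dat "$SUBMITDIR"/

cd "$SUBMITDIR"
rm -rf "$SCRATCH"             # normally done by the scheduler's cleanup
```

The essential point is step 3: any file still in the scratch directory when the job ends is deleted, so the copy back to the submit directory must be part of the job script itself.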