NewsBytes for Jan 2020 (cluster:191, 2020/01/24, hmeij07)

https://dokuwiki.wesleyan.edu/doku.php?id=cluster:190
The HPCC has invested in a new solution for our Home Directories file server. The TrueNAS/ZFS solution selected is described here: [[cluster:186|Home Dir Server]]. We will implement it with very large user quotas. The storage is 190 TB usable with inline compression (475 TB effective usable if a compression ratio of 2.5x is achieved; scalable to 1.2 PB raw). Other features include: unlimited snapshots (point-in-time restores), read cache SSD, write cache SSD, self-healing (checksums verified on reads and writes, and on a schedule), RAIDZ2 protection, and high availability (dual controllers). We will not implement de-duplication; we may add replication in the future. This will take a long time to deploy. \\
https://dokuwiki.wesleyan.edu/doku.php?id=cluster:186
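As a rough sketch of the capacity arithmetic above (the 190 TB usable and 2.5x ratio come from the paragraph; the function name is ours, not part of TrueNAS):

```python
# Estimate effective usable capacity of a pool with inline compression.
# Effective capacity is simply usable capacity times the achieved ratio;
# the actual ratio depends entirely on how compressible the data is.

def effective_capacity_tb(usable_tb: float, compression_ratio: float) -> float:
    """Usable capacity (TB) scaled by the achieved compression ratio."""
    return usable_tb * compression_ratio

usable_tb = 190    # TB usable after RAIDZ2 overhead
ratio = 2.5        # hoped-for inline compression ratio
print(effective_capacity_tb(usable_tb, ratio))  # 475.0 TB effective
```

If the data compresses less well (say 1.5x), effective capacity drops to 285 TB, which is why the quotas are set against usable rather than effective space.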
The HPCC has also invested in more GPU and CPU compute capacity. At the time of this writing, 12 nodes are crossing Iowa from CA, headed our way: a total of 48 GPUs (model RTX 2080S, with 384 GB of GPU memory) and 24 CPUs (228 physical cores, with 1,152 GB of memory). Details of the selection process can be found here: [[cluster:184|Turing/Volta/Pascal]]\\
https://dokuwiki.wesleyan.edu/doku.php?id=cluster:184
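A back-of-envelope per-node breakdown, assuming the totals in the paragraph above are spread evenly across the 12 nodes (the per-node figures are derived, not stated in the text):

```python
# Per-node breakdown of the new hardware, derived from the stated totals.
nodes = 12
total_gpus = 48
total_cpus = 24       # sockets
total_cores = 228     # physical cores
total_mem_gb = 1152

print(total_gpus // nodes)    # 4 GPUs per node
print(total_cpus // nodes)    # 2 CPU sockets per node
print(total_mem_gb // nodes)  # 96 GB memory per node
print(total_cores / nodes)    # 19.0 physical cores per node
```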
Lots of work! Lots to learn!
\\
**[[cluster:0|Back]]**