| \\ | \\ | ||
**[[cluster:...]]**
| + | |||
| + | ''/ | ||
| + | |||
| + | --- // | ||
==== HomeDir & Storage Options ====

The HPCC cluster's home directories are located in the ''/home'' file system, served by our file server ''sharptail'':
  * All users are under quota, which automatically increases as usage grows.
  * When a user consumes 1024 GB (1 TB) the automatic increases stop.
  * this home file system is backed up twice a month from sharptail's disk array
  * nightly snapshots (point-in-time backups) are made on sharptail's disk array and stored there too

At this point users need to offload static content to other locations:
  * Keep contents out of /home and migrate them to /archives (7 TB)
  * request a directory for yourself in this file system and move contents to it
  * this archive file system is backed up twice a month from sharptail's disk array
  * Users with home directories of 500+ GB in size should start considering moving data to /archives (a quick way to check your usage is sketched below)
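
The sketch below assumes standard Linux tools (''du'', ''quota'') are available on the node you are logged in to; the directory names are placeholders.

<code>
# total size of my home directory (may take a while on large directories)
du -sh $HOME

# total size of my /archives directory, if I have one
du -sh /archives/username

# report my quota limits, if user quotas are queryable on this node
quota -s
</code>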

Users who are considered inactive have their home directories relocated to /...

  * these accounts are kept around until we do an account edit and purge (which has never happened so far)

If your storage needs cannot be supported by /archives, the remaining option is off-cluster storage. Rstore is our latest storage solution for groups and labs with such needs.
| + | |||
| + | | ||
| + | * then move your static content permanently off the HPCC cluster environment | ||
| + | * details can be found at [[cluster: | ||
| + | |||
==== Moving Content ====

Our file server is named ''sharptail''; both ''/home'' and ''/archives'' are local file systems on this server.

Do not use any type of copy tool with a GUI, nor cp/scp or s/ftp. Especially avoid the GUI (drag & drop) tools. Use ''rsync'' instead.

**Check it out:**

| + | * '' | ||
| + | * is the server busy ('' | ||
| + | * is there memory available ('' | ||
| + | * is anybody else using rsync ('' | ||
| + | * is the server busy writing ('' | ||
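
For example, once logged in to the file server you could run something along these lines; the exact commands are suggestions (common Linux tools), not a prescribed list.

<code>
# load averages: is the server busy?
uptime

# how much memory is free or cached (in GB)?
free -g

# is anybody else already running rsync?
ps aux | grep [r]sync

# is the server busy writing? watch the bi/bo and wa columns over a few samples
vmstat 5 5
</code>
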
| + | |||
| + | Three scenarios are depicted below. When crossing the vertical boundaries you are not dealing with local content anymore, thus the content needs to flow over the network. '' | ||
| + | |||
| + | < | ||
| + | |||
| + | | / | ||
| + | some lab location | ||
| + | < | ||
| + | some other college | ||
| + | | / | ||
| + | |||
| + | </ | ||
| + | |||
| + | **Some feature examples** | ||
| + | |||
  * preserve permissions, ownerships and timestamps
    * ''-a'' (archive mode)
  * delete files on destination not present on source (careful!)
    * ''--delete''
  * throttle the rate of traffic generated, make your sysadmin happy, use <5000 (KB/s)
    * ''--bwlimit=2500''
  * do not look inside files
    * ''--whole-file''
  * use a remote shell from host to host (crossing those vertical boundaries above)
    * ''-e ssh''
| + | |||
| + | Note the use of trailing slashes, it means update everything inside source '' | ||
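
To make the trailing slash behavior concrete (directory and user names below are made up):

<code>
# with a trailing slash on the source, the contents of stuff/ land directly in /archives/username/stuff/
rsync -vac /home/username/tmp/stuff/ /archives/username/stuff/

# without it, rsync creates /archives/username/stuff/stuff/ and copies into that
rsync -vac /home/username/tmp/stuff  /archives/username/stuff/
</code>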
| + | |||
| + | ** Putting it all together ** | ||
| + | |||
| + | < | ||
| + | |||
| + | # copy the dir stuff from lab or remote college to my home on HPCC in tmp area | ||
| + | # (first log in to remote location) | ||
| + | |||
| + | rsync -vac --bwlimit=2500 --whole-files / | ||
| + | |||
| + | # sync my HPCC dir stuff folder into /archives locally on sharptail, then clean up | ||
| + | # (first log in to sharptail) | ||
| + | |||
| + | rsync -vac --bwlimit=2500 / | ||
| + | rm -rf / | ||
| + | |||
| + | # generate a copy of content on Rstore disk array outside of HPCC but within wesleyan.edu | ||
| + | # (get paths and share names from faculty member, on sharptail do) | ||
| + | |||
| + | rsync -vac --bwlimit=2500 / | ||
| + | |||
| + | # you can also do this in reverse, log in to sharptail first | ||
| + | |||
| + | rsync -vac --bwlimt=2500 user@rstoresrv0.wesleyan.edu:/ | ||
| + | |||
| + | </ | ||
| \\ | \\ | ||
