cluster:159 [2017/03/29 19:39] (current) hmeij07
===== HPC Survey 2017 =====
| - | " | + | " |
| - | networks, large home directories, | + | |
| - | Please answer any General or Technical questions that apply to you or your group. The intent of this survey is to properly size and identify components our computational | + | Please answer any **General** and/or **Technical** questions that apply to you and/or your group. The intent of this survey is to properly size and identify components our computational |
**General**
Q: Regarding your primary computational needs, identify your discipline and/or department.
Q: In two sentences ...
Q: If you do not currently use HPC, do you anticipate using it within the next three years?
Q: If you currently use HPC (local or remote), please give a brief description (ie I use Jetstream/...).
Q: Does your Department anticipate expanding/...
Q: Do you currently use or anticipate the need for large storage capacity? What size do you estimate needing three years from now for your group?
Q: Do you currently use or anticipate the need for high interprocess communications, ...
Q: Are your typical compute-intensive applications CPU bound (in-memory computations only), IO bound (performing lots of reads and writes to storage), or both?
| + | |||
| + | Q: Will be you be anticipating, | ||
Q: Most applications will be floating point operations; however, do you use integer-based computing?
Q: How would you like to participate in ongoing developments of our HPC (ie email/list, meetings, named contact ...)?
Q: Would you support some level of periodic "contribution" based on your group's ...?
**Technical**
Q: What Linux distributions are you using or familiar with?
Q: What scheduler are you using or familiar with (ie SGE, Torque, PBS, Openlava ...)?
Q: What commercial grade compiler are you using or anticipate needing (ie Intel's ...)?
Q: What commercial software do you use or anticipate using (ie SAS, Stata, Matlab, Mathematica, ...)?
Q: What open source software do you use or anticipate using (ie Gromacs, Amber, Lammps, R ...)?
Q: For the two questions above: how important is it to you that your job can restart where it left off when a crash happens (versus starting over)?
Q: Do you run parallel programs using MPI across multiple compute nodes, or anticipate doing so?
Q: Do you run forked programs (threads) confined to a single compute node? Which application (ie Gaussian, Autodock ...)?
Q: For the two questions above: typically, how many cpu cores or threads per job, and what are the typical total memory requirements per cpu core or thread (or total per job)?
Q: For serial jobs (one program requiring a single cpu core) ...
Q: For all types of jobs listed above: how many concurrent jobs do you run, and for how long per individual job, in a typical research month?
Q: Estimate the size of your home directory needs for actively used files, assuming archive space is available (ie 100G, 1T ...).
Q: Estimate your archive space needs at the end of three years (ie 1T, 5T ...).
Q: Estimate your large scratch space requirements (ie jobs that write 100-500G per job).
Q: For home and scratch space areas, do you require a parallel, clustered file system or are simple NFS mounts sufficient?
**Optional**

A: Give us any feedback we did not ask for or forgot to ask m(
