HPC Survey 2017

“High-Performance Computing,” or HPC, is the application of “supercomputers” to computational problems that are either too large for standard computers or would take too long on them. An HPC system typically consists of a system management server (SMS, also known as the login node or master node) and compute nodes. Designs differ, but most systems offer a high-speed network, large home directories, scratch space, archive space and a job scheduler. A provisioning application is used to reimage the compute nodes when needed.
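For readers unfamiliar with job schedulers: work is handed to the scheduler from the login node rather than run there directly. The sketch below is illustrative only and assumes a Torque/PBS-style qsub command; the job name, resource requests, and script path are hypothetical.

  # Illustrative only: submit a (hypothetical) batch script to a Torque/PBS-style
  # scheduler from the login node; the scheduler then places it on compute nodes.
  import subprocess

  subprocess.run([
      "qsub",
      "-N", "my_simulation",       # job name (made up)
      "-l", "nodes=2:ppn=8",       # request 2 compute nodes, 8 cores each
      "-l", "walltime=04:00:00",   # 4-hour run-time limit
      "run_simulation.sh",         # hypothetical batch script
  ], check=True)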

Please answer any General or Technical questions that apply to you or your group. The intent of this survey is to identify and properly size the components of our computational needs. The time horizon is “within the next year”.

General

Q: Regarding your primary computational needs, identify your discipline and/or department.

Q: In two sentences, identify your primary research area/project.

Q: If you do not currently use HPC, do you anticipate using it within the next three years?

Q: If you currently use HPC (local or remote), please give a brief description (e.g., I use Jetstream/Stampede/…).

Q: Does your department anticipate expanding or initiating computational activities in the curriculum, in recruiting, or in current research projects?

Q: Do you currently use, or anticipate needing, large storage capacity? What is the estimated size for your group three years from now?

Q: Do you currently use, or anticipate needing, high-speed interprocess communication (frequently described as running parallel programs across multiple compute nodes)?

Q: Are your typical compute-intensive applications CPU bound (in-memory computation only), IO bound (many reads and writes to storage), or both?

Q: Most applications perform floating-point operations; however, do you also use integer-based computing?

Q: How would you like to participate in the ongoing development of our HPCC (e.g., email/mailing list, meetings, named contact …)?

Q: Would you support some level of “contribution,” assessed quarterly based on your group's CPU usage, to fund HPC maintenance and build a budget for future use?

Technical

Q: What Linux distribution are you using or familiar with (e.g., CentOS, Red Hat, SUSE, Ubuntu …)?

Q: What scheduler are you using or familiar with (e.g., SGE, Torque, PBS, OpenLava …)?

Q: What commercial-grade compiler are you using or anticipate needing (e.g., Intel's icc/ifort, PGI's pgcc/pgfortran …)?

Q: What commercial software do you use or anticipate using (e.g., SAS, Stata, MATLAB, Mathematica, IDL/ENVI …)? How many concurrent licenses would your group need?

Q: What open-source software do you use or anticipate using (e.g., GROMACS, Amber, LAMMPS, R …)?

Q: For the two questions above: how important is it to you that your job can restart where it left off after a crash (versus starting over)?
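For context, “restarting where it left off” usually means the application periodically writes a checkpoint it can resume from. Below is a minimal sketch, assuming a hypothetical long-running loop whose only state is a step counter and a partial result; the file name checkpoint.pkl is made up for illustration.

  # Minimal checkpoint/restart sketch (illustrative only).
  # Assumes the job's entire state is (step, total); real codes save far more.
  import os
  import pickle

  CHECKPOINT = "checkpoint.pkl"    # hypothetical file name
  TOTAL_STEPS = 1_000_000

  # Resume from the last checkpoint if one exists, otherwise start fresh.
  if os.path.exists(CHECKPOINT):
      with open(CHECKPOINT, "rb") as f:
          step, total = pickle.load(f)
  else:
      step, total = 0, 0.0

  while step < TOTAL_STEPS:
      total += step * 0.5          # stand-in for real work
      step += 1
      if step % 10_000 == 0:       # periodically persist progress
          with open(CHECKPOINT, "wb") as f:
              pickle.dump((step, total), f)

  print("done:", total)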

Q: Do you run parallel programs using MPI across multiple compute nodes, or anticipate doing so? Which MPI flavor (e.g., OpenMPI, MVAPICH …)?
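As a point of reference, here is a minimal sketch of an MPI program whose processes may span several compute nodes, assuming mpi4py and an MPI launcher such as mpirun are available; the file name is hypothetical.

  # hello_mpi.py — minimal MPI sketch (illustrative only).
  # Each rank may land on a different compute node; run with, e.g.:
  #   mpirun -np 32 python hello_mpi.py
  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()           # this process's ID within the job
  size = comm.Get_size()           # total number of MPI processes

  # Sum the ranks across all processes as a stand-in for real communication.
  total = comm.allreduce(rank, op=MPI.SUM)

  if rank == 0:
      print(f"{size} ranks, sum of ranks = {total}")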

Q: Do you run forked programs (threads) confined to a single compute node? Which application (e.g., Gaussian, AutoDock …)?
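By contrast with MPI, a forked or threaded job uses only the cores of the one node it runs on. A minimal sketch using Python's multiprocessing module follows; the worker function and task range are illustrative.

  # Single-node parallelism sketch (illustrative only): one worker process per
  # core, all confined to the compute node the job is placed on.
  from multiprocessing import Pool, cpu_count

  def work(x):
      # stand-in for a real per-task computation
      return x * x

  if __name__ == "__main__":
      with Pool(processes=cpu_count()) as pool:
          results = pool.map(work, range(100))
      print("sum of squares:", sum(results))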

Q: For the two questions above: how many CPU cores/threads per job do you typically use, and what is the typical total memory requirement per CPU core/thread?

Q: For serial jobs (one program requiring one CPU core), what is the typical total memory requirement per job?

Q: For all the job types above, how many concurrent jobs do you run, and for how long per individual job, in a typical research month?

Q: Estimate the size of your home directory needs for actively used files, assuming archive space is available (e.g., 100G, 1T …).

Q: Estimate your archive space needs at the end of three years (e.g., 1T, 5T …).

Q: Estimate your large scratch space requirements (e.g., jobs that write 100–500G each).

Q: For home and scratch space areas, do you require a parallel, clustered file system or are simple NFS mounts sufficient?

Q: Give us any feedback we did not ask for.

