Differences

This shows you the differences between two versions of the page.

cluster:125 [2014/02/25 14:25] hmeij [What May Also Change?]
cluster:125 [2014/02/26 15:32] (current) hmeij [What Changes?]
Line 35: Line 35:
 #BSUB -R "rusage[mem=X]"
 </code>
 +
 +  * How do I find out how much memory I'm using? ssh node_name top -u your_name -b -n 1
  
 Gaussian:
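
As an illustration of the ''rusage'' request and the memory check in the hunk above, here is a minimal submit-script sketch. It is only a sketch: the queue, job name, program, node name ''n33'', and the 2048 MB figure are placeholders, not values taken from this page.

<code>
#!/bin/bash
# hypothetical job reserving 2048 MB of memory (the X in the rusage line above)
#BSUB -q mw256fd
#BSUB -J memtest
#BSUB -o memtest.%J.out
#BSUB -R "rusage[mem=2048]"

./myprogram
</code>

While the job runs, a single batch iteration of ''top'' on the execution node shows the actual memory footprint:

<code>
ssh n33 top -u your_name -b -n 1
</code>
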
Line 51: Line 53:
   * You can use the new queue ''mw256fd'' just like ''hp12'' or ''imw''
   * For parallel programs you may use OpenMPI or MVAPICH; use the appropriate wrapper scripts to set up the environment for mpirun
 +    * On ''mw256'' you may run either flavor of MPI with the appropriate binaries.
   * On ''mwgpu'' you must use MVAPICH2 when running the GPU-enabled software (Amber, Gromacs, Lammps, Namd).
-  * On ''mw256'' you may run either flavor of MPI with the appropriate binaries.
  
 Scratch:
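
To make the wrapper-script bullet above more concrete, here is a sketch of a parallel submit script for ''mw256fd''. The MPI path, core count, and program name are placeholders; on the real cluster the site wrapper scripts (not shown on this page) would set up the environment and host list for ''mpirun''.

<code>
#!/bin/bash
#BSUB -q mw256fd
#BSUB -n 8
#BSUB -J mpitest
#BSUB -o mpitest.%J.out

# placeholder: select one MPI flavor (OpenMPI or MVAPICH2) by putting it first on PATH,
# or call the cluster's wrapper script here instead; the wrapper normally builds
# the machinefile from LSB_HOSTS
export PATH=/share/apps/openmpi/bin:$PATH

mpirun -np 8 ./my_parallel_program
</code>
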
Line 75: Line 78:
  
   * if there is no ‘sharing’ required, the hyper-threaded node performs the same (that is, the operating system presents 16 cores but only up to 8 jobs are allowed to run, let's say by limiting the JL/H parameter of the queue)
-  if there is ‘sharing’ jobs take a 44% speed penalty, however more of them can run, twice as many
+  * if there is ‘sharing’, jobs take a 44% speed penalty; however, twice as many of them can run
  
 So it appears that we could turn hyperthreading on and, despite the nodes presenting 16 cores, limit the number of jobs to 8 until the need arises to run many small jobs, then reset the limit to 16.
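
As a sketch of how such a per-host limit could be expressed, assuming the JL/H column corresponds to the ''HJOB_LIMIT'' parameter in ''lsb.queues'' (the queue name and description below are illustrative only):

<code>
Begin Queue
QUEUE_NAME   = mw256
DESCRIPTION  = hyper-threaded nodes presenting 16 logical cores
# JL/H: allow at most 8 jobs per host for now;
# raise to 16 when many small jobs need to run
HJOB_LIMIT   = 8
End Queue
</code>

After changing the limit, something like ''badmin reconfig'' would be needed for the scheduler to pick up the new value; ''bqueues'' then shows it in the JL/H column.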