
This outdated page has been replaced by the Brief Guide to HPCC.

Our Queues

An update on our queues … — Meij, Henk 2013/09/10 14:43

Queue       | Nr Of Nodes | GB Mem Per Node | Total Cores In Queue | Switch           | Hosts                            | Notes
matlab      | n/a         | n/a             | n/a                  | either           | any host in hp12, elw, emw, imw  | max jobs 'per user' or 'per host' is 8
stata       | n/a         | n/a             | n/a                  | either           | any host in hp12, elw, emw, imw  | max jobs 'per user' or 'per host' is 6
hp12        | 32          | 12              | 256                  | infiniband       | n1-n32                           |
elw         | 7           | 4               | 56                   | gigabit ethernet | c19 c20 c21 c22 c23              | petal & swallow tails
emw         | 4           | 8               | 32                   | gigabit ethernet | c24 c25 c26 c27                  |
ehw         | 4           | 16              | 32                   | gigabit ethernet | c17 c28 c29 c31                  |
ehwfd       | 4           | 16              | 32                   | gigabit ethernet | c18 c32 c33 c35                  |
imw         | 16          | 8               | 128                  | infiniband       | c00-c15                          |
bss24       | 45          | 24              | 90                   | gigabit ethernet | b1-b45                           | only b21-b45 deployed now (Sep 2013)
mathematica |             |                 | 64                   | infiniband       | licensed hosts in hp12           |
mw256       | 5           | 256             | 120                  | infiniband       | n33-n35                          |
mwgpu       | 5           | 256             | 20                   | infiniband       | n33-n35                          | same nodes as above
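Jobs are routed to these queues through the batch scheduler. A minimal submission script might look like the following sketch, assuming an LSF-style scheduler with `bsub` (the script name, job name, slot count, and program are illustrative; only the queue name comes from the table above):

```shell
#!/bin/bash
# Hypothetical submission script for the large-memory queue.
#BSUB -q mw256          # target queue, taken from the table above
#BSUB -n 8              # request 8 job slots (cores); illustrative value
#BSUB -J bigmem_test    # job name (illustrative)
#BSUB -o out.%J         # stdout file; %J expands to the job id
#BSUB -e err.%J         # stderr file

./my_program            # placeholder: replace with your actual workload
```

Such a script would typically be submitted with `bsub < myscript.sh`; use `bqueues` to list the queues shown in the table.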
  • matlab and stata have license seat limits; please use these queues with that software
  • ehw is primarily used by Gaussian users needing 16 GB of memory
  • ehwfd is primarily used by Gaussian users who also need fast local disk space (/localscratch is 230 GB)
  • bss24 is primarily used by group weirlab and is shut off when not in use
    • it is also our Hadoop cluster (access via head node whitetail)
  • imw is primarily used by Amber users, but if it is empty feel free to use it
  • emw & elw have no primary user base, nor does hp12 (which offers infiniband)
  • mathematica runs on the hosts of queue hp12 that are licensed (limit of 32)
  • mw256 is for jobs requiring large memory (up to 28 job slots per node, up to 230 GB per node)
  • mwgpu is for GPU-enabled software only (Amber, Lammps, NAMD, Gromacs, Matlab, Mathematica)
    • be sure to reserve one job slot (CPU core) for each GPU used
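The last note says to reserve one job slot per GPU on mwgpu. A sketch of what that could look like, again assuming an LSF-style scheduler with `bsub` (the slot count matches one GPU; the `CUDA_VISIBLE_DEVICES` convention and the program name are assumptions, not taken from this page):

```shell
#!/bin/bash
# Hypothetical mwgpu submission: one job slot reserved for the one GPU used.
#BSUB -q mwgpu          # GPU queue from the table above
#BSUB -n 1              # reserve one job slot (cpu core) per GPU, per the note
#BSUB -o out.%J         # stdout file; %J expands to the job id
#BSUB -e err.%J         # stderr file

# Restricting the job to a single GPU is commonly done via the CUDA
# convention below; shown here as an assumption, not site-specific advice.
export CUDA_VISIBLE_DEVICES=0

./my_gpu_program        # placeholder: replace with your GPU-enabled workload
```

If a job used two GPUs, the same pattern would request `-n 2` so that slots and GPUs stay matched one-to-one.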



cluster/108.txt · Last modified: 2014/02/21 10:25 by hmeij