Soon (Feb/2014), we'll have to power down the Dell racks, grab one of the L6-30 circuits supplying power to those racks, and use it to power up the new Microway servers.
That leaves some spare L6-30 circuits (the Dell racks use 4 each), so we could contemplate grabbing two and powering up two more shelves of the Blue Sky Studio hardware. That would double the Hadoop cluster and the bss24 queue when needed (total of 100 job slots), and offer access to 1.2 TB of memory. This hardware is generally powered off when not in use.
The new Microway hardware is identical to the GPU-HPC hardware we bought previously, minus the GPUs. A total of 8 1U servers will be offered under the mw256fd queue name. It is to be used just like ehwfd (and bss24). /home and /sanscratch are served over IPoIB.

Queues:
More job slots per node are available on mw256fd than on mw256 (28), because on mw256 the GPUs also need access to cores (4 per node for now).

Gaussian:
#BSUB -n X (where X is equal to or less than the max jobs per node)
#BSUB -R "span[hosts=1]"
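A minimal Gaussian submission script built around these two directives might look like the following sketch. Only the queue name and the two #BSUB directives come from this page; the job name, file names, and scratch handling are illustrative assumptions.

```shell
#!/bin/bash
# Hypothetical Gaussian job for the mw256fd queue.
#BSUB -q mw256fd
#BSUB -n 8                    # X: at or below the max job slots per node
#BSUB -R "span[hosts=1]"      # keep all slots on a single node
#BSUB -J g09job
#BSUB -o g09job.%J.out
#BSUB -e g09job.%J.err

# Gaussian runs shared-memory parallel, so all cores must sit on one
# host; that is why span[hosts=1] is required.
export GAUSS_SCRDIR=/localscratch/$LSB_JOBID
mkdir -p "$GAUSS_SCRDIR"

g09 < input.com > output.log

rm -rf "$GAUSS_SCRDIR"
```

Pointing GAUSS_SCRDIR at the node-local scratch disk keeps Gaussian's read-write files off the IPoIB-served file systems.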
MPI:
On mw256fd, submit MPI jobs just like on hp12 or imw. On mwgpu you must use MVAPICH2 when running the GPU-enabled software (Amber, Gromacs, Lammps, Namd). On mw256 you may run either flavor of MPI with the appropriate binaries.

Scratch:
The mw256fd nodes sport a 15K hard disk and /localscratch is 175 GB (replacing the ehwfd functionality).

Savings:
Workshop:
Details will follow once mw256fd has been deployed.
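Putting the MPI and Scratch notes above together, an MPI job on mw256fd that stages its data through the fast /localscratch disk might be sketched as follows. The queue name, the 175 GB /localscratch, and the single-node directive come from this page; the slot count, module setup, binary, and file names are assumptions.

```shell
#!/bin/bash
# Hypothetical MPI job for the mw256fd queue, staging through the
# node-local 15K /localscratch disk (175 GB per this page).
#BSUB -q mw256fd
#BSUB -n 16
#BSUB -R "span[hosts=1]"
#BSUB -J mpijob
#BSUB -o mpijob.%J.out

# Stage inputs onto the fast local disk so the run does not hammer
# the IPoIB-served /home, run the solver, then copy results back.
MYSCRATCH=/localscratch/$LSB_JOBID
mkdir -p "$MYSCRATCH"
cp ~/project/input.dat "$MYSCRATCH"
cd "$MYSCRATCH"

# my_mpi_app is a placeholder for your MPI binary.
mpirun -np 16 ./my_mpi_app input.dat > output.dat

cp -p output.dat ~/project/
rm -rf "$MYSCRATCH"
```

The copy-in, compute, copy-out pattern is the same one /sanscratch-based jobs use; only the scratch location differs.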