Jobs Pending Historic

First, some interesting progress graphs from our report to the provost.

Report

cluster-support-july2014.pdf

Total Jobs Submitted

Just because I keep track =): the 2 millionth job milestone was reached in July 2013.

A picture of our total job slot availability and the cumulative total of jobs processed.

"Date","Type","TSlots","TJobs","CPU Teraflops"
06/01/2007,total,240,0,0.7
06/01/2008,total,240,100000,0.7
05/01/2009,total,240,200000,0.7
03/01/2011,total,496,1000000,2.2
11/01/2012,total,598,1500000,2.5
07/01/2013,total,598,2000000,2.5
06/01/2014,total,790,2320000,7.1
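
For reference, the progress picture can be regenerated from the data above. The snippet below is a minimal sketch only, assuming the CSV block is saved as jobs.csv and gnuplot is available; neither the filename nor the tool reflects how the original graph was produced.

# Illustrative only: re-plot cumulative jobs and total job slots over time from jobs.csv
gnuplot <<'EOF'
set datafile separator ","
set key autotitle columnhead      # treat the first CSV row as labels, not data
set xdata time
set timefmt "%m/%d/%Y"
set format x "%Y"
set ylabel "Cumulative jobs"
set y2label "Job slots"
set ytics nomirror
set y2tics
set terminal png size 800,500
set output "jobs-progress.png"
plot "jobs.csv" using 1:4 with linespoints axes x1y1 title "Total jobs", \
     "jobs.csv" using 1:3 with linespoints axes x1y2 title "Total slots"
EOF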

Accounts

Hardware

This is a brief summary of the current configuration. A more detailed version can be found in the Brief Guide to HPCC write-up.

Software

An extensive list of the installed software is detailed at Software. Some highlights:

Publications

A summary of articles that have used the HPCC (we need to work on this!)

Expansion

Our main problem is flexibility. Cores are fixed per node, so even one or a few small-memory jobs running on the Microway nodes idle large chunks of memory. To provide a more flexible environment, virtualization would be the solution: create small, medium and large memory templates, then clone nodes from the templates as needed, and recycle the nodes when they are no longer needed to free up resources. This would also enable us to serve up other operating systems if needed (SUSE, Ubuntu, Windows).
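
As a rough illustration of that workflow, here is a minimal sketch assuming a libvirt/KVM hypervisor and a pre-built medium-memory template named vnode-medium-template; both the tooling and the names are hypothetical, since no hypervisor has been chosen yet.

# Hypothetical sketch: clone a virtual node from a medium-memory template,
# run it, and later recycle it to free the underlying resources.
# Assumes a libvirt/KVM host; template and node names are made up.
virt-clone --original vnode-medium-template --name vnode-m01 --auto-clone
virsh start vnode-m01        # boot the cloned node so it can join the cluster

# ... node runs jobs under the scheduler ...

# Recycle the node when it is no longer needed
virsh destroy vnode-m01                           # hard power-off
virsh undefine vnode-m01 --remove-all-storage     # drop the definition and its disk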

Several options are available to explore:

These options would require sufficiently sized hardware that can then be logically presented as virtual nodes (with virtual CPUs, virtual disks and virtual networks on board).
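
For example, presenting a small-memory virtual node with its own virtual CPUs, disk and network on such a host could look roughly like this; again a sketch only, assuming libvirt/KVM with virt-install, where all names, sizes and the bridge are placeholders.

# Hypothetical sketch: define a small-memory virtual node on a large physical host.
# Assumes libvirt/KVM with virt-install; names, sizes and the bridge are placeholders.
virt-install \
    --name vnode-s01 \
    --vcpus 4 \
    --memory 16384 \
    --disk size=40 \
    --network bridge=br0 \
    --pxe \
    --os-variant rhel6 \
    --graphics none --noautoconsole
# --vcpus and --memory size the virtual CPU and RAM (in MiB), --disk creates a
# 40 GB virtual disk in the default storage pool, --network attaches a virtual
# NIC to the cluster bridge, and --pxe lets the node provision itself over the network.

For reference, the bhosts and lsload snapshots below show current slot and memory usage on nodes n33-n45.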

[root@sharptail ~]# for i in `seq 33 45`; do  bhosts -w n$i| grep -v HOST; done
HOST_NAME          STATUS       JL/U    MAX  NJOBS    RUN  SSUSP  USUSP    RSV
n33                ok                 -     32     20     20      0      0      0
n34                ok                 -     32     21     21      0      0      0
n35                ok                 -     32     28     28      0      0      0
n36                ok                 -     32     28     28      0      0      0
n37                ok                 -     32     20     20      0      0      0
n38                closed_Adm         -     32     16     16      0      0      0
n39                closed_Adm         -     32     20     20      0      0      0
n40                ok                 -     32     23     23      0      0      0
n41                ok                 -     32     23     23      0      0      0
n42                ok                 -     32     28     28      0      0      0
n43                ok                 -     32     23     23      0      0      0
n44                ok                 -     32     25     25      0      0      0
n45                ok                 -     32     23     23      0      0      0

[root@sharptail ~]# for i in `seq 33 45`; do  lsload n$i| grep -v HOST; done
HOST_NAME       status  r15s   r1m  r15m   ut    pg  ls    it   tmp   swp   mem
n33                 ok  23.0  22.3  21.9  50%   0.0   0 2e+08   72G   31G  247G
n34                 ok  22.8  22.2  22.1  43%   0.0   0 2e+08   72G   31G  247G
n35                 ok  29.8  29.7  29.7  67%   0.0   0 2e+08   72G   31G  246G
n36                 ok  29.2  29.1  28.9  74%   0.0   0 2e+08   72G   31G  247G
n37                -ok  20.9  20.7  20.6  56%   0.0   0 2e+08   72G   31G  248G
n38                -ok  16.0  10.9   4.9  50%   0.0   0 2e+08 9400M   32G  237G
n39                -ok  20.4  21.1  22.1  63%   0.0   0 2e+08 9296M   32G  211G
n40                 ok  23.0  23.0  22.8  76%   0.0   0 2e+08 9400M   32G  226G
n41                 ok  23.0  22.4  22.2  72%   0.0   0 2e+08 9408M   32G  226G
n42                 ok  23.3  23.5  23.1  70%   0.0   0 2e+08 9392M   32G  236G
n43                 ok  22.8  22.8  22.7  65%   0.0   0 2e+08 9360M   32G  173G
n44                 ok  25.1  25.1  25.0  78%   0.0   0 2e+08 9400M   32G  190G
n45                 ok  23.0  22.9  22.6  64%   0.0   0 2e+08 9400M   32G  226G

Costs

Here is a rough listing of the costs the HPCC generates and who pays each bill. Acquisition costs have so far been covered by faculty grants and the Dell hardware/Energy savings project.

