cluster:93 [2011/01/07 20:52] hmeij
  * A single queue is preferred.
  * Data (NFS) was to be served up via a secondary gigabit ethernet switch, so that it does not compete with administrative traffic.
  * (With the HP solution we will actually route data (NFS) traffic over the InfiniBand switch, a practice called IPoIB.)
  * Linux or CentOS as the operating system.
  * Flexible on scheduler (options: Lava, LSF, Sun Grid Engine).
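The IPoIB point above can be sketched roughly as follows. This is a minimal illustration, not the actual configuration: the interface name (''ib0''), addresses, and export path are all assumptions.

```shell
# Hypothetical sketch of NFS over IPoIB (interface, addresses, and paths assumed).
# Bring up an IP interface on the InfiniBand HCA (here ib0):
ifconfig ib0 10.10.0.10 netmask 255.255.255.0 up

# Mount home directories from the file server's IPoIB address, so that
# NFS traffic rides the InfiniBand fabric instead of the ethernet switch:
mount -t nfs 10.10.0.1:/home /home
```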
===== Performance =====
+ | |||
+ | During the scheduled power outage of December 28th, 2010, some benchmarks were performed on old and new clusters. | ||
+ | |||
+ | In short using linpack (More about [[http:// | ||
+ | |||
+ | * greentail' | ||
+ | * petaltail/ | ||
+ | * petaltail/ | ||
+ | * so the total is 570 gigaflops, but you'd never want to run across both switches simultaneously | ||
+ | * sharptail' | ||
+ | * never got quite all the nodes working together, not sure why | ||
+ | |||
===== Home Dirs =====