
HP HPC

Notes for the cluster design conference with HP.

“Do later” means we tackle the item after the HP on-site visit.

S & H

Network

Basically …

We are planning to fold our Dell cluster (37 nodes) and our Blue Sky Studios cluster (130 nodes) into this setup as well, hence the addressing approach below.

The netmask is, finally, 255.255.0.0 (except on the public 129.133 subnet).
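
A minimal sketch (Python's standard ipaddress module) of why the /16 netmask matters here: with a /24, a node's data interface at 10.10.102.x could not reach the NetApp filer at 10.10.0.y without routing. The specific host addresses used are placeholders, not confirmed assignments.

import ipaddress

node_data_if = ipaddress.ip_interface("10.10.102.10/16")
filer = ipaddress.ip_address("10.10.0.50")   # hypothetical filer address

# True: under /16 the node and the filer share one network
print(filer in node_data_if.network)
# False: under /24 the filer would sit outside the node's subnet
print(filer in ipaddress.ip_network("10.10.102.0/24"))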


Update with the following: Hi Shanna, OK, I see that, so globally let's go with

eth0 192.168.102.x/255.255.0.0
eth1 10.10.102.x/255.255.0.0 (data, need to reach netapp filer at 10.10.0.y/255.255.0.0)
eth2 129.133.1.226 public (wesleyan.edu)
eth3 192.168.103.x/255.255.255.0 ipmi (or over eth0?)
eth4 192.168.104.x/255.255.255.0 ilo (or over eth0?)
ib0 10.11.103.x/255.255.255.0 ipoib (data)
ib1 10.11.104.x/255.255.255.0 ipoib (data, not used at the start)

where x=254 for the head node and x=10 (incrementing by 1) for nodes n1-n32

Does that work for you? I'm unsure how iLO/IPMI works, but it could run over eth0.

-Henk
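
A quick sketch that expands the addressing rule in the note above (x=254 for the head node, x=10 counting up for n1-n32) into per-host addresses. The interface names, subnets, and "n<N>" hostname scheme come from the note; eth2/eth3/eth4/ib1 are omitted. Illustrative only, not a deployed script.

import ipaddress

# subnets and prefixes as given above
PLAN = {
    "eth0": ("192.168.102.{x}", 16),   # management
    "eth1": ("10.10.102.{x}", 16),     # data, reaches the NetApp filer
    "ib0":  ("10.11.103.{x}", 24),     # IPoIB data
}

def host_octet(name):
    """head -> 254; n1 -> 10, n2 -> 11, ... (increment by 1)."""
    return 254 if name == "head" else 9 + int(name.lstrip("n"))

hosts = ["head"] + [f"n{i}" for i in range(1, 33)]
for host in hosts:
    x = host_octet(host)
    for iface, (tmpl, prefix) in PLAN.items():
        addr = ipaddress.ip_interface(f"{tmpl.format(x=x)}/{prefix}")
        print(f"{host:5s} {iface:5s} {addr.with_netmask}")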


InfiniBand

HP Link

Configuration, fine-tuning, identifying bottlenecks, monitoring, administration. Investigate Voltaire UFM?
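
A minimal monitoring sketch, not a UFM replacement: it shells out to ibstat (from the standard infiniband-diags/OFED tools, assumed to be installed on the nodes) and flags any port whose state is not Active or whose physical state is not LinkUp. Exact ibstat output formatting varies between OFED releases, so treat the parsing as a best guess.

import subprocess

def check_ib_ports():
    """Return ibstat lines that indicate a degraded or down port."""
    out = subprocess.run(["ibstat"], capture_output=True, text=True, check=True).stdout
    problems = []
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("State:") and "Active" not in line:
            problems.append(line)
        if line.startswith("Physical state:") and "LinkUp" not in line:
            problems.append(line)
    return problems

if __name__ == "__main__":
    bad = check_ib_ports()
    if bad:
        print("degraded IB ports found:")
        for line in bad:
            print("  " + line)
    else:
        print("all IB ports report Active / LinkUp")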

DL380 G7

HP Link (head node)
External link: video about the hardware

StorageWorks MSA60

HP Link (storage device)

SL2x170z G6

HP Link (compute nodes)

Misc

Other

ToDo

All are “do later” items, to be tackled after the HP cluster is up.

