cluster:149 — last revised 2016/12/06 by hmeij07
==== The Storage Problem ====
In a commodity HPC setup deploying plain NFS, bottlenecks can develop.
That leaves 2 of the 4 SFP+ ports, which we can step down from 40G/10G Ethernet via two X6558-R6 cables connecting SFP+ to SFP+ compatible ports. Meaning, hopefully, we can go from the FAS2554 to our Netgear GS724TS or GS752TS 1G Ethernet SFP Link/ACT ports. That would hook up 192.168.x.x and 10.10.x.x to the FAS2554. We need, and have, 4 of these ports available on each switch; once the connection is made, /home can be pNFS mounted across.
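Once the links are up, the pNFS mount itself is just an NFSv4.1 mount on the clients. A minimal sketch, assuming the filer answers on 10.10.102.200 and exports /home (the address and export path are assumptions, not confirmed values):

```shell
# Mount /home over NFSv4.1; vers=4.1 is what enables pNFS layouts
# 10.10.102.200 and the export path are placeholders from the plan above
mount -t nfs -o vers=4.1,proto=tcp 10.10.102.200:/home /home

# Verify the client actually negotiated v4.1
nfsstat -m | grep vers=4.1
```

If the grep comes back empty the client fell back to plain NFSv4/v3 and is not using pNFS.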
Suggestion:
Then ports e0a/e0b, the green RJ45 ports to the right, connect to our core switches (public and private) to move content from and to the FAS2554 (to the research labs, for example). Then we do it again for the second controller.
So we'd have, let's call the whole thing "
  * 192.168.102.200/
  * 10.10.102.200/
  * 129.133.22.200/
  * 129.133.52.200/
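One easy way to keep the four filer addresses straight on the nodes is an /etc/hosts fragment, one name per network (the hostnames here are hypothetical, only the addresses come from the list above):

```shell
# /etc/hosts fragment -- one name per filer interface (names are hypothetical)
192.168.102.200  hpcfiler-priv      # private cluster network
10.10.102.200    hpcfiler-data      # data/IPoIB network
129.133.22.200   hpcfiler-eth3      # public core switch (wesleyan.edu)
129.133.52.200   hpcfiler-eth4      # wesleyan.local core switch
```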
Then configure the //second// controller.
Q: Can we bond hpcfiler01-eth3.wesleyan.edu and hpcfiler02-eth3.wesleyan.edu together to their core switches? (Same question for eth4's wesleyan.local.)
A: No, you cannot bond across controllers.
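Within a single host, though, bonding is straightforward on the Linux side; a minimal active-backup sketch with iproute2 (the interface names eth3/eth4 are assumptions, and a RHEL 6-era box would use ifcfg-bond0 files instead):

```shell
# Create bond0 and enslave two NICs (names are hypothetical)
ip link add bond0 type bond mode active-backup
ip link set eth3 down; ip link set eth3 master bond0
ip link set eth4 down; ip link set eth4 master bond0
ip link set bond0 up

# Inspect the bond state and the currently active slave
cat /proc/net/bonding/bond0
```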
Awaiting word from the engineers on whether I got all this right.
It's worth noting that 5 of these integrated storage servers fit the price tag of a single NetApp FAS2554 (the 51T version). So, you could buy 5 and split /home into home1 through home5. At 200T, everybody can get as much disk space as needed. Distribute your heavy users across the 5. Mount everything up via IPoIB and round-robin the snapshots, as in, server home2 snapshots home1, etc.
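The round-robin snapshot idea (home2 snapshots home1, and so on around the ring) can be sketched with rsync hard-link rotation; hostnames and paths are assumptions, not the actual layout:

```shell
#!/bin/bash
# Round-robin snapshot sketch: this server (home2) snapshots its neighbor (home1).
# Hostnames and paths are hypothetical. --link-dest hard-links unchanged files
# against the previous snapshot, so each dated copy only costs the changed data.
SRC=home1:/export/home1/
DST=/export/snapshots/home1
TODAY=$(date +%Y%m%d)

rsync -a --delete --link-dest=$DST/latest $SRC $DST/$TODAY \
  && ln -sfn $DST/$TODAY $DST/latest
```

Run from cron nightly; each server points its SRC at the next server in the ring.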
Elegant, simple, and you can start smaller and scale up. We have room for 2 on the QDR Mellanox switch (and 2 to 5 on the DDR Voltaire switch). Buying another QDR Mellanox 18-port switch adds $7K. IPoIB would be desired if we stay with Supermicro.
What's even more desirable is to start our own parallel file system with [[http://

**Short term plan**
  * Grab the 32x2T flexstorage hard drives and insert into cottontail'
  * Makes for a 60T raw raid 6 storage place (2 hot spares)
  * Move the sharptail /snapshots to it (removes that traffic from the file server)
  * Dedicate greentail'
  * Remove /
  * Extend /sanscratch from 27T to 37T
  * Dedicate sharptail'
  * Keep the old 5T /sanscratch as backup, idle
  * Remove the 15T /
  * Extend /home from 10T to 25T
  * Keep the 7T /archives until those users graduate, then move to Rstore
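The 60T figure in the first bullet checks out: 32 drives minus 2 hot spares leaves 30 x 2T = 60T raw, and raid6 gives up two drives' worth to parity, so roughly 56T usable before formatting overhead. A quick sanity check:

```shell
# Sanity-check the raid6 sizing from the plan above
DRIVES=32; SPARES=2; SIZE_T=2
ACTIVE=$((DRIVES - SPARES))            # 30 drives in the raid6 set
RAW=$((ACTIVE * SIZE_T))               # raw capacity in T
USABLE=$(( (ACTIVE - 2) * SIZE_T ))    # raid6 loses 2 drives to parity
echo "raw=${RAW}T usable=${USABLE}T"   # prints: raw=60T usable=56T
```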
**Long term plan**
  * Start a BeeGFS storage cluster
  * cottontail as MS (management server)
  * sharptail as AdMon (monitoring server) and proof-of-concept storage OSS
  * Pilot storage on idle /
  * Also a folder on cottontail:/
  * n38-n45 (8) as MDS (metadata servers, 15K local disk, no raid)
  * Buy 2x 2U Supermicro for OSS (object storage servers, 80T usable total, raid 6, $12.5K)
  * Serve up the BeeGFS file system using IPoIB
  * Move /home to it
  * Back up to older disk arrays
  * Expand as necessary
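The role layout above maps directly onto BeeGFS's setup helpers. A sketch, assuming the stock packages under /opt/beegfs (hostnames are from the plan; the data paths and IDs are assumptions):

```shell
# On cottontail (management server)
/opt/beegfs/sbin/beegfs-setup-mgmtd -p /data/beegfs/mgmtd

# On each metadata node (n38-n45), with a unique numeric ID per node,
# pointing at the management host
/opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs/meta -s 1 -m cottontail

# On each OSS (the 2U Supermicro boxes), one storage target on the raid6 volume
/opt/beegfs/sbin/beegfs-setup-storage -p /mnt/raid6/beegfs -s 1 -i 101 -m cottontail

# On the clients, then start the client (mounts at /mnt/beegfs by default)
/opt/beegfs/sbin/beegfs-setup-client -m cottontail
service beegfs-client start
```

Serving this over IPoIB is just a matter of the management hostname resolving to the IPoIB address on all nodes.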
===== Loose Ends =====
  * backup Openlava scheduler for cottontail.wesleyan.edu
  * backup replacement for greentail.wesleyan.edu (in case it fails)
Bought.

 --- //
Warewulf golden image it as if it were greentail.