\\
**[[cluster:0|Back]]**
  
==== Replace Dell Racks ====
Subtitle: A win-win solution proposed by Physical Plant and ITS
  
Once upon a time, back in 2013, two Dell racks full of compute nodes sat noisily chewing away energy on the 5th floor of Science Tower. They drew in nicely cooled air from the floor, spewing it out the back of the racks at 105-110 degrees (F). They were giving the three Liebert cooling towers a run for their BTUs. So much so that if one failed, the Dell racks needed to be powered down to avoid the data center reaching temperatures beyond 95 degrees (F). The Dell racks had been in a foul mood ever since that last event, not too long ago. And so, day after day, they consumed lots of BTUs and, with the ample supply of Watts coming from their L6-30 roots, converted it all into heat. Tons of heat, making life lousy for the Liebert family. Oh, and they performed some computational work too, but even if they did not, the energy consumption remained the same. That's a fact. They were 6 years old and determined to make it to 12. So the story goes.
  
The Dell racks contain 30 compute nodes, two UPS units, two disk arrays and two switches. We measured 19 nodes' power consumption (pulling one of the dual power units out) with a Kill-A-Watt meter for over 775 total hours. The mean power consumption rate is 418.4 watts. That totals 109,956 kWh/year in power consumption ((watts/1000 kW) * 24 hours * 365 days * 30 servers). This is a low-water mark: it only takes into account the compute nodes, but those are the majority of the heat producers. We also measured one rack's consumption at the utility panel, and Peter's calculation yields 126,000 kWh/year, which can be considered a high-water mark.
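The low-water-mark figure above can be reproduced with a short calculation; the 418.4 W mean draw and the 30-node count are the measured values from the text, and the rest is a standard watts-to-kWh conversion:

```python
# Annual energy use of the Dell compute nodes, from the measured mean draw.
mean_watts = 418.4        # mean per-node draw (Kill-A-Watt measurement)
nodes = 30                # compute nodes across both racks
hours_per_year = 24 * 365

kwh_per_year = (mean_watts / 1000) * hours_per_year * nodes
print(round(kwh_per_year))  # 109956 kWh/year, the low-water mark
```

The utility-panel measurement (126,000 kWh/year) sits above this because it includes the UPS units, disk arrays and switches that the per-node meter readings leave out.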
  * There are enough Infiniband ports available to put all new hardware nodes on such a switch (adds card and cable costs for each node)
  * The internal disks on each node need to be of a high speed (10K or better) and of a certain size (300 GB or larger), mimicking the Dell disk arrays (adds costs)
  * We may be able to add two more nodes by switching to a more expensive, lower-wattage CPU (and remain within budget as well as below the 50% energy consumption threshold compared with the Dell racks' consumption)
    * accomplished by switching from the 8-core 2650v2 (130 watt) 2.6 GHz CPU to the 10-core 2660v2 (95 watt) 2.2 GHz CPU
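As a rough sketch of what that CPU swap buys, using the wattage figures quoted above (the dual-socket node assumption is mine for illustration; the text does not state the socket count):

```python
# Rough CPU power comparison for the proposed swap, per the figures quoted
# in the text: 2650v2 at 130 W vs 2660v2 at 95 W.
old_watts, new_watts = 130, 95
sockets_per_node = 2   # assumption for illustration, not stated in the text

saved_per_node = (old_watts - new_watts) * sockets_per_node
print(saved_per_node)  # 70 W of CPU power saved per node
```

Spread over 30-plus nodes running around the clock, that per-node difference is what makes room in the budget for the two extra nodes while staying under the energy threshold.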
  
But it is all very doable within a budget of $45-$50K. And it can be the solution for:
  
  * replacing the Dell racks' functions and matching or exceeding their performance
  * seriously reducing energy consumption, benefiting Physical Plant's bottom line
  * allowing ITS to treat the third Liebert cooling tower as backup/standby, generating more energy savings
  * being way green
cluster/123.txt · Last modified: 2013/10/23 18:52 by hmeij