Rebuild for Lower Power and Heat

riggieri

Dabbler
Joined
Aug 24, 2018
Messages
42
Morning Everyone.

Been running my FreeNAS box solidly in a production environment for the past 18 months. I want to switch out the system board and CPU for something with a much lower idle power draw, and therefore much less heat. The problem is that my entire machine room is a rack in an 8'x4' space, cooled by a 15,000 BTU AC unit, and it has trouble keeping up when we are pushing the systems.

What we have now:
Supermicro X8
Dual L5640
96GB RAM
Chenbro 48 Drive Case
Chelsio T520
Two LSI HBAs
Pool 1: 7x6 RAIDZ2 vdevs, 8TB drives (Vault and Backups; set to spin down after 60 minutes)

Supermicro SC847 45 Drive Enclosure
Pool 2: 14x2 mirrors, 4TB drives (main production pool, all large video files)
Pool 3: 2x 1TB SSD (holds FCPX libraries, tons of small files, and the system dataset)
Pool 4: 3x4 RAIDZ1, 6TB drives (additional production pool for less resource-driven projects, all large video files)

Additionally in that room is all our networking gear, a Synology DS1815+, two Mac Pros (2009 and 2019), and 2 UPSs, plus some random hardware RAIDs.

I know I won't be able to get rid of all the heat generated by the drives, but since my FreeNAS box is mostly at idle, I am thinking I could lose a lot of heat by upgrading the system board and CPU. I originally had planned on getting a Xeon E5 v2 system, but I was able to score a good deal on a CPU/board combo for a v3 system, so this is what I am thinking about replacing the system with:

Xeon E5-1650 v3
Supermicro X10SRL-F

From my understanding, this should reduce my idle power draw by a significant amount, between the chipset and the CPUs. I will also be gaining single-core and multi-core performance without losing any PCIe lanes. Performance-wise, I am already mostly maxing out 10GbE, so I am not really looking for more performance. My question is about RAM.

I currently have 96GB of RAM. Moving to a v3 CPU, I obviously need to move to DDR4, but can I get by with 64GB of RAM, or should I spend the money and get 128GB? My ARC size hovers around 20GB, but I do see it spike up to 76GB at times. With mainly large video files, I don't feel like I see too much benefit from the ARC, but I do see an ARC hit ratio above 90% most of the time. Then again, for a cost difference of $100, I guess I should just go with the 128GB.
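For anyone curious, here is a rough sketch of one way to pull those ARC numbers (assuming the standard FreeBSD kstat.zfs.misc.arcstats sysctls; verify the names on your build):

```python
# Rough sketch: pull ARC size and hit ratio from the FreeBSD ZFS kstats.
# Assumes the standard kstat.zfs.misc.arcstats sysctl names; verify on your system.
import subprocess

def kstat(name):
    out = subprocess.run(
        ["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

size = kstat("size")                     # current ARC size, bytes
c_max = kstat("c_max")                   # ARC maximum target, bytes
hits, misses = kstat("hits"), kstat("misses")

print(f"ARC size:  {size / 2**30:.1f} GiB (target max {c_max / 2**30:.1f} GiB)")
print(f"Hit ratio: {100 * hits / (hits + misses):.1f}%")
```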

Thoughts?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
15,000 BTU is 1.25 tons of cooling.

Watts to BTU/h is 1 W = 3.41 BTU/h, so when the merry-go-round stops, a 15,000 BTU/h unit should be able to cool about 4.4 kilowatts of load.

If you had two fully consumed 20A 120V power circuits, you would still not be maxing that out (two circuits at the 80% continuous-load limit is 2 x 16A x 120V = 3.84kW).
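If you want to double-check the arithmetic with your own numbers, here is a quick sketch (the wattages are just the figures above; plug in your measured load):

```python
# Quick sanity check of the cooling arithmetic: watts of load -> BTU/h of heat.
BTU_PER_WATT = 3.412            # 1 W dissipated ~= 3.412 BTU/h of heat

ac_capacity_btuh = 15_000       # nameplate rating of the room A/C
max_coolable_watts = ac_capacity_btuh / BTU_PER_WATT     # ~= 4,400 W

# Two 20 A / 120 V circuits held to the 80% continuous-load limit:
circuit_watts = 2 * 0.80 * 20 * 120                      # = 3,840 W

print(f"A/C can absorb roughly {max_coolable_watts:,.0f} W of heat")
print(f"Two continuously loaded 20 A circuits: {circuit_watts:,.0f} W "
      f"-> {circuit_watts * BTU_PER_WATT:,.0f} BTU/h")
```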

The difference in idle power draw between your old and new system isn't going to be anywhere near large enough to make a dent in this -- maybe 100 watts, not more than 200. Your better solution would be to get the A/C serviced and find out why it is underperforming.

Our 1.5-ton mini runs at a fraction of its capacity cooling a good-sized amount of gear here in the office, probably more than what you have. What kind of cooling do you actually have? One of the things that's really hard is if you have some sort of conventional A/C that is either on or off. If the unit is significantly oversized, it will cycle on and off frantically, alternating between the heat shooting up and the room getting too cold; that short-cycling is hard on the unit and will eventually damage your A/C.

Our previous system here was a two-stage central unit sized so that the first stage always had to be running, but it would bring in the second stage as needed (it also cooled the rest of the building). This worked very well, but it only lasted around 12 years and required some puttering to make sure air was being zoned correctly. The mini-split is awesome because it is continuously variable, runs all the time, and cools just the amount needed.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Not sure I fully agree with the math there. Based on the description, there may be more than two fully loaded circuits in there. I'd attack the problem on both fronts, i.e. demand and supply.

Cooling a server room is every AC's dream (little to no latent cooling needed), so I agree that it's worth looking at whether the CAC is performing or not. Take a look at the evaporator (indoor) coil. If there is ice there, the system is likely low on charge. However, the only way to know is to measure the subcooling/superheat at the condenser using a proper test rig. Yesteryear approaches like the soda-can test need not apply. If possible, tune the evaporator to maximize sensible heat removal.

An oversized AC system is definitely not good for the reasons that jgreco mentioned. If it's time to replace the rig, consider switching to a mini-split with a variable-speed compressor system. VS-compressors take advantage of the fact that most of us live in areas where design-days are rare and shoulder-seasons predominate. Modulating the speed down allows the compressor to run longer (helping reduce short cycling) and it also increases the relative surface area of the HX to the refrigerant flow (the HX surface area stays constant, the refrigerant flow is at a fraction). EEVs found in minisplit indoor units also allow the climate in each zone to be controlled much more tightly re: humidity and temperature.

Aside from the AC, I'd start by taking stock what really is needed.
  • Lots of random RAIDs, the odd Synology, etc. all add up to more heat for potentially little to no benefit. For example, if the Synology is used for backups, can it be set to shut off during the day, to boot at night, run the backups, shut down again?
  • Upgrading the likely-long-in-the-tooth 4TB drives may also help reduce the disk count and/or system count (via consolidation, see below).
  • Can all the random RAIDs, the Synology, etc. be consolidated into the two SM cases once bigger drives are in use?
  • Are the SM cases filled with high-efficiency PSUs?
  • With the right choices, replacement disks could also lower the heat (my 10TB He10's run cooler than my old 3TB air-filled HGSTs).
Etc.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
I like to look at watts consumed = heat out. Probably not entirely accurate, but it gets one close enough. Perhaps I did my math incorrectly, but I came up with 0.120 kW of heat = 409.457 BTU/h.

Lastly, the things that eat power that may be surprising:
Spinning disks.
Inefficient power supplies in non-compute devices; switches, modem, wireless...
UPS losses
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
1 kW = 3,412 BTU/h. Two fully loaded 20A circuits (assuming PF = 1, 120 VAC) = 4.8 kW, or roughly 16,400 BTU/h.

This assumes just two circuits, which is unlikely. Most server rooms have more and the OP described all sorts of other hardware that in aggregate probably gets closer to 3kW.
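To put rough numbers on that aggregate, here is a back-of-the-envelope sketch (every wattage below is a placeholder guess; substitute readings from the UPS displays or a power meter -- the conversion math is the point):

```python
# Back-of-the-envelope heat budget for the whole room.
# All wattages are placeholder guesses, not measurements; replace with real readings.
BTU_PER_WATT = 3.412

loads_watts = {
    "FreeNAS head (board, CPUs, HBAs, NICs)": 300,
    "~84 spinning drives (idle/active mix)":  800,
    "Synology DS1815+":                        80,
    "Two Mac Pros":                           600,
    "Switches and other networking gear":     250,
    "Random hardware RAIDs":                  300,
    "UPS conversion losses":                  200,
}

total_w = sum(loads_watts.values())
print(f"Estimated load: {total_w} W ~= {total_w / 1000:.1f} kW "
      f"~= {total_w * BTU_PER_WATT:,.0f} BTU/h  (A/C nameplate: 15,000 BTU/h)")
```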

Per nameplate, the present AC system should be able to handle that, assuming the system is performing well. I’d verify that the filters are clean, ditto the heat exchangers.

Many homeowners can tackle a dirty HX with the right equipment and some training, but a pro may be more appropriate. Cleaning a spine-fin HX in particular is best left to a pro if it’s bigger than the ones found inside GE refrigerators.

You definitely need a pro to verify the superheat/subcooling and adjust the charge as necessary. Depending on the performance of your other AC systems, it likely makes sense to review them also. That’s the upside of central AC equipment vs. white goods: CAC equipment still features test ports.
 