Building new vs. buying an old server


taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
I just bought a used WD Arkeia RA6300 on eBay for $1,600. It cost about $45,000 brand new just a couple of years ago. I couldn't even buy the 48TB of drives that it came with for $1,600.
I was totally convinced that I should build my own server, but now I'm totally convinced that buying a used rackmount is the way to go. The thing is a MONSTER. Dual Xeon CPUs, 92GB of ECC RAM, 48TB of WD RE drives in 12 hot-swap bays, a 120GB SSD, dual PSUs, a quad NIC, IPMI for management, etc. I can't believe I was thinking about building something that would have cost me twice as much and would have been way outclassed by this thing.
The noise isn't bad, either, surprisingly. I put it in a closet and I can't hear it with the door closed.

I just finished installing ESXi, FreeNAS 9.10, and Ubuntu Snappy (for all of the Docker containers that will run the server apps). Couldn't be happier!

How did the virtualizing go? I haven't decided yet whether or not to go that route and I need to make the decision soon.
 

cryptyk

Dabbler
Joined
Aug 20, 2016
Messages
17
Virtualizing is not straightforward. There's a LOT to be configured to make sure everything is working correctly and you should have a REALLY strong grasp on FreeNAS, ESXi, networking, and virtualization in general before you try it. Once it's working, though, it's pretty amazing.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Wow, this cut way down on the noise, especially at low load. I can't believe those 3 fans were not under any kind of control. Temps are much higher under max load, up to 81C. I'd like to know what the difference is between this and your temp of 60C. A few ideas:

1) Settings - My system thinks 79C is the cutoff for 'high', and 89C is 'critical'. I'm using the most conservative fan setting in the bios.
2) Do you have fans attached directly to the CPU heat sinks? I don't.
3) What are you calling "max load"? I'm running a stress test using mprime.

Do any of these strike you as a possible difference?
1. no idea
2. no.
3. I *think* max load referred to the system running memory tests... I cannot recall whether I put it on a pure CPU stress test.

What system did you end up getting? (Sorry, I'm too lazy to look if the info was provided.)
For the sake of comparison, my box was filled with dual L5630 CPUs. They are comparatively weak and low power.
 

taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
1. no idea
2. no.
3. I *think* max load referred to the system running memory tests... I cannot recall whether I put it on a pure CPU stress test.

What system did you end up getting? (Sorry, I'm too lazy to look if the info was provided.)
For the sake of comparison, my box was filled with dual L5630 CPUs. They are comparatively weak and low power.

I ended up getting this: http://www.ebay.com/itm/142046206241?_trksid=p2057872.m2749.l2649&ssPageName=STRK:MEBIDX:IT

Supermicro 847E16-R1400LPB SuperChassis
2 SAS2 backplane: Front: SAS2-846EL1 Rear: SAS2-826EL1
Supermicro X8DTN+
Dual Intel Xeon E5645 2.26GHz hex-core CPUs [actually 2.4GHz; the listing was in error]
72GB (18x 4GB DDR3 ECC REG Memory)
LSI 9211-8i HBA JBOD card
IPMI remote management upgrade card SIMPL-3+ Dedicated IPMI 2.0 included
Dual 1400W power supplies

2x 4TB HGST deskstar 7200 RPM
15x 2TB HGST ultrastar 7200 RPM (used)
2x Kingston 32GB SSD
HP G2 36U rack
2x APC 2200 VA UPS

Memory tests are a poor test for max heat. MPrime / Prime95 is what you want to use for that. I think you're going to see a huge difference in temps. This is very important--You can't be doing mods that affect cooling without a good CPU stress test.
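If it helps, this is roughly the kind of loop I use to watch the CPU while the torture test runs. It's just a sketch: it assumes a FreeBSD-based box like FreeNAS with the coretemp sysctls available and the mprime binary unpacked in the working directory, so adjust paths and the core count for your hardware.

# Rough sketch: kick off an mprime torture test and log the hottest core once a minute.
# Assumes FreeBSD/FreeNAS with the coretemp sysctls (dev.cpu.N.temperature) and the
# mprime binary sitting in the current directory; both are assumptions, adjust to taste.
import subprocess, time

CORES = 12          # logical CPUs to poll; change to match your box
DURATION_MIN = 30   # how long to let the torture test cook

stress = subprocess.Popen(["./mprime", "-t"])   # -t = torture test, no interactive menu

def core_temp(core):
    out = subprocess.check_output(["sysctl", "-n", "dev.cpu.%d.temperature" % core])
    return float(out.decode().strip().rstrip("C"))   # e.g. "81.0C" -> 81.0

try:
    for minute in range(DURATION_MIN):
        time.sleep(60)
        hottest = max(core_temp(c) for c in range(CORES))
        print("minute %2d: hottest core %.1fC" % (minute + 1, hottest))
finally:
    stress.terminate()   # stop the torture test when we're done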

I started by doing everything you suggested (removing the front fans, covering the rear drive bays, and covering the side vents). I found that there was not much difference after uncovering the rear bays, and CPU temps and fan throttling improved after uncovering the side vents, though I did not do any stress testing or temp monitoring of the front drives. So at the moment, the only modification I'm running is the removal of the 3 front fans that were powered from the backplane. The 4 that remain were already powered by the motherboard.

I'll say it again... I can't believe that they designed it this way--I think it must have been an oversight. What good is it to have fan profiles if some fans are not controlled?

Another note: I strongly suspect that the fans are doubled up for redundancy, and removing one row would not impact performance much as long as they all function.

If lower temps than what I'm getting are desired, I think changing the fan profile would be adequate. A more extreme option would be to put all 7 fans back in place and have them all throttled by the motherboard; that would require additional hardware if the motherboard can't supply the power.
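And for anyone repeating these fan mods, it's worth snapshotting the BMC's fan and temperature readings before and after each change. A quick sketch of what I mean, assuming ipmitool is installed and can reach the local BMC (sensor names vary between boards, so the FAN/Temp filter is just a guess at the usual Supermicro naming):

# Dump fan RPMs and temperature sensors from the BMC so before/after readings can be
# compared. Assumes ipmitool is installed and the local IPMI interface is reachable;
# sensor names differ between boards, so tweak the filter as needed.
import subprocess

out = subprocess.check_output(["ipmitool", "sensor"]).decode()
for line in out.splitlines():
    name = line.split("|")[0].strip()
    if name.upper().startswith("FAN") or "TEMP" in name.upper():
        print(line.strip())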
 
Last edited:

taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
Virtualizing is not straightforward. There's a LOT to be configured to make sure everything is working correctly and you should have a REALLY strong grasp on FreeNAS, ESXi, networking, and virtualization in general before you try it. Once it's working, though, it's pretty amazing.

I think I'm going to give this a try. I'm by no means an expert in virtualization or networking, but I did deploy a Eucalyptus private cloud during my master's work, and my neighbor does this sort of thing for a living and has offered help.

I've realized that I have a problem right off the bat, though: If I want to install ESXi onto redundant storage (which obviously I do, right?), then I need a RAID controller, which I won't have after flashing my LSI card to IT mode. I've read a few articles on virtualizing FreeNAS but haven't seen this obvious conflict addressed. What is the right way to address this?

Edit: Sounds like a USB thumb drive is a good option. RAID is not needed because the boot media is only used at boot.

Another question (for everyone, not about virtualization): after I bought the pair of new 4TB drives, I found the deal on used 2TB drives and bought 15 of them. Does it make sense to put the odd-duck 4TB drives in my FreeNAS array initially? Full drive info below, along with my rough capacity math.

HGST Deskstar NAS H3IKNAS40003272SN (0S03664) 4TB 7200 RPM 64MB Cache SATA 6.0Gb/s
HGST/Hitachi Ultrastar 7K3000 HUA723020ALA641 2TB 7200 RPM 64MB Cache SATA 6.0Gb/s
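My rough capacity math, for what it's worth. This assumes my understanding is right that ZFS counts every disk in a vdev at the size of the smallest member, and it's raw capacity only, ignoring parity level, reserved space, and TB vs. TiB:

# Back-of-the-envelope only: every disk in a vdev is assumed to count at the size of
# the smallest member, which is my understanding of how ZFS handles mixed sizes.
def vdev_raw_tb(disk_sizes_tb):
    return min(disk_sizes_tb) * len(disk_sizes_tb)

only_2tb = vdev_raw_tb([2] * 15)            # the 15 used 2TB ultrastars on their own
mixed    = vdev_raw_tb([2] * 15 + [4, 4])   # the two 4TB drives squeezed into the same vdev
print(only_2tb, mixed)   # 30 vs 34: each 4TB drive only contributes 2TB here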
 
Last edited:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Dual Intel Xeon E5645
IIRC these are about twice as heat intensive as the L5630; I found some tests a while ago. Some would argue it is better to have a CPU that gets shit done and idles for the rest of the time.
The point of the L-specced CPUs is to limit their output at full throttle.
2x 4TB HGST deskstar 7200 RPM
15x 2TB HGST ultrastar 7200 RPM (used)
This might have some distinct impact when comparing our experiences. I used 13 Red/Green drives, all located in the front.
Memory tests are a poor test for max heat. MPrime / Prime95 is what you want to use for that. I think you're going to see a huge difference in temps. This is very important--You can't be doing mods that affect cooling without a good CPU stress test.

I'm aware that a memory test isn't the way to max out the CPU. For the general sake of trying out the hardware, I'd agree. For my use case, where I've never seen above 20% CPU use, like ever-ever... I'm not too concerned.
However, I do not believe I'd see very different temperatures, and nothing close to the numbers you are posting, simply because our systems' hardware is so different.

But most importantly, I appreciate that you actually think for yourself and tried out the suggested mods with consideration. I've been waiting to get roasted for posting such blasphemy as modifying a rack case's cooling.

Another note: I strongly suspect that the fans are doubled up for redundancy, and removing one row would not impact performance much as long as they all function.
In my box, 3 fans were powered from the front backplane and three fans from the back backplane. The last fan was connected to the motherboard.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
@taylornate btw, when you dismissed any differences from covering the back and side vents, during what test did you draw that conclusion?
I'll add in advance that the benefits I experienced were related to drive temperature and an 'empirical, pragmatic' steep increase in air pressure in front of the 24 caddies (where my drives were mounted), i.e., pressure was no longer lost to the side vents or the rear backplane, which clearly was the case prior to the mod.
 

taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
I acknowledge that our hardware is different and that that probably makes some contribution, even a significant one in the case of the CPUs, but I'm absolutely sure that the difference in stress testing is the biggest factor. A memory check is barely going to touch the CPU. If you are comfortable with that for your own use, that's fine, but calling it "max load" is incorrect. I don't mean to harp on you for your own decisions, but it just needs to be made clear for anyone who comes along and sees this later--modifying cooling without doing a proper stress test is dangerous.

However, I do not believe I'd see very different temperatures, and nothing close to the numbers you are posting, simply because our systems' hardware is so different.

Like I said, you barely touched your CPU with that memory test. I don't know what your background is, but it seems naive to me that you would acknowledge that you don't care to test beyond 20% CPU use, yet dismiss the temperature differences as due to hardware differences. I think you're wrong about that. Maybe your CPUs won't get as hot as mine, but they'll get significantly hotter than you measured. Run mprime set to max CPU and see for yourself if you'd like.

I doubt the drives had much to do with it, as they were all sitting idle, but if I'm bored later maybe I will repeat the test with all the drives pulled just far enough out to be disconnected.

In my box, 3 fans were powered from the front backplane and three fans from the back backplane. The last fan was connected to the motherboard.

Oh, ok. In my box, the 3 back fans and the one in front of the PSUs came from the seller connected to the motherboard.

@taylornate btw, when you dismissed any differences from covering the back and side vents, during what test did you draw that conclusion?
I'll add in advance that the benefits I experienced were related to drive temperature and an 'empirical, pragmatic' steep increase in air pressure in front of the 24 caddies (where my drives were mounted), i.e., pressure was no longer lost to the side vents or the rear backplane, which clearly was the case prior to the mod.

This was during the mprime max CPU stress test. When I undid the covering of the rear bays, there did not seem to be any notable difference to CPU temp or fan speed. Based on the geometry, I would expect air flow across the front drives to increase without the covering if there is any change at all. The only difference the covering would introduce is to divert air flow from the rear drive bays to the CPU, memory, etc. When I uncovered the side vents, I noted that fan speed (noise) decreased significantly, as the CPUs got better air flow. I will admit that this would decrease air flow across the front drives. I'll also admit that I haven't been very concerned with heat from the drives, and I'll have to investigate that further before putting the server into production.
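When I do get around to checking drive heat, the plan is just to poll SMART while a scrub runs, roughly like this. A sketch only: it assumes smartmontools is installed and that the pool disks show up as /dev/da0 through /dev/da16, which will differ on other controllers.

# Sketch: log the hottest drive every few minutes while a scrub is running.
# Assumes smartmontools is installed and the pool disks are /dev/da0..da16;
# attribute 194 (Temperature_Celsius) is where these HGST drives report temp.
import subprocess, time

DISKS = ["/dev/da%d" % i for i in range(17)]   # adjust to your device names

def drive_temp(dev):
    out = subprocess.check_output(["smartctl", "-A", dev]).decode()
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[0] == "194":   # Temperature_Celsius
            return int(parts[9])          # raw value column
    return None

while True:
    temps = [t for t in (drive_temp(d) for d in DISKS) if t is not None]
    print(time.strftime("%H:%M"), "hottest drive: %dC" % max(temps))
    time.sleep(300)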

Update re: virtualization...

Looks like I will actually need to pick up a RAID card for a mirrored pair, not necessarily for the ESXi installation, but certainly for the datastore. Kind of a bummer, but I think it will be worth it. My current plan is to pick up an LSI 9211-4i, which will fit into an x4 PCIe slot, leaving my remaining x8 slot for a 10GbE card. Any comments appreciated.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Looks like I will actually need to pick up a RAID card for a mirrored pair, not necessarily for the ESXi installation, but certainly for the datastore. Kind of a bummer, but I think it will be worth it. My current plan is to pick up an LSI 9211-4i, which will fit into an x4 PCIe slot, leaving my remaining x8 slot for a 10GbE card. Any comments appreciated.
Sorry if I missed the reasoning for this, but are you trying to run FreeNAS on ESXi? If so, then you would still not need a Hardware Raid for the DataStore.
 

taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
Sorry if I missed the reasoning for this, but are you trying to run FreeNAS on ESXi? If so, then you would still not need a Hardware Raid for the DataStore.

Yes, I want to run FreeNAS on ESXi, and I concluded that I would need hardware RAID for the datastore. How can I avoid this?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
You need to pass through a controller to the FreeNAS VM. There are some guides here about doing this. Just be warned: virtualizing FreeNAS is not something that should be considered lightly.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Yes, I want to run FreeNAS on ESXi, and I concluded that I would need hardware RAID for the datastore. How can I avoid this?
*** Disclaimer, most contributors will NOT provide support on this since it is something that is not recommended for inexperienced Users. You may find yourself "out in left field"; so tread lightly when making this decision.

First, go over these significant threads by @jgreco to fully understand the warnings:
Please do not run FreeNAS in production as a Virtual Machine!
Virtually FreeNAS ... an alternative for those seeking virtualization
"Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data

There are multiple threads on the matter in these forums, one that is very in-depth is @joeschmuck 's "My Dream System (I think)" another example is "ESXi Home Server + FreeNas"

On a happy note, there are a few of us who are seeing the growing trend and are running such an environment ourselves (I do). I even inquired about it in "Support for FreeNas on ESXi"
*** Still doesn't mean we are going to go out of our way to support it though, since if one is trying to run such an environment they should already be pretty darn versed not only in FreeNAS and ESXi but a lot more...

Most do actually have a FreeNAS VM, just for testing updates or mucking around (but not used in Production or to hold vital data).
 

taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
You need to pass through a controller to the FreeNAS VM. There are some guides here about doing this. Just be warned: virtualizing FreeNAS is not something that should be considered lightly.

I understand passing through to the FreeNAS VM for the pool and boot media, etc. Do you mean this can also be done for the ESXi store? In one of the threads Mirfster linked to, there was an allusion to an idea that the ESXi store can be served from a VM, but it was recommended against and it does seem kind of sketchy to me. Is this what you mean? It looks like ESXi needs its own datastore to exist before the first VM can be created.

*** Disclaimer, most contributors will NOT provide support on this since it is something that is not recommended for inexperienced Users. You may find yourself "out in left field"; so tread lightly when making this decision.

First, go over these significant threads by @jgreco to fully understand the warnings:
Please do not run FreeNAS in production as a Virtual Machine!
Virtually FreeNAS ... an alternative for those seeking virtualization
"Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data

There are multiple threads on the matter in these forums, one that is very in-depth is @joeschmuck 's "My Dream System (I think)" another example is "ESXi Home Server + FreeNas"

On a happy note, there are a few of us who are seeing the growing trend and are running such an environment ourselves (I do). I even inquired about it in "Support for FreeNas on ESXi"
*** Still doesn't mean we are going to go out of our way to support it though, since if one is trying to run such an environment they should already be pretty darn versed not only in FreeNAS and ESXi but a lot more...

Most do actually have a FreeNAS VM, just for testing updates or mucking around (but not used in Production or to hold vital data).

Thanks for the reading list. I've already found some of these on my own, but I have a lot more to do. Should be very helpful.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I understand passing through to the FreeNAS VM for the pool and boot media, etc. Do you mean this can also be done for the ESXi store? In one of the threads Mirfster linked to, there was an allusion to an idea that the ESXi store can be served from a VM, but it was recommended against and it does seem kind of sketchy to me. Is this what you mean? It looks like ESXi needs its own datastore to exist before the first VM can be created.
First, let me say that you should take what I say with a grain of salt. My answer is going to summarize some of what's commonly done, without doing justice to the pitfalls. Mirfster's list is a great reading list for virtualizing FreeNAS. And I would definitely encourage reading. Before taking on FreeNAS virtualization, I would recommend strong familiarity with both products (FreeNAS and your hypervisor).

With that out of the way, let's make sure we understand passing through the controller. I'm talking about using PCI passthrough to pass a HDD controller to the FreeNAS VM. To the best of my knowledge, you cannot pass through boot media this way (unless you were using a RAID card, and that would be bad), and can only pass non-booting media. Even if you could, you'd still need some kind of datastore outside of FreeNAS to store the FreeNAS VM's config (VMX) files, so it's really just easier to set up a small SSD to act as the ESXi boot volume and datastore for FreeNAS VM (both VMDK and VMX).

Once you get FreeNAS booted, FreeNAS will have direct control over the disks, and can present an iSCSI or NFS share to ESXi, and ESXi can use that as the datastore for the other VMs. I would say that this is what most people are doing when they "virtualize FreeNAS", and it's a pretty easy way to leverage both ZFS and VMware on one physical server. In an enterprise environment, you would not want to do this, because best practices dictate that your storage server is stand alone (you want to minimize the risk that a problem in software X (jails, VMs, etc) brings down your storage server by not letting software X run on your storage server).
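To make that last step concrete, mounting the share back into ESXi is a one-liner once the FreeNAS VM is serving NFS. Just a sketch: the IP, export path, and datastore name below are placeholders, and it assumes you run this somewhere esxcli is available (e.g. the ESXi shell); running esxcli directly at the command line is exactly equivalent.

# Placeholder values throughout: swap in your FreeNAS VM's storage IP, the NFS export
# on the pool, and whatever datastore name you want ESXi to show. This just wraps the
# stock "esxcli storage nfs add" command.
import subprocess

subprocess.check_call([
    "esxcli", "storage", "nfs", "add",
    "--host", "192.168.1.50",          # FreeNAS VM's storage IP (placeholder)
    "--share", "/mnt/tank/vmstore",    # NFS export on the FreeNAS pool (placeholder)
    "--volume-name", "freenas-nfs",    # datastore name as it will appear in ESXi (placeholder)
])

# The new datastore should now show up here and in the vSphere client.
subprocess.check_call(["esxcli", "storage", "nfs", "list"])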
 

taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
First, let me say that you should take what I say with a grain of salt. My answer is going to summarize some of what's commonly done, without doing justice to the pitfalls. Mirfster's list is a great reading list for virtualizing FreeNAS. And I would definitely encourage reading. Before taking on FreeNAS virtualization, I would recommend strong familiarity with both products (FreeNAS and your hypervisor).

Yeah, I plan to read the full manual for FreeNAS. I haven't looked at the ESXi documentation yet, but I suspect that will be much bigger.

With that out of the way, let's make sure we understand passing through the controller. I'm talking about using PCI passthrough to pass a HDD controller to the FreeNAS VM. To the best of my knowledge, you cannot pass through boot media this way (unless you were using a RAID card, and that would be bad), and can only pass non-booting media. Even if you could, you'd still need some kind of datastore outside of FreeNAS to store the FreeNAS VM's config (VMX) files, so it's really just easier to set up a small SSD to act as the ESXi boot volume and datastore for FreeNAS VM (both VMDK and VMX).

Yes, this is what I meant. My question of hardware RAID was for the ESXi datastore and perhaps ESXi boot volume. If I didn't have a redundant pair for that, it seems like it would be the Achilles heel of the system.

When you say you cannot pass through boot media, do you mean FreeNAS boot media or hypervisor boot media?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
My question of hardware RAID was for the ESXi datastore and perhaps ESXi boot volume. If I didn't have a redundant pair for that, it seems like it would be the Achilles heel of the system.
Ah, on that part then it would be prudent to use Hardware Raid.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Run mprime set to max CPU and see for yourself if you'd like.
Unfortunately, my FreeNAS system is not hosted in the aforementioned box at present.
It is tempting to fire it up once more, to at least settle the point.

This was during the mprime max CPU stress test. When I undid the covering of the rear bays, there did not seem to be any notable difference to CPU temp or fan speed. Based on the geometry, I would expect air flow across the front drives to increase without the covering if there is any change at all. The only difference the covering would introduce is to divert air flow from the rear drive bays to the CPU, memory, etc. When I uncovered the side vents, I noted that fan speed (noise) decreased significantly, as the CPUs got better air flow. I will admit that this would decrease air flow across the front drives. I'll also admit that I haven't been very concerned with heat from the drives, and I'll have to investigate that further before putting the server into production.

It is increasingly evident that our tests and optimization goals have been different.
I think it is pretty interesting that your experience diverges from mine. For example, I did not notice any difference in fan speed from covering/uncovering any of the panels. The big difference I found came from the diverted airflow, which pushed most of the air past the front drives; given my optimization goals, that was a great success.
I spent more time monitoring drive temps (on a fully loaded FreeNAS system, scrub testing) than I spent on the CPU side of things.

My goal of modifying the box was to minimize noise while avoiding cooking the front drives.
I scrambled together some numbers to support the point about hardware differences:
L5630s have a TDP of 40W and a Tcase of 63C.
The E5645's respective numbers are 80W and 76C.
Given what you stated earlier ("My system thinks 79C is the cutoff for 'high', and 89C is 'critical'. I'm using the most conservative fan setting in the bios."), you seem to have a real problem on your hands ://

Like I said, you barely touched your CPU with that memory test. I don't know what your background is, but it seems naive to me that you would acknowledge that you don't care to test beyond 20% CPU use, yet dismiss the temperature differences as due to hardware differences. I think you're wrong about that. Maybe your CPUs won't get as hot as mine, but they'll get significantly hotter than you measured. Run mprime set to max CPU and see for yourself if you'd like.
I'd agree that, the way I formulated my test, it may come across as naive to trust only the memory test as a true validation of the cooling capacity. I've always used Prime95 prior to this case; in retrospect, I can't see why I didn't this time. It certainly was available on the USB stick.

One day I might set up the box in a similar fashion again and redo the test with Prime95.
A lot of other stuff needs to be worked out before I give this box a second chance, though. Since it is loaded with 144GB of RAM, and I've invested almost as much into the hardware as I have into shipping and customs, I might as well let the box sit in my non-powered basement until a future attempt.

cheers
 

taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
I think it is pretty interesting that your experience diverges from mine. For example, I did not notice any difference in fan speed from covering/uncovering any of the panels.

In your case was the fan noise pretty low in both configurations? I think that would account for the difference. In my case of running a CPU stress test, the fans actually throttled pretty high in response to CPU temp, so it makes sense that letting more air to the CPU would make a noticeable difference in fan throttling.

The E5645's respective numbers are 80W and 76C.
Given what you stated earlier ("My system thinks 79C is the cutoff for 'high', and 89C is 'critical'. I'm using the most conservative fan setting in the bios."), you seem to have a real problem on your hands ://

I'll have to do some more research on the temps. Some sources are saying that max acceptable core temps are higher than Tcase. If it does turn out that my temps are unacceptable, I have no doubts that adjusting the settings will bring them down.

Ah, on that part then it would be prudent to use Hardware Raid.

Would a 9211-4i be suitable for this?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Sure, some would suggest something with a BBU like a 9260. I have used a 9260 as well as a PERC H200 before. Right now I am using a "Dual mSATA SSD to 2.5-Inch SATA RAID Adapter" (with 2x 128GB Samsung mSATAs). It works fine for my needs, and I do back up my ESXi and FreeNAS configs, so I am not too worried.

taylornate

Dabbler
Joined
Jul 10, 2016
Messages
43
Sure, some would suggest something with a BBU like a 9260. I have used a 9260 as well as a PERC H200 before. Right now I am using a "Dual mSATA SSD to 2.5-Inch SATA RAID Adapter" (with 2x 128GB Samsung mSATAs). It works fine for my needs, and I do back up my ESXi and FreeNAS configs, so I am not too worried.

Hmm. I like the idea of battery backup, but the 9260 takes up an x8 PCIe slot that I want to keep open for 10GbE. Is battery backup important with RAID 1? With dual redundant UPSes?

That adapter you linked looks interesting. The idea there is that I would plug it into a motherboard SATA header, right? Does StarTech have a decent reputation for reliability? It looks like the cost would be similar to a used 9211-4i.
 