BUILD Sanity check my FreeNAS build for iSCSI

Status
Not open for further replies.

mullinsj08

Dabbler
Joined
Jan 4, 2016
Messages
13
Hi all!

After extensively trawling through the forums and documentation, I'd like some constructive criticism on my FreeNAS build, please. It will primarily be used as iSCSI storage for two ESXi hosts running 10 VMs each, but this will grow over time.

I was using another machine as a FreeNAS iSCSI target before, but it proved... unstable, and it didn't meet the requirements for good performance, so I scrapped it.

Here is the build I've come up with...

- Case: Chenbro RM23612 (12x hot-swap bays, 7 low-profile PCI slots) *Bought*
- PSU: Zippy 2U 760W Redundant
- Mobo: SuperMicro X9DRI-F *Bought*
- CPU: 2x Intel Xeon E5-2609 v2 (2.5GHz, 10MB L3 Cache) [Need 4 PCIe slots; 3 are on CPU1 and the last is on CPU2]
- RAM: 4x 16GB DDR3 PC3L-12800 Registered ECC (64GB Total)
- HBA: 2x LSI SAS 9207-8i (1st card for bays 1-8 and 2nd card for bays 9-12)
- Hard Drives: 12x WD RE4 2TB SATA
- Network: Intel I350-T4 / Intel X540-T2 (To be added later)

Any comments/suggestions welcomed.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The Chenbro case is probably okay, but you would have been better off with Supermicro. Or at least you should have gotten the Chenbro with an expander backplane, which would have saved you the cost of a second HBA.

Probably avoid the Zippy power supplies; they tend to burn out over time.

The E5-2609 v2 is a relatively poor CPU choice due to its modest clock speed. It lacks hyperthreading (so just 8 threads across the pair), lacks turbo boost (so never more than 2.5GHz), and has a relatively small cache.

Don't get 4 x 16GB DDR3 unless 2 x 32GB DDR3 costs substantially more. For a VM server, 64GB is a fairly minimal configuration, and the option to go bigger while still running at full memory speed is nice.

Presumably you're going to do six two-way mirrors or four three-way mirrors with your twelve 2TB drives. You're aware that you shouldn't fill your pool, right? So plan on that giving you 4-6TB of usable iSCSI space.
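
If you were building that pool by hand at the CLI instead of through the FreeNAS GUI, four three-way mirrors would look something like this (pool name and the da device names below are placeholders for your actual disks):

Code:
# Four three-way mirror vdevs from twelve 2TB disks; da0-da11 are examples.
zpool create tank \
  mirror da0 da1 da2 \
  mirror da3 da4 da5 \
  mirror da6 da7 da8 \
  mirror da9 da10 da11
# Verify the layout:
zpool status tank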

There've been problems reported with the X540, and it's unclear what the current status of that is. You may wish to avoid the X540, or at least research it heavily.
 

mullinsj08

Dabbler
Joined
Jan 4, 2016
Messages
13
Thanks for the feedback jgreco.

I'll have a dig around for a better power supply.

Would the E5-2630 v2 be a better choice perhaps?

You're correct on the four three-way mirrors. I have read that the pool shouldn't go above 50-60% full... 4-6TB will be fine for now, as I'm mainly aiming for good access times.
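
To keep myself honest about that ceiling, I'm thinking I can just set a quota on the top-level dataset, something like this (the pool name is a placeholder, and 4TB is roughly 50% of the ~8TB of mirrored space):

Code:
# Cap the top-level dataset at roughly half the usable space.
zfs set quota=4T tank
# Confirm it took:
zfs get quota tank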

The X540 is a 'nice-to-have' that will be added later, should I have the money to purchase a 10GbE switch, probably in ~8 months' time. Hopefully any issues will be resolved by then.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The problem with all the E5-26xx stuff is that it's either crap or it's expensive or it's both. Cost aside, the 2637 v2 is probably "the" CPU to have for NAS purposes, but it's so hard to tell if there's any value to it for any given application.

I said the hell with Intel E5-26xx and we've mostly been buying E5-1650 v3 gear (3.5GHz, 3.8 turbo, 6 cores, 12 threads, $550).

That kicks the **** out of a pair of E5-2609 v2's.

But I very much like the three-way mirrors. Bear in mind what I said about memory: you may end up wanting more of it, so the larger DIMM modules are a better choice if the pricing is sensible. At 128GB of RAM, you can add a fair amount of L2ARC, which adds a whole lot of speed for frequently accessed stuff. A conventional HDD-based array without lots of cache tends to be kind of sluggish; give ZFS those resources and it'll become very pleasant.
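
Adding L2ARC later is trivial, by the way; it's a one-liner at the CLI (the device name is just an example):

Code:
# Attach an SSD as a cache (L2ARC) device to an existing pool.
zpool add tank cache da12
# Watch it warm up over time:
zpool iostat -v tank 5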
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Since this is a VM load, a SLOG (ideally mirrored) should be added.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
While it's nice to have mirrored SLOG, the expense of these devices usually makes it a nonstarter. Mirroring of SLOG used to be required because ZFS used to be unable to cope with loss of the SLOG device, but that's been corrected for a long time now.
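
For what it's worth, either flavor is a one-liner to add (device names are examples):

Code:
# A single SLOG device:
zpool add tank log da13
# ...or a mirrored SLOG, if the budget stretches that far:
zpool add tank log mirror da13 da14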
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
But you still run the risk of losing up to 5 seconds of writes in the event of an unexpected power failure, or if the SSD goes tango uniform, correct? For a few VMs at home that loaf around and don't do much, that's not a real worry... but if you're running a database, mail server, etc., a 5 second loss could be a real problem.
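
(That 5 second figure is the default ZFS transaction group timeout, at least on FreeBSD, which you can check with sysctl:)

Code:
sysctl vfs.zfs.txg.timeout
vfs.zfs.txg.timeout: 5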
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yeah, if the SSD fails at just the right time.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You need a bunch of bad things to happen simultaneously: the SSD has to fail at the same time the power fails.

Otherwise ZFS falls back to the in-pool ZIL, which kills write performance of course, but you're still covered for sync writes. Then the admin gets to decide whether to suffer the poor write performance, log in and disable sync writes, or quickly swap in another SSD (assuming some hot-swap capability exists, which it won't for things like an HHHL PCIe NVMe card).

It might be worth noting that even in the HHHL PCIe card situation, the alternative is to have a standby card sitting idle in the machine, so that the admin can log in and perform a replacement operation without downing the machine. That has the huge advantage of not requiring downtime PLUS not relying on an SSD that has already had the crap beaten out of it in mirrored-SLOG mode.
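
In practice those options look something like this (the pool, zvol, and device names below are placeholders):

Code:
# Ride it out with sync writes disabled on the iSCSI zvol until a new
# SLOG is in place (accepting the data-loss window that implies):
zfs set sync=disabled tank/vmstore
# ...or swap the standby/replacement SSD in for the dead one:
zpool replace tank da13 da14
# Afterwards, put sync behavior back to normal:
zfs set sync=standard tank/vmstore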

The only case I can really see for mirrored SLOG making sense is where you absolutely cannot take the hit of suffering poor write performance for a little while.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
My experience with SSDs has been that they tend to fail when power is restored. Knowing my luck, the system would suffer an unexpected power loss, then the SSD wouldn't come back up. I'll have to think about it... I have enough RAM in my box to make L2ARC reasonable - I suppose I could drop back to a single SLOG device and repurpose the other one for L2ARC... decisions, decisions. Or I could fill up the other 16 RAM slots...
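
If I do go that route, it should just be a detach-and-re-add (device names here are guesses for my box):

Code:
# Pull one leg off the mirrored SLOG...
zpool detach tank da14
# ...and repurpose it as L2ARC:
zpool add tank cache da14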

My apologies to the OP for the thread hijack ;)
 

mullinsj08

Dabbler
Joined
Jan 4, 2016
Messages
13
@jgreco I've taken your advice on the Xeon CPU... I'm selling the Supermicro X9DRI-F board and getting an X10SRI-F board instead. Also, the two E5-2609 v2s will be replaced by an E5-1630 v3, which can be had for about £300 in the UK.
 