Need advice on settings on new FreeNAS build


HankC

Dabbler
Joined
Sep 23, 2014
Messages
21
I need advice on FreeNAS settings to see if there is any way to maximize performance.
Case: Supermicro 846
Motherboard: Asrock EP2C602-2T2OS6/D16
CPU: Dual E5-2620 v2
Memory: 256GB DDR3-1600 ECC Reg
HDD: 6 x 3 TB Mirrored (will have more)
NIC: 4 x 10Gb (2 from Intel X540 onboard and 2 from Intel X520 onboard)

This NAS will serve lab VMs from XenServer, Hyper-V, and VMware. Everything will be connected via either NFS or CIFS (CIFS for Hyper-V only).
I will disable sync writes since this is lab use only and losing the data doesn't matter.
What other settings should I configure to saturate the LAN?
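
For reference, here's a minimal sketch of how I could flip sync from the command line instead of the GUI; the pool/dataset name is just a placeholder for whatever I end up creating:

import subprocess

# Hypothetical dataset name; substitute the actual pool/dataset.
DATASET = "tank/vmstore"

def set_sync(dataset, value="disabled"):
    """Set the ZFS sync property (standard | always | disabled)."""
    subprocess.run(["zfs", "set", "sync=" + value, dataset], check=True)

def get_sync(dataset):
    """Read back the current value of the sync property."""
    out = subprocess.run(["zfs", "get", "-H", "-o", "value", "sync", dataset],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip()

set_sync(DATASET, "disabled")
print(DATASET, "sync =", get_sync(DATASET))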
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
With just 6x3TB it is impossible to saturate quad 10Gb. Your disks alone can't even saturate a single 10Gb link.
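
Rough math, assuming something like 150 MB/s of sequential throughput per spindle (an assumption for illustration, not a benchmark):

# Back-of-the-envelope only; the per-disk number is assumed, not measured.
PER_DISK_MBPS = 150            # assumed sequential MB/s per 3TB spindle
MIRROR_VDEVS = 3               # 6 disks as three 2-way mirrors

read_best = MIRROR_VDEVS * 2 * PER_DISK_MBPS   # mirrors can read from both sides
write_best = MIRROR_VDEVS * PER_DISK_MBPS      # writes land at one disk's speed per vdev
one_link = 10_000 / 8                          # one 10Gb link is roughly 1250 MB/s

print("pool read  best case ~%d MB/s vs %d MB/s for one 10Gb link" % (read_best, one_link))
print("pool write best case ~%d MB/s vs %d MB/s for quad 10Gb" % (write_best, 4 * one_link))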
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Will the memory handle the caching?

Sure, stuff will be cached, but that'll only take you so far. At some point, it's cheaper to add drives and/or L2ARC.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Interesting topic. Even if you had a flash-based pool, could you keep four 10Gb links saturated? The second you leave the ARC you are hooped. I'd be interested to see whether, even if you experimented with a significant RAM disk, there was any chance at all that those four links would saturate and stay there. Plus, how do you generate a suitable workload to maintain that without a massive quantity of gear? The good news is you're in a fine position to test bottlenecks, and I suspect you'll find they aren't your links.
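
To separate "link problem" from "pool problem" the usual move is iperf between hosts, but even a crude socket blaster does the job. A rough sketch (the port and roles are placeholders, and a single Python stream won't fill a 10Gb pipe by itself, so run several in parallel and add them up):

import socket, sys, time

PORT = 5201                      # placeholder port
CHUNK = b"\0" * (1 << 20)        # 1 MiB send buffer
DURATION = 10                    # seconds per client run

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(len(CHUNK))
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print("received %.0f MB in %.1fs = %.0f MB/s from %s"
              % (total / 1e6, secs, total / 1e6 / secs, addr))

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        total, start = 0, time.time()
        while time.time() - start < DURATION:
            conn.sendall(CHUNK)
            total += len(CHUNK)
    print("sent %.0f MB = %.0f MB/s" % (total / 1e6, total / 1e6 / DURATION))

if __name__ == "__main__":
    # usage: script.py server   |   script.py client <server-ip>
    client(sys.argv[2]) if sys.argv[1] == "client" else server()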

@cyberjock Have you ever seen 4 10Gb links stay maxed on a single box? Seems to me even 600 MB/s x 4 would be pretty amazing.

Good luck. Hope you generate some interesting discussion.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I've seen dual 10Gb saturated. I don't think I've even worked on a box with quad 10Gb. You're talking serious money, and there are very, very few uses for that. At some point it's smarter to have multiple servers than one really big (read: expensive) server.
 

HankC

Dabbler
Joined
Sep 23, 2014
Messages
21
Those 4 10Gb NICs are onboard, so it's not expensive to do that.
All I'll do is direct-attach them to each server.
I will hopefully test it out by the end of the week to see the performance.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Those 4 10Gb NICs are onboard, so it's not expensive to do that.
All I'll do is direct-attach them to each server.
I will hopefully test it out by the end of the week to see the performance.

Are we ordering the same hardware?

First, you have to buy hardware that has 4x10Gb NICs that are supported (that's not always cheap).
Second, you need enough CPU and RAM to push that kind of bandwidth just from the ARC.
Third, you need a dozen or more vdevs (or some other config capable of handling the I/O *and* the throughput).

I've worked with systems that have 60 spindles or so and couldn't saturate 2x10Gb LAN connections. No, that server isn't going to be cheap... not by a mile.
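
To put a rough number on that vdev count (again assuming ~150 MB/s per spindle, which is an assumption, not a benchmark):

# Rough estimate of how many mirror vdevs quad 10Gb would take for writes.
PER_DISK_MBPS = 150                  # assumed sequential MB/s per spindle
TARGET_MBPS = 4 * 10_000 / 8         # quad 10Gb is roughly 5000 MB/s

# Two-way mirrors write at roughly one disk's speed per vdev.
vdevs_needed = TARGET_MBPS / PER_DISK_MBPS
print("~%d mirror vdevs (~%d disks) just for sequential writes, "
      "before any random I/O penalty" % (vdevs_needed, 2 * vdevs_needed))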
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That question doesn't have an answer. It depends on so many factors that it could be yes, no, or even "kind of".

If you look around the forums, ASRock has had some quality issues of late, so I wouldn't recommend it. I'd strongly consider a Supermicro board as an alternative.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Heh. That is the first board I thought of when you mentioned the X540 drivers were stable, cj. ;) Don't worry, HankC, we do appreciate you blazing a trail.

On the memory and performance thing, the only real way to know is to run it. You are well spec'd: you have double the bandwidth of most high-end rigs, and RAM is plentiful with room for more if need be.

But the pool is slow, you have no flash based storage, and you haven't defined any real workloads beyond the fact that data integrity is secondary to speed. I like to see and make things go fast... so I'll follow along. Plus I like that board if it can be proven by those that go before. Pretty cost effective 10Gb setup for a small number of servers, imho. Plus mpio goodness ;). Fire that thing up and give it a work out.

Personally, if it is actually a "go fast" rig, I'd have some SSD-based pools. Eight SSDs will make your pool faster than your network. But you could save the trouble by just shoving them in the ESXi boxes on hardware RAID if raw speed is really the deal. More likely you are more interested in additional storage and flexibility than raw power. The other question is: if you don't need the data integrity of ZFS, why bother with its overhead?

Anyway, good luck. Maybe @jgreco will speculate a little more. He likes fast toys too, he's even pointed out that board. :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've seen dual 10Gb saturated. I don't think I've even worked on a box with quad 10Gb. You're talking serious money, and there are very, very few uses for that. At some point it's smarter to have multiple servers than one really big (read: expensive) server.

Depends what you're doing. Quad 10GbE isn't horribly expensive on the server side of things, maybe $200 per port. It's usually the switch side that kills ya, since your typical small 10GbE switch is still up around $400 per port - and you've got to buy a number of them.

If you haven't worked on a box with quad 10GbE, that's going to change. The technology is here. Early adopters look to having one or two interfaces and then maximizing those, but eventually it turns to topology and convenience, where the point isn't necessarily to have four saturated 10GbE but rather an intelligent network design that allows maximum performance out several arbitrary legs at once. This mirrors how 100M and 1GbE were deployed, but in the case of 10GbE there's been years of lag because the technology hasn't gotten as much rapid traction - 1GbE actually turns out to be pretty sufficient and extremely cost-effective for all sorts of needs. The lifecycle of 1GbE has come as a bit of a surprise to those of us who date back to 10base5 days and the rapid evolution that took us from 10Mbps to gigabit in just a bit more than a decade.

Anyways, the real point is that 10GbE is less of an annoyance than LACP with n x 1GbE, and with the latest generation of hardware it's just about there.

Second you need enough CPU and RAM to push that kind of bandwidth just from the ARC

RAM is easy, RAM is (relatively) cheap and basically infinitely fast for the purposes of this discussion. CPU is the issue. ZFS is fundamentally using a host processor as your RAID controller, and getting that to work without bottleneck is going to be a problem. If we wanted to, just hypothetically, saturate 4x10GbE, I note that even the basic math says that's 40 times more difficult than saturating 1GbE...

I also note the OP's choice of processor is relatively poor. The 2620 v2 is a 2.1GHz part, while the 2637 v2 is a four-core 3.5GHz part and the 2643 v2 is a six-core 3.5GHz part. The extra memory capacity made available by the second (slower) part could be very helpful, but will probably not entirely offset the overall effect of having used slower parts.

Third you need a dozen or more vdevs (or some other config capable of handling the I/O *and* throughput)

Eh, maybe. It really depends on what sort of I/O load is on it. Random VM data is always going to be the killer, and you've probably not got a chance to get anywhere near there unless you go with SSD vdevs. Probably lots of them. But if you can get (most|all) of the working set into the ARC and you're heavy on reads, you might find that even a relatively crappy pool flies ... right up 'til you try to access data that isn't part of the working set.
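
If you want to know whether the working set actually fits, watch the ARC hit rate while the VMs are doing their thing. Something like this on the FreeNAS box itself (the counters are cumulative since boot, so treat it as a rough indicator):

import subprocess

def arcstat(name):
    """Read one kstat.zfs.misc.arcstats counter via sysctl(8)."""
    out = subprocess.run(["sysctl", "-n", "kstat.zfs.misc.arcstats." + name],
                         check=True, capture_output=True, text=True)
    return int(out.stdout.strip())

hits, misses, size = arcstat("hits"), arcstat("misses"), arcstat("size")
total = (hits + misses) or 1
print("ARC size: %.1f GiB" % (size / 2**30))
print("ARC hit rate since boot: %.1f%% (%d hits / %d misses)"
      % (100.0 * hits / total, hits, misses))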

On the memory and performance thing, the only real way to know is to run it. You are well spec'd: you have double the bandwidth of most high-end rigs, and RAM is plentiful with room for more if need be.

orly... I woulda thought something Fortville would be considered high end... be fun to play with an XL710-QDA2, 80Gbps of yummy network goodness... (note to self, ...write...letter..to...Santa...)

But the pool is slow, you have no flash based storage, and you haven't defined any real workloads beyond the fact that data integrity is secondary to speed. I like to see and make things go fast... so I'll follow along. Plus I like that board if it can be proven by those that go before. Pretty cost effective 10Gb setup for a small number of servers, imho. Plus mpio goodness ;). Fire that thing up and give it a work out.

The big problem with that board is that it is actually designed for one of their 2U ZFS appliance designs, so if you're only dropping a single CPU in there, which is what you're probably doing for a FreeNAS box, then you only get a single PCIe slot. I spent a lot of time wringing my hands over the possibility of making a hypervisor out of those 2U boxes, but their prebuilt only offers the 2x10GbT. I'm guessing you could request that they custom-build you the ones with the 2T2OS6, but I never got as far as the sales inquiry.

Personally, if it is actually a "go fast" rig, I'd have some SSD-based pools. Eight SSDs will make your pool faster than your network. But you could save the trouble by just shoving them in the ESXi boxes on hardware RAID if raw speed is really the deal. More likely you are more interested in additional storage and flexibility than raw power. The other question is: if you don't need the data integrity of ZFS, why bother with its overhead?

Well, the usual issue is that in a hypervisor cluster you want to be able to migrate VMs, at which point putting the storage on hardware DAS RAID is kind of sucky. You can do HBAs with a shared external RAID shelf (think: HP MSA P2000, etc.), but you're still limited as to how many hosts can attach. iSCSI gets attractive because it leverages Ethernet, which means you can always repurpose the gear if it turns out you guessed wrong.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Heh. Santa was looking at those XL710's the other night. So many yummy new boards with the new v3's. I'm trying to wait... but I grow weaker by the day ;)
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I went digging for that one once and believe there were driver issues with that Marvell controller. Maybe someone else has worked through them? There may be drivers in a different BSD version. It is battery-backed DRAM, which is exactly what we want. I suspect that if it were plug and play it would be VERY popular, but there is a reason why it isn't used.

The second you were OK with sync=disabled ... the SLOG became irrelevant. NFS should fly, CIFS will probably be processor-bound, and I'm sure the kernel-based iSCSI can fly... but it's hard to guess at limits without your hardware.

Basically this will boil down to how well you can keep your working set in the ARC, as it is the only thing fast enough to service your NICs. Heavy writes will get choked by your pool at some point, and you have already accepted full risk in exchange for speed. I know jgreco has been underwhelmed by the slower E5s as well, but it will be interesting to see how far you can push them.

As you test and nail down specifics, you give the guys like cyber and jgreco a better chance to help you tweak. Right now all we're going on is you'd like to run the big three hypervisors... and you have decent enough gear to have a shot. Frankly VM and DB workloads are what make my world turn. So every time I see related tuning I perk up.

If you get one of the WAMs to fly, that would be awesome and give you a shot at turning "safe" back on. Basically a poor man's Fusion-io if it works.

Good luck, I look forward to seeing how things turn out.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I've never seen the device before, but since Marvell has been a mess with BSD I don't have high hopes of it being compatible.

Also, because it's a PCIe device, I kind of expect it to have a very specific, very proprietary driver, which again makes it unlikely to work on FreeBSD.
 

HankC

Dabbler
Joined
Sep 23, 2014
Messages
21
What about tunables? Should I just use autotune, or tune manually?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Please consult the manual on autotune. As for manual tuning, if you think you are expert enough to do it, go ahead. Just be 100% sure you know what you are doing. ;)
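
If you do go digging, a read-only look at the relevant sysctls before and after enabling autotune will show exactly what it changed. Something like this; the list is illustrative (which knobs autotune actually touches varies by version), and nothing here modifies any setting:

import subprocess

# Sysctls commonly discussed for 10GbE and ARC sizing; illustrative only.
SYSCTLS = [
    "kern.ipc.maxsockbuf",       # cap on socket buffer size
    "net.inet.tcp.sendbuf_max",  # maximum TCP send buffer
    "net.inet.tcp.recvbuf_max",  # maximum TCP receive buffer
    "vfs.zfs.arc_max",           # upper bound on ARC size
]

for name in SYSCTLS:
    out = subprocess.run(["sysctl", "-n", name], capture_output=True, text=True)
    value = out.stdout.strip() if out.returncode == 0 else "not present"
    print("%s = %s" % (name, value))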
 