Expected throughput of file shares?

Status
Not open for further replies.

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Hi.

Having lost several hard drives this year to defects, I'm seriously considering moving the contents of my 2x4TB secondary storage to a ZFS NAS.

The PC runs Windows 7/64 and the main drive is an SSD. I do, though, do a lot of photo editing, and some (compressed, standard-resolution) video editing, from the HDDs.

I'm aware that the HDDs are probably able to give me up to 150 MB/s, whereas file sharing over the LAN will be constrained by the 1 GbE link to around 100 MB/s, but I'm thinking I could probably live with that.

What I might not be able to live with, though, are reports that Windows file shares max out /well/ below that -- I'm reading figures here of around 40 MB/s, which might be a problem.

Is there any way at all of getting a shared filesystem working closer to 100 MB/s?
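
(For rough numbers, here's a quick back-of-the-envelope check on where that ~100 MB/s ceiling comes from; the overhead percentages below are my assumptions, not measurements.)

Code:
# Rough ceiling of a single 1 GbE link for bulk file transfers.
# Overhead figures are assumed approximations, not measured values.
link_bps = 1_000_000_000      # 1 GbE raw line rate, bits per second
frame_overhead = 0.06         # assumed ~6% for Ethernet/IP/TCP headers at 1500-byte frames
protocol_overhead = 0.05      # assumed ~5% for SMB/CIFS protocol chatter

wire_MBps = link_bps / 8 / 1_000_000                              # 125 MB/s raw
usable_MBps = wire_MBps * (1 - frame_overhead) * (1 - protocol_overhead)
print(f"raw: {wire_MBps:.0f} MB/s, realistic ceiling: ~{usable_MBps:.0f} MB/s")
# -> ~112 MB/s, so ~100 MB/s over CIFS is a reasonable target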


(Moved from H&S main section)
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Start by reading jgreco's thread regarding hardware suggestions.

If you follow ALL those recommendations, you're likely to get the performance you desire. Since Samba is effectively single-threaded per client, you'll want a CPU with fast cores for CIFS. Right-size the memory for the amount of storage you have.
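
As a rough sketch of the sizing rule of thumb commonly quoted around here (treat the numbers as a guideline, not gospel):

Code:
# Commonly quoted forum rule of thumb: 8 GB base for FreeNAS/ZFS,
# plus roughly 1 GB of RAM per TB of raw pool storage.
def suggested_ram_gb(raw_storage_tb, base_gb=8):
    return base_gb + raw_storage_tb

print(suggested_ram_gb(8))    # 2 x 4 TB -> 16 GB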

Most of the users who see 40-ish MB/s with CIFS are constrained by their CPU. For example, I fall into that category: I wanted a small-form-factor "server" and got a deal on an HP MicroServer with a 1.5 GHz CPU. But, at the end of the day, my server meets my needs.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Not sure where you've been reading that 40 MB/s figure, but I run FreeNAS with Windows sharing and I get 95-100 MB/s consistently on large file transfers (no encryption).

Keep in mind, your throughput will drop substantially when transferring lots of small files, but that's more a limitation of your hard drives than of the network. Also, in general, read operations will be faster than writes.
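
If you want to check what you're actually getting on large transfers, a minimal sketch like this (the paths are hypothetical placeholders; point them at a local test file and your mapped share) times a single big copy over CIFS:

Code:
# Quick-and-dirty large-file throughput check from the Windows client.
# Paths are hypothetical placeholders -- substitute your own test file and mapped share.
import os
import shutil
import time

src = r"C:\temp\bigfile.bin"   # a multi-GB local test file (hypothetical)
dst = r"Z:\bigfile.bin"        # the mapped CIFS share (hypothetical drive letter)

start = time.time()
shutil.copyfile(src, dst)
elapsed = time.time() - start

mb = os.path.getsize(src) / 1_000_000
print(f"{mb:.0f} MB in {elapsed:.1f} s -> {mb / elapsed:.0f} MB/s")

A single multi-gigabyte file measures the sequential case described above; a folder of thousands of small files will come in much lower, for the reasons given.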
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For raw speed:

1) Do not ignore the constant advice to use Intel cards. You may have a card built in, but if it's a Realtek, for example, it isn't real good tek. Intel actually maintains a driver development group that releases a FreeBSD driver. They're interested in making their cards look good, not relying on some free-software hacker trying to reverse-engineer how the card is supposed to work.

2) For CIFS, core speeds are very important. See gpsguy's comment.

3) Avoid fancy features like compression or encryption, or, at least, test both with and without to see if the penalty is acceptable.

4) Use a good ethernet switch. Many of the cheap ones are ... cheap.

5) The HDDs will give you less than 150 MB/sec with ZFS. ZFS is massive and has lots of overhead. Four modern 3TB drives in RAIDZ2 here manage about 60-70 MB/sec write - locally. So picking a pool layout that's optimized for performance, rather than excessive redundancy, will help. Mirrored drives are faster (see the sketch below).
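
To make the space/performance trade-off concrete, here's a minimal sketch (my own illustration; the drive counts and sizes are just examples) of usable capacity under a few common layouts:

Code:
# Rough usable-capacity comparison for common ZFS layouts.
# Capacity math only; performance comments are qualitative.
def usable_tb(n_drives, drive_tb, layout):
    if layout == "mirror":    # striped mirrors: half the raw space
        return n_drives // 2 * drive_tb
    if layout == "raidz1":    # one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if layout == "raidz2":    # two drives' worth of parity
        return (n_drives - 2) * drive_tb
    raise ValueError(layout)

for layout in ("mirror", "raidz1", "raidz2"):
    print(f"{layout}: {usable_tb(4, 4, layout)} TB usable from 4 x 4 TB")
# Mirrors give up the most space but are generally the fastest;
# RAIDZ2 gives more redundancy per drive at the cost of write throughput, as noted above.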
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Many thanks for all your responses. Looks like what I want might be possible.

I'm certainly thinking of a Haswell- or IB-based 4-core Xeon (or the AMD equivalent) with 16 or 32 GB ECC. Intel LAN chip, noted. I'm thinking maybe 2 x 1 GbE ports to give me the ability to trunk if necessary, at least for non-CIFS traffic.

@jgreco: I haven't looked at GbE switches yet; I'm aware my 10/100s will need changing. What do you recommend?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
"Trunk" is a poorly defined word. Some people use it to mean a port with multiple vlans. Some people use it to mean LACP. Many seem to mean both.

If you mean LACP aggregation of your ports, regardless of whether it is CIFS traffic or not, there are some limits to what is practical; the standard makes LACP less useful (for good reason) than you might assume.

We're about fifteen years into the GbE marketplace. Gone are the days when you had to be wary of switches that weren't doing the heavy lifting in silicon. A lot of the switches around ten years ago had broadcast-domain issues with VLANs (I'm particularly thinking of an Accton unit sold by SMC, Dell, etc.). Most of that has gotten better since then. We've got a lot of Dell 5324s deployed that are pretty darn reliable -- about seven or eight years of continuous operation.

Don't get cheap switches though. Just more room for corners to be cut.

The one thing I would suggest is that if you care about performance above what 1GbE can provide, this is one of the first years that 10GbE is looking moderately attractive. For a small storage network, something like Netgear's XS708E or XS712T looks hard to argue with (note: we're a Netgear Partner but we won't sell you anything). The main downside appears to be that they're reportedly as loud as a small jet.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, the usual problem with LACP is that the hashing for a given connection means traffic for that connection will always flow out one link. If you have three connections, odds are that two will go out one link and one out the other, but it's possible for all three to go out one link and none out the other. LACP is mostly useful in larger networks, where enough traffic exists that this effect isn't as noticeable a problem.
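
To illustrate the hashing point, here's a toy sketch (real switches hash L2/L3/L4 fields according to their own policy, not a CRC of a string -- this just shows why a flow sticks to one link):

Code:
# Toy illustration of flow-hash link selection in a 2-port LACP group.
# The hash function is a simplified stand-in, not any switch's actual algorithm.
import zlib

NUM_LINKS = 2

def pick_link(src_ip, dst_ip, src_port, dst_port):
    # The result is stable for a given flow, so every packet of a
    # connection uses the same physical link -- one flow never exceeds 1 Gb/s.
    key = f"{src_ip},{dst_ip},{src_port},{dst_port}".encode()
    return zlib.crc32(key) % NUM_LINKS

flows = [
    ("192.168.1.10", "192.168.1.20", 50000, 445),  # CIFS session 1
    ("192.168.1.10", "192.168.1.20", 50001, 445),  # CIFS session 2
    ("192.168.1.11", "192.168.1.20", 50002, 445),  # a second client
]
for f in flows:
    print(f, "-> link", pick_link(*f))
# With only a handful of flows, it's entirely possible they all land on the same link.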

There are many small switches but not as many ways to separate the good from the bad. Historically, companies like Foundry have sourced their "entry level" switches (such as the Foundry EdgeIron 24G, another Accton variant) from other manufacturers and plastered their own label on them. It has gotten harder over the years to identify what to expect.

So I'll give you some general advice: go over to NewEgg and look at the reviews for any given device with a critical eye. Remember that even at NewEgg about 1/3 of the reviewers are morons.
 