10 Gbit link between FreeNAS and one client

bobb

Dabbler
Joined
Nov 26, 2016
Messages
19
Dear FreeNAS community,

I'm currently using a 6 drive system (in my signature) with an Intel NIC 1GbE connection through a GbE switch to my 2016 MacBook Pro. I've recently upgraded my video camera to 4K and now the files take forever to transfer at ~110mb/s. So I've decided it's time to upgrade to a 10 Gbit connection this year.

Here's my plan:
I think I'd prefer to connect the FreeNAS box and my MacBook through copper (RJ45) to future-proof the setup.
However, I'm not really willing to spend $500+ on a new external MacBook Pro 10GbE NIC, a new PCI-E 10GbE NIC, and a new 10GbE switch. So I'm thinking about getting just the PCI-E 10GbE NIC for the FreeNAS box and the external Thunderbolt 3 10GbE NIC for the MacBook, then connecting the two directly with a single cable and no routing/switching hardware in between.
When 10GbE switches drop to around $100 I'll probably buy one and integrate the two devices into the network through just one cable each, effectively getting rid of the 1GbE ports in the FreeNAS box and MacBook Pro.

Now the questions I've got:
1.) Would the proposed solution work? I.e. does a direct 10GbE ethernet connection between the MacBook Pro and the FreeNAS box work with no switches/routers in between and hence no DNS and DHCP servers present?
2.) Would the MacBook also have to be connected to the switch separately or is it going to be able to connect to the rest of the network by going the MacBook->10GbE FreeNAS->1GbE FreeNAS->Network switch route?
3.) What kind of transfer speed would I be able to get? The drives (6x WD RED 6TB) should do around 600mb/s combined if my math is right?

And one last question: Is this a solution you'd recommend or are there any major caveats?

Thanks for your help!

EDIT: System specs as requested for future reference
FreeNAS 9.10.2-U3
Supermicro MBD-X10SLM-F-B
Intel Core i3-4130
4 x 8GB Kingston ECC DDR3 1600MHz
6 x WD RED 6TB (RaidZ2)
SanDisk USB flash drive as boot drive
BeQuiet 630W Pure Power L8 PSU
Antec One case with 120mm Arctic F12 PWM fans
 
Joined
Oct 18, 2018
Messages
969
Hi @bobb, I suspect you're going to want to put off that 10 gigabit upgrade.

I'm currently using a 6 drive system (in my signature)
Would you mind editing your post to include your system specs in the post itself? That way, as your system changes, the post remains relevant to future folks reading this. You can save space by using spoiler tags.

[spoiler="System Specs"]
Specs
[/spoiler]

I'm currently using a 6 drive system (in my signature) with an Intel NIC 1GbE connection through a GbE switch to my 2016 MacBook Pro. I've recently upgraded my video camera to 4K and now the files take forever to transfer at ~110mb/s. So I've decided it's time to upgrade to a 10 Gbit connection this year.
Can you clarify the transfer rate? I suspect it is 110 megabits/sec, since network speeds are often reported in bits per second. If that is the case, upgrading from 1 Gbit/s to 10 Gbit/s won't give you any performance increase, since you're not even saturating your 1 Gbit/s link.
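
Rough math, assuming 8 bits per byte: 110 Mbit/s ÷ 8 is only about 14 MB/s, whereas a fully saturated 1 Gbit/s link moves 125 MB/s raw, or roughly 110-118 MB/s of actual payload after protocol overhead.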

  • What is the sharing protocol you're using?
  • Are you using sync writes?
  • How are the MacBook and FreeNAS systems physically connected to the network? Wirelessly? Hard-wired? What is the speed of that connection?
  • Do you have any hardware in the middle that may be saturated?

1.) Would the proposed solution work? I.e. does a direct 10GbE ethernet connection between the MacBook Pro and the FreeNAS box work with no switches/routers in between and hence no DNS and DHCP servers present?
Yes, you can do that, no problem.

2.) Would the MacBook also have to be connected to the switch separately or is it going to be able to connect to the rest of the network by going the MacBook->10GbE FreeNAS->1GbE FreeNAS->Network switch route?
You will likely want to connect your MacBook and FreeNAS to your network independently of the direct link you are proposing. It is possible to do it without that, via a bridge, if you absolutely must.
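
If you do go the bridge route, it's a couple of ifconfig lines on the FreeNAS side. A rough sketch only; ix0/igb0 are placeholder interface names, and on FreeNAS you'd persist this through the GUI/Tunables rather than typing it at a shell:

[code]
# create a bridge and add both the 10GbE (ix0) and 1GbE (igb0) NICs as members
ifconfig bridge0 create
ifconfig bridge0 addm ix0 addm igb0 up
# the bridge itself, not the member NICs, then carries the IP address
ifconfig bridge0 inet 192.168.1.10 netmask 255.255.255.0
[/code]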

3.) What kind of transfer speed would I be able to get? The drives (6x WD RED 6TB) should do around 600mb/s combined if my math is right?
It depends a LOT on how you set up your system. For read speeds, if you're reading data from the ARC you're going to be limited by your network/protocol. For writes, it depends on whether you're using sync or async writes.
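
You can check, and if you accept the risk change, this per dataset. A sketch; tank/videos is a placeholder dataset name:

[code]
zfs get sync tank/videos             # standard, always, or disabled
# zfs set sync=disabled tank/videos  # async-style speed, but data in flight is lost on power failure
[/code]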
 

bobb

Dabbler
Joined
Nov 26, 2016
Messages
19
Hi @bobb, I suspect you're going to want to put off that 10 gigabit upgrade.

Would you mind editing your post to include your system specs in the post itself? That way, as your system changes, the post remains relevant to future folks reading this. You can save space by using spoiler tags.

[spoiler="System Specs"]
Specs
[/spoiler]

Done.

Can you clarify the transfer rate? I suspect it is 110 megabits/sec, since network speeds are often reported in bits per second. If that is the case, upgrading from 1 Gbit/s to 10 Gbit/s won't give you any performance increase, since you're not even saturating your 1 Gbit/s link.
  • What is the sharing protocol you're using?
  • Are you using sync writes?
  • How are the MacBook and FreeNAS systems physically connected to the network? Wirelessly? Hard-wired? What is the speed of that connection?
  • Do you have any hardware in the middle that may be saturated?

No, I'm talking about 110 megabytes per second. I'm fully saturating that link. And it's currently a pain to use since I'm somewhat frequently pushing 500-600GB worth of files from a single day of video shooting to the box, then editing off of it, then moving it off the box once it goes into archive status. I suspect that a 10GbE link would improve my situation a lot. I'll probably also add another 1-2 drives later this year, which should push drive read/write speed into the 800 mb/s range (not mbit, I'll let ya know when I'm talking about that).
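
To put numbers on it: 500 GB at 110 MB/s is roughly 500,000 ÷ 110 ≈ 4,500 seconds, i.e. about 75 minutes each way; at 600 MB/s the same transfer would take under 15 minutes.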

You will likely want to connect your MacBook and FreeNAS to your network independently of the direct link you are proposing. It is possible to do it without that, via a bridge, if you absolutely must.

Thanks, I found the post about bridges elsewhere on the forum. It doesn't seem to be an ideal solution. I guess buying one of those switches with two 10GBase-T ports and several 1000Base-T ports might be the best option to integrate the setup into the current gigabit network.

It depends a LOT on how you set up your system. For read speeds, if you're reading data from the ARC you're going to be limited by your network/protocol. For writes, it depends on whether you're using sync or async writes.

Since I'm accessing rather large files (100GB for a single file isn't unusual), I don't think they fit in the 32GB of RAM in the box right now, so I'm almost never reading from the ARC.
 
Joined
Oct 18, 2018
Messages
969
Thanks, I found the post about bridges elsewhere on the forum. It doesn't seem to be an ideal solution. I guess buying one of those switches with two 10GBase-T ports and several 1000Base-T ports might be the best option to integrate the setup into the current gigabit network.
This might be a good option because you could use fibre on one side and RJ45 on the other.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
1.) Would the proposed solution work? I.e. does a direct 10GbE ethernet connection between the MacBook Pro and the FreeNAS box work with no switches/routers in between and hence no DNS and DHCP servers present?

It will work in the sense that network traffic will flow between the boxes. Whether it will give you the speed you are looking for is another question. You will need to configure your IP addresses and netmask manually, but that is pretty easy.
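
Something like this on each end. A sketch only; the interface names (ix0, en7) and the 10.10.10.x addresses are just examples, any unused private subnet will do:

[code]
# FreeNAS side (FreeBSD), normally set through the GUI network settings:
ifconfig ix0 inet 10.10.10.1 netmask 255.255.255.0

# macOS side, for a Thunderbolt NIC that shows up as en7:
sudo ifconfig en7 inet 10.10.10.2 netmask 255.255.255.0
[/code]

No DHCP or DNS is needed; you then reach the shares at 10.10.10.1 directly.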

2.) Would the MacBook also have to be connected to the switch separately or is it going to be able to connect to the rest of the network by going the MacBook->10GbE FreeNAS->1GbE FreeNAS->Network switch route?

You can do routing in FreeNAS, but it is not recommended. FreeNAS is meant to be a storage appliance and only that. What you described here would turn FreeNAS into a router. Possible, but not recommended from FreeNAS's point of view.

It would also not be recommended from your MacBook's point of view. Once behind a router, no matter whether it is a really good one or a not-so-recommended one like this setup, the MacBook will not be in the same broadcast domain as everything else. AirPlay and many other protocols will not work anymore because the devices will not be able to discover each other using broadcasts.

3.) What kind of transfer speed would I be able to get? The drives (6x WD RED 6TB) should do around 600mb/s combined if my math is right?

Only trial and error will say... Is your pool capable of 600 mb/s? I'm not sure. A single RAID-Z2 pool has basically the speed of a single drive, because all drives are required for every access. You also did not say anything about compression or CPU clock speed. Will that CPU be able to compress/decompress that much data on the fly? I hope you did not touch dedup or pool-level encryption...

You will not end up slower than what you have now. You will surely gain something. But how much is speculation....

Overall, I would not do such a setup myself. Keep FreeNAS as a storage appliance, and should you truly need that speed, then you have your business case to do it properly.

Good luck,
 

bobb

Dabbler
Joined
Nov 26, 2016
Messages
19
Only trial and error will say... Is your pool capable of 600 mb/s? I'm not sure. A single RAID-Z2 pool has basically the speed of a single drive, because all drives are required for every access. You also did not say anything about compression or CPU clock speed. Will that CPU be able to compress/decompress that much data on the fly? I hope you did not touch dedup or pool-level encryption...

You will not end up slower than what you have now. You will surely gain something. But how much is speculation....

Overall, I would not do such a setup myself. Keep FreeNAS as a storage appliance, and should you truly need that speed, then you have your business case to do it properly.

Good luck,

Sorry, but your first statement doesn't make sense. The total pool speed should be the sum of all individual disk speeds that are part of the pool. So assuming a WD RED writes about 100 mb/s, this should yield a total of 600 mb/s with 6 drives in the pool. Dedup and pool-level encryption are disabled.

The whole routing issue has been discussed above. I won't route traffic through the NAS.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Sorry, but your first statement doesn't make sense. The total pool speed should be the sum of all individual disk speeds that are part of the pool.

Good for you that you know better than the ones who try to help you then.
 

bobb

Dabbler
Joined
Nov 26, 2016
Messages
19
Just for future reference if someone finds this, there's no need for speculation. Simply test your pool speed using dd, as described here. dd replicates big sequential file write/read operations on the pool and as such provides a good means of measurement in my case.
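
For example, as a sketch; tank is a placeholder pool name, and the file needs to be much larger than RAM so the ARC can't mask the result:

[code]
# write test: ~100 GiB, well above the 32 GB of RAM in this box
# (zeros are highly compressible; see the replies below about using random data)
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=102400
# read test, then clean up
dd if=/mnt/tank/ddtest of=/dev/null bs=1m
rm /mnt/tank/ddtest
[/code]

dd prints the achieved throughput when it finishes.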
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Just keep in mind the advice to generate random data so that compression doesn't give you a false sense of speed.
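
One way to do that, as a sketch (paths and sizes are only examples): feed dd something incompressible, or simply turn compression off on a scratch dataset for the duration of the test:

[code]
# option 1: incompressible input (note: the random generator itself can be CPU-limited)
dd if=/dev/random of=/mnt/tank/rndtest bs=1m count=102400
# option 2: keep /dev/zero as the source but disable compression on a test dataset first
zfs set compression=off tank/test
[/code]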
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Just for future reference if someone finds this, there's no need for speculation.

Wow. I hope you are already inside your place, because with a head that large, it won't pass through even a garage door!

First, as @Constantin mentioned, you must use random input in any of your tests: to defeat compression, yes, but also to defeat caching.

Second, to defeat the caching, you need to do more. You must measure the cache's capacity and ensure it is negligible compared to the size of the entire test.

ZFS options are next: any compression? Dedup? Encryption? Multiple-copies storage? Other ZFS features? They must all be neutralized for an objective measurement.
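
The first few are easy to check in one shot (tank being a placeholder pool name):

[code]
zfs get compression,dedup,copies tank
[/code]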

How about pool usage? Performance drops significantly as a pool fills up: ZFS needs easy-to-find large chunks of free space so it does not waste time searching for them.

And how do you deal with the fact that dd is a single sequential access pattern, when many performance requirements are for random access (iSCSI or zvols; fragmented pools; VMs; ...)?

So know that there are tons of things that will affect performance significantly; so many that neutralizing all of them will often push you out of your use case anyway.
 

bobb

Dabbler
Joined
Nov 26, 2016
Messages
19
Just keep in mind the advice to generate random data so that compression doesn't give you a false sense of speed.
Thanks for the heads up! Will keep that in mind :)
 

bobb

Dabbler
Joined
Nov 26, 2016
Messages
19
Wow. I hope you are already inside your place, because with a head that large, it won't pass through even a garage door!

First, as @Constantin mentioned, you must use random input in any of your tests: to defeat compression, yes, but also to defeat caching.

Second, to defeat the caching, you need to do more. You must measure the cache's capacity and ensure it is negligible compared to the size of the entire test.

ZFS options are next: any compression? Dedup? Encryption? Multiple-copies storage? Other ZFS features? They must all be neutralized for an objective measurement.

How about pool usage? Performance drops significantly as a pool fills up: ZFS needs easy-to-find large chunks of free space so it does not waste time searching for them.

And how do you deal with the fact that dd is a single sequential access pattern, when many performance requirements are for random access (iSCSI or zvols; fragmented pools; VMs; ...)?

So know that there are tons of things that will affect performance significantly; so many that neutralizing all of them will often push you out of your use case anyway.
First of all: chill out, please. There’s no reason to get aggravated. Let’s be nice here. Makes talking to each other far more fun. I’m sorry for you if you’re having a bad day.

To stay on topic: the dd test doesn't provide an exact number to rely on. I think that's a given for people who know anything about data transmission. Even after an estimate is obtained by using dd to write a test file, the actual transmission speed is still affected by noisy networks and other parameters. Nobody's trying to get an exact number here, but rather a range. Are we talking about 150 mb/s or more like 600? That's the question to answer, and I think for my use case of writing very large files dd is more than capable of providing such an estimate.

Also a quick note for others, should they find this thread in the future: caches can usually be defeated by writing to them until they fill up. At that point true write-to-disk speeds are exposed.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Let’s be nice here. Makes talking to each other far more fun. I’m sorry for you if you’re having a bad day.

Considering how you dismissed what you did not understand as plain nonsense, let's hope you will apply your own words from now on. So now, let me explain why a pool made of a single RAID-Zx vdev has about the performance of a single drive, taking your 6-drive RAID-Z2 as the example.

When writing to such a vdev, FreeNAS needs to write to a minimum of 3 drives: 1 for the data and 2 for redundancy. Should it need to write data beyond the capacity of a single block, more data blocks will be used, up to 4. To write to a drive, the head must first seek to a free spot and then start writing. So the minimum is to seek and write on 3 drives. Statistically, one of these 3 writes will be slower than average, so performance will be lower than that of a single drive.

For reading back, all drives containing the data must be used at once; each one must seek and read. Should the data sit on a single drive, only that drive seeks and reads, so the performance is that of a single drive. If more drives are required, the same statistical rule applies and one of them will be slower than average.

Considering that the seek time is by far the longest part of the operation, the impact is already there.

Should you have very long sequences to write, each of your drives seeks once and can then read or write many blocks in a row. So for very long sequential reads/writes, you may transfer more data in less time and exceed the performance of a single drive. But this is the only such case and, unfortunately, it does not happen as often as one might imagine.

Zvols and iSCSI are cases where ZFS does not have a clue what the data is. Because ZFS cannot handle the data as a long sequence, it cannot optimize it. A database is another example: data is changed here and there with little apparent pattern. These cases are random I/O, not sequential.

But even if you do basic file handling, a pool may very well end up fragmented. Because ZFS is copy-on-write, changed data is written to whatever free space is available. After enough changes on a significantly loaded pool, the free space becomes more and more fragmented. Fragmentation is exactly when sequential data ends up written in non-sequential space.

So an ideal RAID-Zx pool holding long sequential data that does not end up fragmented will deliver better than single-drive performance. But that ideal situation is rare, and fragmentation, databases, VMs, zvols, iSCSI, multiple users accessing at once, and more all push you toward random I/O. When it is time to serve random I/O, a RAID-Zx pool does indeed offer basically the performance of a single drive.
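
As a concrete illustration (assuming the default 128 KiB recordsize and a full-width stripe): on your 6-drive RAID-Z2, one 128 KiB record is split into four 32 KiB data pieces plus two parity pieces, one piece per drive. A long sequential read can keep four data drives streaming at once, but every small random read still waits on the slowest seek among the drives involved, which is why random IOPS stay at roughly single-drive level.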
 
