Odd error on boot after putting a small 10Gb network together.

Avius

Cadet
Joined
Nov 8, 2016
Messages
7
mlxen0: Tso6 disabled due to -txcsum6.
mlxen0: enable txcsum first.
bridge0: error setting capabilities on mlxen0: 35

SFP+ cables
ConnectX-2 cards, single port.
MikroTik CRS305-1G-4S+IN
I haven't messed with the MTU, since I've read in a few forum posts that it's not something you should do with FreeNAS 11.2.

Everything else is working fine; I can connect, and Plex is running. But transfers are capped at 150 MB/s.
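Edit: for anyone who hits the same boot messages, the log seems to say that tso6 was turned off because txcsum6 is off, and it wants transmit checksum offload enabled first. Below is a minimal sketch of re-enabling the capabilities in that order, assuming stock FreeBSD ifconfig flags; "mlxen0" is the interface name from the log, and this needs root:

```python
# A minimal sketch, assuming stock FreeBSD ifconfig capability flags.
# The boot log says tso6 was disabled because txcsum6 is off and asks for
# txcsum to be enabled first, so restore the capabilities in that order.
import subprocess

IFACE = "mlxen0"  # interface name from the boot log above

def set_caps(iface, caps):
    """Apply ifconfig capability flags (e.g. 'txcsum' enables, '-txcsum' disables)."""
    subprocess.run(["ifconfig", iface] + caps, check=True)

set_caps(IFACE, ["txcsum", "txcsum6"])  # enable transmit checksum offload first
set_caps(IFACE, ["tso6"])               # then TSO6 can come back up

# Print the resulting capability list to confirm the flags stuck.
print(subprocess.run(["ifconfig", IFACE], capture_output=True, text=True).stdout)
```

If that clears the bridge0 capabilities error, the same flags can be made persistent via the interface's Options field in the FreeNAS network settings rather than re-running anything by hand.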
 

F!$hp0nd

Dabbler
Joined
Apr 18, 2016
Messages
13
The transfer-rate ceiling is more down to the speed of the drives than the network. Messing with the MTU is not going to do much unless you enable jumbo frames. Also, mind the units: MB/s is megabytes per second, Mb/s is megabits per second. 150 MB/s works out to 1200 Mb/s of payload, roughly 1.5 Gb/s on the wire once you account for TCP and Ethernet overhead. Enabling jumbo frames will raise this, but only if the drive configuration can keep up (i.e. cache drives, or a hybrid of SSD and spinning disk).
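A quick sketch of that conversion; the 10-bits-per-byte wire factor is a rule-of-thumb assumption for protocol overhead, not a measured value:

```python
# Unit sanity check: 1 megabyte = 8 megabits of payload. A ~10 bits-per-byte
# rule of thumb is sometimes used to fold in TCP/IP and Ethernet framing
# overhead (an approximation, not an exact figure).
def to_mbits(megabytes_per_s, bits_per_byte=8.0):
    return megabytes_per_s * bits_per_byte

rate = 150  # MB/s, the transfer speed reported above
print(f"{rate} MB/s = {to_mbits(rate):.0f} Mb/s payload")
print(f"{rate} MB/s ~ {to_mbits(rate, 10.0):.0f} Mb/s on the wire (rule of thumb)")
# -> 1200 Mb/s payload, ~1500 Mb/s on the wire: only about 15% of a 10 Gb/s link.
```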

I have a few boxes that are ZFS striped mirrors with 24 drives (the equivalent of RAID 10), using 7200 rpm drives.
I have a couple of boxes that are ZFS striped mirrors with 12 drives (again RAID 10), using Intel P4800X drives.

Spinning-disk box:
Max read speed (over SMB) = 250 MB/s ≈ 2000 Mb/s
Max write speed (over SMB) = 210 MB/s ≈ 1700 Mb/s

NVMe box:
Max read speed (over SMB) = 320 MB/s ≈ 2600 Mb/s
Max write speed (over SMB) = 410 MB/s ≈ 3300 Mb/s

However, this is all over SFP+.

No single box can fully saturate a 10-gig network unless jumbo frames are enabled on the core switch and the box is all SSDs. Spinning disks are just not fast enough to saturate the link.
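As a back-of-the-envelope check on that claim, here is a sketch under an assumed per-disk rate (the 175 MB/s figure is an assumption, not a measurement from these boxes):

```python
# Back-of-the-envelope check: how many spinning disks' worth of sequential
# throughput would it take to fill 10GbE? The per-disk rate is an assumption.
LINK_MB_S = 10_000 / 8   # 10 Gb/s expressed as MB/s of payload (~1250)
PER_DISK_MB_S = 175      # assumed average sequential rate of a 7200rpm disk

print(f"~{LINK_MB_S / PER_DISK_MB_S:.1f} disks of striped sequential throughput")
# -> roughly 7-8 disks in theory; real pools give some of that back to parity,
# fragmentation, and protocol overhead, which is where opinions diverge below.
```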


Adding cache drives does help, but because there are over 200 users hitting my boxes, all accessing different files, the cache mostly helps with writes, which in turn keeps reads from being impacted by users writing data to the zpool.

Therefore I would say 150 MB/s is pretty good for a plug-and-play setup.

Hope this helps.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
F!$hp0nd said: "No single box can fully saturate a 10-gig network unless jumbo frames are enabled on the core switch and the box is all SSDs. Spinning disks are just not fast enough to saturate the link."

Disagree.

I get the same performance you are listing for your NVMe box using 6 WD Red drives in RAIDZ2: 300-400 MB/s read and write. I could easily get to 600-800 MB/s with one or two more vdevs.

If you are only getting 210-250 MB/s using mirrors, there is something very wrong in either the design or the configuration.
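A rough model of the scaling point; the per-vdev rate is an assumed illustration, not a benchmark result:

```python
# ZFS streaming throughput grows roughly linearly with vdev count.
# The per-vdev rate is an assumption (one RAIDZ2 vdev of WD Reds).
PER_VDEV_MB_S = 350

for vdevs in (1, 2, 3):
    print(f"{vdevs} vdev(s): ~{vdevs * PER_VDEV_MB_S} MB/s streaming")
# -> ~350, ~700, ~1050 MB/s, in line with 300-400 MB/s measured on one vdev
# and the 600-800 MB/s estimate for one or two more.
```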
 

Avius

Cadet
Joined
Nov 8, 2016
Messages
7
On the drive side: I have four 4 TB Reds in RAIDZ1 and four 6 TB HGSTs in RAIDZ2. Both get the same transfer rate, so I thought there might be something I needed to configure to get better than 150 MB/s.
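For a sense of what those layouts could do in theory, here is a sketch with an assumed per-disk sequential rate:

```python
# Rough expectation for the two pools above, with an assumed per-disk rate.
# RAIDZ streaming throughput scales with data disks (total minus parity).
PER_DISK_MB_S = 150  # assumed sequential rate of one NAS drive

pools = {
    "4x 4TB Red, RAIDZ1": 4 - 1,   # 3 data disks
    "4x 6TB HGST, RAIDZ2": 4 - 2,  # 2 data disks
}
for name, data_disks in pools.items():
    print(f"{name}: ~{data_disks * PER_DISK_MB_S} MB/s streaming, best case")
# -> ~450 and ~300 MB/s in theory, so an identical 150 MB/s cap on both pools
# hints at something other than raw disk speed (client, protocol, or network).
```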
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
There are some tunables you can set. I would also try bypassing the switch and testing over a direct connection.

I attached the ones I use.
(attached screenshot of tunables: 1567789863120.png)
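The screenshot does not come through in this copy of the thread. As a stand-in only, these are sysctl tunables commonly suggested for 10GbE on FreeBSD; the names are real sysctls, but the values are illustrative assumptions and may not match the ones in the attachment:

```python
# Stand-in only: the attached screenshot is not visible here. These are
# sysctl tunables commonly suggested for 10GbE on FreeBSD; the names are
# real sysctls, but the values are illustrative assumptions.
TUNABLES = {
    "kern.ipc.maxsockbuf": 16777216,       # max socket buffer size
    "net.inet.tcp.sendbuf_max": 16777216,  # TCP send-buffer autotuning ceiling
    "net.inet.tcp.recvbuf_max": 16777216,  # TCP receive-buffer autotuning ceiling
}

# Emit one-off sysctl commands for testing before making anything permanent
# under System -> Tunables in the FreeNAS UI.
for name, value in TUNABLES.items():
    print(f"sysctl {name}={value}")
```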
 