ViciousXUSMC
Dabbler · Joined: May 12, 2014 · Messages: 49
I have searched the web all night and day, then crawled about 50 pages deep through this forum looking for information before I posted.
I am experiencing slower speeds than I should be getting on my new 10Gb network setup, and the issue is isolated to FreeNAS, so I'm looking for help.
This is the most relevant environment information I can think of.
I have two 10Gb hosts using Mellanox ConnectX-3 cards over multi-mode fiber on an Aruba S2500 Layer 3 switch.
Strictly Layer 2: no Layer 3 routing, no VLANs, no crazy MTU adjustments. Just a nice basic network setup.
ESXi has just one vSwitch, and all the VMs are in the same network and port group.
Host #1 - Windows 10
Newest drivers from Mellanox on my Windows 10 machine
2600K CPU and 16GB RAM
Host #2 - ESXi 6.5
Has about 6 VMs on it, all of them with VMware Tools and the VMXNET3 NIC.
Host has dual Xeon L5460's and 96GB of RAM.
FreeNAS is allocated 8 vCPUs and 46GB of RAM; the high CPU count was for the Plex jail.
Had FreeNAS 11.1-U6 but have now upgraded to 11.2.
I also have two Transmission jails installed.
Jails were Warden-based, now migrated to iocage using the migration script.
I see no contention for resources on any machine during my testing.
Here is what I can say I know.
Testing with iperf3, as it is a good way to see raw network speeds without the overhead of storage subsystems or different TCP/IP protocols.
All of the VMs are using the same host/subnet/uplink.
I can test from any of them and get about 9Gb/s with iperf3 using 5 parallel streams.
FreeNAS gives me about 3Gb/s.
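For reference, the tests were run roughly like this (the IP address is a placeholder for whichever host is acting as the server):

```shell
# On the receiving host (e.g. the Windows 10 box), start an iperf3 server:
iperf3 -s

# From each VM and from FreeNAS, run a client with 5 parallel streams
# for 30 seconds (192.168.1.10 is a placeholder address):
iperf3 -c 192.168.1.10 -P 5 -t 30

# Optionally reverse direction (server sends to client) to check both paths:
iperf3 -c 192.168.1.10 -P 5 -t 30 -R
```

Same command everywhere, so the ~9Gb/s vs ~3Gb/s difference is apples to apples.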
**Something is "up" with jails.** When I upgraded to 10Gb it pretty much broke my Plex jail. It started to intermittently stop responding to pings and got sluggish.
I could reproduce or eliminate the issue easily with one of two solutions:
change the ESXi uplink back to 1Gb and the Plex jail started to behave again, or, inside the jail, stop the Plex services; then the jail worked fine even on the 10Gb uplink.
I have not noticed any issues with the Transmission jails, they work properly on 10gb.
**However**, with all the testing I have been doing trying tunables, tweaks, reboots, etc., I did notice I get a pretty substantial boost in speed if I shut all the jails down. I go from about 3Gb/s to 6Gb/s with the jails off, and there is no way they are using 2-3Gb/s of bandwidth themselves. I see bursts up to 7Gb/s, but it seems like you get a high speed for a moment, it quickly drops off, and you have to wait a while for some kind of buffer to refill before you can hit those speeds again.
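The jails-off test was done roughly as below; the jail names here are examples from my setup (yours will differ), and the iperf3 target IP is a placeholder:

```shell
# Show jails and their current state:
iocage list

# Stop the jails one at a time (plex plus the two transmission jails):
iocage stop plex
iocage stop transmission1
iocage stop transmission2

# Re-run the throughput test with everything stopped:
iperf3 -c 192.168.1.10 -P 5 -t 30

# Bring the jails back up afterwards:
iocage start plex
iocage start transmission1
iocage start transmission2
```

With the jails stopped the same client command jumps from ~3Gb/s to ~6Gb/s.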
None of the other machines have any issues and pull 9Gb/s constantly.
Here is a video of the Plex Jail issue, and it helps show how my environment is setup: https://youtu.be/3WPrQUoMcIk
There has got to be some kind of magic setting here, as the network has been verified seven ways from Sunday and all the equipment is functioning properly. I can isolate FreeNAS as the only slow part, not the ESXi host or any other VM.
I guess I will just start with this and see where we go with it. Need more info or specific testing? Let me know and I'll try to oblige.