Unless your drive array is something pretty exotic, you're wasting your time with the jumbo frame MTU setting. In my experience, getting jumbo frames to work correctly across different OSes, different NIC vendors, and switches is seldom worth the payoff in a 1GbE environment, much less in a 10GbE one. My 1600 MiB/sec performance benchmark was done without the use of jumbo frames...
While I agree with everything else that has been recommended, let me just say that in my experience with VMware and FreeNAS as an iSCSI SAN, jumbo frames are /absolutely/ worth it, provided you've got a dedicated physical network or are isolating traffic via VLANs. And they're especially beneficial if you are bandwidth-constrained.
It’s correct that the setup can be tough to get right and tough to validate.
You set up jumbo frames in four places. I'll go from the inside out (there's a command-line sketch after this list):
1) On the FreeNAS network interface settings, for each iSCSI interface. Use the max of 9000.
2) On the physical switch connecting FreeNAS to your VMware iSCSI network. Make sure the switch is set to pass the maximum frame size, which should be 9216, on each iSCSI port (the headroom over 9000 covers Ethernet and VLAN headers).
3) On the distributed switch's or vSwitch's configuration settings. Set the MTU to 9000.
4) On each VMkernel interface attached to the iSCSI HBA, inside the host network config in VMware. Set the MTU to 9000.
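For reference, here is roughly what steps 1, 3, and 4 look like from a shell. Treat it as a sketch, not gospel: igb0, vSwitch1, and vmk1 are placeholder names, so substitute your own, and the FreeNAS ifconfig change won't survive a reboot (put "mtu 9000" in the interface's Options field in the GUI to make it persistent). Step 2 is vendor-specific switch CLI, so consult your switch's documentation.

    # 1) FreeNAS (FreeBSD): set MTU 9000 on an iSCSI interface
    ifconfig igb0 mtu 9000

    # 3) ESXi: set MTU 9000 on the standard vSwitch carrying iSCSI
    #    (a distributed switch gets its MTU set in vCenter instead)
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

    # 4) ESXi: set MTU 9000 on each iSCSI VMkernel interface
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000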
To test it, you will want to send pings to the FreeNAS iSCSI portal IPs over the iSCSI network. Unfortunately, ESXi's ping is not up to the task, because you need ping options that set the don't-fragment bit as well as the packet size. Do this:
1. The exact ping syntax differs by OS, so it's time to make a VM of your favorite Mac, Linux, or Windows flavor.
2. Attach it temporarily to the iSCSI network using ESXi's web client or vCenter. Assign it an IP address on one of your iSCSI subnets that does not conflict with FreeNAS or the VMware iSCSI initiators. I use /29 networks for my iSCSI subnets because they have six usable addresses, which is room for multiple ESXi hosts, VMs, and a FreeNAS portal IP per subnet.
3. Log into the VM and ping away. This article explains which ping options to use with which OS (see also the example commands after this list):
https://blah.cloud/hardware/test-jumbo-frames-working/
4. If the ping works, you're all set. Otherwise, check your settings; something will be wrong in one of the four places above.
5. Repeat steps 2-4 for each unique iSCSI subnet.
6. You can delete or shut off this VM when you're done testing.
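For convenience, here are the ping invocations I'd expect to work, assuming an MTU of 9000 and using 10.1.1.1 as a stand-in for your FreeNAS portal IP. The payload is 8972 bytes because the 9000-byte MTU must also carry the 20-byte IP header and the 8-byte ICMP header:

    # Linux: -M do sets don't-fragment, -s sets the payload size
    ping -M do -s 8972 10.1.1.1

    # Windows: -f sets don't-fragment, -l sets the payload size
    ping -f -l 8972 10.1.1.1

    # macOS: -D sets don't-fragment, -s sets the payload size
    ping -D -s 8972 10.1.1.1

If the replies come back, the full 9000-byte path works end to end; if you see "message too long" or fragmentation errors, one of the four MTU settings above is off.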
I have set up jumbo frames at MTU 9000, and on an MPIO'd 2x10GbE network they boost my performance by 10-20% depending on the type of traffic. The gain is most evident when synchronous writes are present: vCenter operations, vCenter or other VM backups, or a VM mounting read-write NFS file systems on one or more other VMs. But jumbo frames will also boost performance on regular I/O between your VMs and the SAN by at least a bit, especially if you are maxing out your Ethernet bandwidth over iSCSI.
Warning: please /do not/ use jumbo frames if you cannot isolate iSCSI traffic to its own layer 2 or layer 3 domain. Separate physical switches are ideal, but VLANs on a shared switch work just as well.