Hi there,
I've recently put in a cheap 10GbE backbone in my SMB server room: a Netgear ProSAFE Plus XS708E connected to two ESXi servers with onboard Intel 10GbE NICs (SuperMicro boards), plus an Intel X540-T1 card added to my FreeNAS 9.3 box. The switch reports a full-duplex 10GbE link to every device. It should be quite the infrastructure upgrade for $1200.
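To confirm the link on the FreeNAS side, I've also been checking the interface directly (ix0 is just my guess at the X540's interface name under FreeBSD's ix driver; adjust to whatever the card actually shows up as):

ifconfig ix0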
My FreeNAS server has 16GB of ECC RAM and 10 Western Digital SE (server-grade SATA) 3TB drives, with a 256GB SSD SLOG and a 256GB SSD ZIL device. The hard drives are configured as mirror VDEVs, leaving a little under 10TB of usable storage in the array.
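For reference, I can double-check the vdev layout and the log device from the shell (the pool name "tank" below is just a placeholder for whatever I actually called it):

zpool status tank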
My "problem" isn't huge. I definitely see the 10GbE benefit on multiple streams of data going to and from the FreeNAS box, but a single file transfer never goes much past 1GbE speeds (a little higher, but only like 133MB/s). However, if I send multiple files simultaneously from multiple VMs, the aggregated speed seems to be more around 300-400MB/s ... about what I would expect from my RAID10. All VMs are using the VMX3 network card driver, and the VMs state they have a 10GbE connection. The processors have miles of idle time on them while doing the transfers, so it's not a CPU bound issue.
Most of my shares from FreeNAS are CIFS, and the VMs involved are mostly Server 2012 R2, so they should be negotiating SMB3, which should mitigate some of the inherent SMB overhead. I did try an NFS mount and got about the same speeds, though.
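To confirm the negotiated dialect rather than assuming it, I can look at the active sessions on the FreeNAS side (this assumes the Samba build shipped with FreeNAS 9.3 reports the protocol version in its session listing):

smbstatus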
Is there anything I can check on or modify to increase my transfer rate from my ESXi boxes to FreeNAS (and vice-versa)?
Thanks for any pointers you can provide. I searched some other threads, but didn't find anything quite like this.
EDIT: Oh my, my array seems mighty slow internally. I just ran dd if=/dev/zero of=testfile bs=1024 count=100000 and got about 103MB/s (a poor result for a single drive, much less all 5 VDEVs working in concert). The read wasn't much better. I repeated it with a 10GB file and it was consistently bad.
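Thinking about it, that test only writes ~100MB in 1KB blocks, so the tiny block size and ARC caching probably dominate the result. A more representative sketch of what I intend to try (the dataset path /mnt/tank/test is a placeholder, and compression should be turned off on it, since /dev/zero compresses to almost nothing under lz4):

# sequential write: 1MB blocks, ~32GB total (well past my 16GB of RAM, so the ARC can't hide it)
dd if=/dev/zero of=/mnt/tank/test/ddfile bs=1m count=32000
# sequential read of the same file back
dd if=/mnt/tank/test/ddfile of=/dev/null bs=1m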