I've recently set up a 12x6TB SAS pool in RAID 10 (striped mirrors) with an Intel P3520 PCIe NVMe SLOG (ZIL) on a new Supermicro chassis. Ethernet is Intel X740 and X540 PCIe cards, into a Quanta LB6M switch.
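For reference, the layout is six two-way mirrors striped together with the P3520 as a dedicated log device, roughly like this (da0-da11 and nvd0 are placeholder device names, not my actual ones):

# sketch of the pool layout: 6 striped mirrors + NVMe log device
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11 \
  log nvd0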
In the past we've used iSCSI for hosts to connect to FreeNAS because we had 1Gb hardware and wanted round-robin multipathing, etc. Now that we're moving to 10Gb we decided to test NFS vs. iSCSI and see how they actually compare. Our workload is a mixture of business VMs: AD, file server, Exchange, Vendor App A, etc. There is no real ZFS tuning, only a few NIC options (as noted).
Interestingly, iSCSI performs best without Jumbo frames, and NFS seems to perform best with them enabled.
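For the jumbo-frame runs I'm just bumping the MTU to 9000 end to end; roughly this on the FreeNAS side (ix0 is a placeholder interface name), with matching MTU on the switch ports and the ESXi vSwitch/vmkernel ports:

# enable jumbo frames on the storage-facing 10Gb interface
ifconfig ix0 mtu 9000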
From my untuned results, it looks as though iSCSI is still the way to go for us, from a latency and random performance perspective. Since latency is so important to us, I think that is where we'll end up.
Some questions this brought up: is there any way to reduce latency in general? Things REALLY spike under full load, but even the more random workloads show worryingly high latencies of 20+ ms.
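While chasing the latency I've mostly just been watching the disks and the log device directly on the FreeNAS box, something like this (tank is a placeholder pool name):

# per-vdev activity, to confirm sync writes are actually landing on the log device
zpool iostat -v tank 1
# per-disk busy/latency view
gstat -p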
Does anyone have any suggestions for tuning either iSCSI or NFS? I'd also be happy to use different tests; I'm just using a few from VMware's I/O Analyzer fling, without any real design to it.
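If it helps compare against other setups, I could also run something simpler like fio from inside a test VM instead of the I/O Analyzer profiles; roughly this for a 4k random-read pass (the target disk is a placeholder, obviously pick one you can overwrite):

# 4k random reads, 32 outstanding IOs, 4 workers, 2 minutes
fio --name=randread --filename=/dev/sdb --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --runtime=120 --time_based --group_reporting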
We did initially test NFS with no tuning options, as well as iSCSI, but they quickly fell behind the other results that used "rxcsum txcsum tso4 lro", so they were abandoned.
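For clarity, those options are applied per interface like this (ix0 again standing in for the 10Gb interface name); I put them in the interface's Options field in the FreeNAS GUI so they persist across reboots:

# enable checksum offload, TSO and LRO on the storage NIC
ifconfig ix0 rxcsum txcsum tso4 lro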
https://docs.google.com/spreadsheets/d/1J15gXMUIIYfI0xaOP7coELHh9-2CcFZv3VMUPmSlQc8/edit?usp=sharing
Thanks!
EDIT: discovered my initial latency figures were SUMs, not AVGs. I've corrected the data. There are still some worrying outliers, but no longer the 700ms times that scared me.