First, let me say that I was able to correct most of my problems just by searching this forum; it has been a great resource, and I won't argue with anyone telling me I have done something stupid. I am aware the installed hardware is not ideal, and if that is the cause of my performance hit, I accept the consequences of my choices. This server is used as a backup target, so performance does not need to be the best, or even that good.
The "problem" : I am hitting ~6.4Gb/s over NFS from ESXi; ideally I would want to max out my 10Gb/s uplink. This could just be a case of me not properly understanding the limitations of NFS and of the hardware I am using. I can live with it, but I would like to hear any recommendations for improvement.
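(For anyone else chasing a similar gap: before tuning ZFS or NFS it is worth confirming the raw network path can actually do 10Gb/s. A quick sketch with iperf3, which ships with FreeNAS 11.2; the hostname is a placeholder:)

```shell
# On the FreeNAS box: start an iperf3 server.
iperf3 -s

# From a client on the same 10Gb segment (replace 'freenas.local'
# with your server's address). -P 4 runs four parallel TCP streams,
# which often saturates a 10Gb link where a single stream cannot;
# -t 30 runs the test for 30 seconds.
iperf3 -c freenas.local -P 4 -t 30
```

If iperf3 tops out near the same ~6.4Gb/s, the bottleneck is in the network path (NIC, switch, TCP tuning) rather than the pool; if it reaches line rate, the limit is in storage or NFS.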
I started with 3 x RAIDZ2 vdevs but am now running 7 x 3-way mirrors. Oddly, I did not see any improvement from the change, so I assume another factor is limiting my system. The drives are a mix of 7200RPM and 5400RPM models, so I expect performance to match the 5400RPM disks; I honestly do not have the experience to say whether that is the limiting factor. The mirror layout was chosen with the expectation of replacing the 1TB disks with 2TB models as they begin to fail, since eventually we will probably want the extra capacity anyway (these are 2.5" disks).
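(A rough way to tell whether the pool itself, rather than NFS or the network, caps at that speed is a local sequential write test on the server. A sketch, with `tank/test` as a placeholder dataset name; compression is disabled so the stream of zeros isn't compressed away and inflating the numbers:)

```shell
# Create a throwaway dataset with compression off; FreeNAS mounts
# pools under /mnt.
zfs create -o compression=off tank/test

# Write ~16GiB sequentially and note the throughput dd reports.
dd if=/dev/zero of=/mnt/tank/test/bigfile bs=1M count=16384

# Clean up the test dataset.
zfs destroy -r tank/test
```

If local writes land well above ~800MB/s (roughly 6.4Gb/s), the mirrors aren't the bottleneck and the investigation moves to the NFS/network side.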
OS Version:
FreeNAS-11.2-U6
(Build Date: Sep 17, 2019 0:16)
Processor:
Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (20 cores)
Memory:
384 GiB
Controller : IT Mode LSI SAS 9207-8i SATA/SAS 6Gb/s PCI-E 3.0 Host Bus Adapter LSI00301
NIC : The 10Gb card sold on Amazon as the FreeNAS Mini-XL upgrade; it shows up as a Chelsio.
Disks : Mix of ST2000LM015-2E8174 and ST91000640SS
- This server provides NFS storage to an ESXi cluster.
- I am currently testing performance with sync disabled, because the SSDs that came with the server, though they have decent latency for their age, limit my throughput to ~160MB/s. I plan to replace them with Optane PCIe cards in the near future.
- The switch is a UniFi 16-port 10Gb.
- The load is ~150 Windows 10 VMs, each with 11GB of RAM. User files are stored on another server, so only the OS and apps live on these VMs.
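(For reference, a sketch of the sync setting described above; `tank/vmstore` is a placeholder dataset name, and `nvd0` is a placeholder device name for the planned Optane card. sync=disabled acknowledges writes before they reach stable storage, so a power loss can corrupt VMs; it is a benchmarking setting, not a production one:)

```shell
# Disable sync writes on the VM dataset for testing only.
zfs set sync=disabled tank/vmstore

# Once a fast SLOG device is installed, add it to the pool and
# return sync to the safe default:
# zpool add tank log nvd0
zfs set sync=standard tank/vmstore
```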