Elliott
Dabbler
- Joined: Sep 13, 2019
- Messages: 40
I'm testing a FreeNAS machine with 40GbE, and right out of the box I get great performance with iperf: with multiple threads it holds a steady 39.5Gbps.
Unfortunately, a single SMB client maxes out one CPU core at around 10Gbps. The CPU is a Xeon Gold 5127 @ 3GHz. That's probably the best we can do for a single client, but it should scale with multiple clients.
NFSv4 throughput is quite bouncy, averaging 12Gbps and jumping up to 20Gbps at times from one client. I'm hoping to get more out of this with some tuning. How can I increase the NFS rsize and wsize? When I try setting them larger on the client, they revert to 128K.
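For reference, here's how I've been requesting larger sizes from a Linux client (the export path and mountpoint below are just placeholders). My understanding is that rsize/wsize are negotiated, so the server silently caps them at its own maximum, which would explain always seeing 128K:

```shell
# Ask for 1M rsize/wsize; the server is free to negotiate these down.
# server:/mnt/tank and /mnt/nfs are placeholder paths.
mount -t nfs -o vers=4.1,rsize=1048576,wsize=1048576 server:/mnt/tank /mnt/nfs

# Check what was actually negotiated:
grep nfs /proc/mounts
```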
I set NFS to run 32 servers to match the vCPU count; should I try higher? For some reason htop only shows 2 nfsd processes, yet all 32 cores are bouncing on the graphs; not sure why. What is the FreeBSD equivalent of /proc/net/rpc/nfsd for performance statistics? Ideally it would be really cool to put a graph of this in the web GUI.
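In case it helps anyone searching later, the closest FreeBSD equivalent I've found is nfsstat(1); these flags are from memory, so double-check against the man page:

```shell
nfsstat -s        # server-side RPC/operation counters
nfsstat -e -s     # extended stats, including NFSv4 operations
nfsstat -s -w 1   # redisplay counters every second, iostat-style
```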
With the NFS server set to async I can get 15Gb read and 19Gb write, but I don't want to run that way in production. I benchmarked the pool locally at 2500MBps (20Gb) read and write to the spindles, not "cheating" with the ARC, so why does NFS sync slow it down? AIUI, setting NFS to async with ZFS sync=standard means writes bypass the ZIL. I have a SLOG on an NVMe mirror; maybe I should remove it if it's actually slower than the pool...
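To isolate the SLOG question, I'm thinking of something like this on a throwaway dataset (the pool name "tank" is a placeholder). Setting sync=always turns a plain dd into a sync-write benchmark, and comparing against sync=disabled should show how much the ZIL path costs:

```shell
zfs create tank/synctest
zfs set sync=always tank/synctest     # force every write through the ZIL/SLOG
dd if=/dev/zero of=/mnt/tank/synctest/bench bs=1M count=4096
zpool iostat -v tank 1                # in another shell: is the SLOG mirror the bottleneck?
zfs set sync=disabled tank/synctest   # rerun dd with the ZIL bypassed entirely
zfs destroy tank/synctest
```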