Right now, the effective max input to the server would be 40Gbps, since we have two 10Gb links from each of our 3 servers. That said, the third server is a dev box that doesn't really do much. The SAN has 4 10Gb links, so even if all 3 were pushing their max, it would only go to 40.

10Gbps? With nine HDD vdevs? Very, very, very sketchy. It totally depends on what you're actually expecting out of the thing. I would think you'd probably be fine at 1Gbps, except that I know I could break it if given a free hand to place a torturous workload on it. If you aren't popping surprise stressy write workloads on it that break the ZFS write throttle, 10Gbps could be fine, but I guarantee it to be breakable if you get even moderately aggressive.
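For rough perspective on why nine HDD vdevs look sketchy against that much ingress, here's a back-of-the-envelope sketch. The ~200 MB/s per-vdev sequential figure is purely an illustrative assumption, not a measurement of this pool:

```python
# Back-of-the-envelope: compare aggregate link bandwidth against a rough
# pool write ceiling. Per-vdev figures are illustrative assumptions.

def link_gbps(ports: int, gbps_per_port: int) -> int:
    """Aggregate bandwidth a set of links can carry, in Gbps."""
    return ports * gbps_per_port

def pool_gbps(vdevs: int, mb_per_s_per_vdev: float = 200.0) -> float:
    """Very rough sequential-write ceiling for the pool, in Gbps.
    200 MB/s per HDD vdev is a hand-wavy placeholder, not a measurement."""
    return vdevs * mb_per_s_per_vdev * 8 / 1000  # MB/s -> Gbps

host_side = link_gbps(ports=3 * 2, gbps_per_port=10)  # 3 servers x 2 links
san_side = link_gbps(ports=4, gbps_per_port=10)       # SAN has 4 x 10Gb
ingress = min(host_side, san_side)  # SAN ports cap effective ingress

print(f"possible ingress: {ingress} Gbps")
print(f"rough pool ceiling: {pool_gbps(9):.1f} Gbps")
```

Even with a generous sequential-only assumption, the pool ceiling sits well under the 40Gbps the links can deliver, which is the mismatch being called sketchy above.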
We could easily drop it down to a 20Gbps max by removing 2 of the 10Gb links. We could further slow to a max of 10Gbps by turning off round robin on the VM hosts, so that only one 10Gb NIC is used at a time, reserving the secondary as failover.
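If it helps, switching a datastore device from round robin to a fixed path on an ESXi host looks roughly like this; the `naa.xxxx` device ID is a placeholder, and this is a sketch to verify against your vSphere version's docs, not a drop-in procedure:

```shell
# List devices and their current path selection policy
# (VMW_PSP_RR = round robin across paths).
esxcli storage nmp device list

# Switch one device to a fixed policy, so a single path carries traffic
# and the other 10Gb path sits idle as failover. "naa.xxxx" is a placeholder.
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_FIXED
```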
If we slow to 10Gbps, I think we can squeak by on 9 vdevs, because our workloads aren't that intensive under normal circumstances. In fact, the setup we have now works as long as I don't start a huge file copy or a live vMotion.
I know the write throttle for ZFS can be tuned; if we were to make adjustments to that, what would you suggest? I think the default is 60% before it starts throttling. Would lowering it to 40 have any negative consequences? Is there a way in TrueNAS to traffic shape to slow the packet traffic down? We have 10Gb Dell switches that all of these connect to. I'm not sure of the specs, but they may have some shaping capabilities as well.
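For reference, the knob I believe that 60% refers to is the OpenZFS module parameter `zfs_delay_min_dirty_percent` (the percentage of `zfs_dirty_data_max` at which write delays begin). A sketch of checking and lowering it on a Linux-based TrueNAS box, assuming you'd make the change persistent through the TrueNAS tunables UI rather than by hand:

```shell
# Trigger point for the write throttle, as a percent of dirty data max.
cat /sys/module/zfs/parameters/zfs_delay_min_dirty_percent   # default: 60

# The dirty-data cap that percentage is measured against, in bytes.
cat /sys/module/zfs/parameters/zfs_dirty_data_max

# Lower the trigger so delays kick in earlier. Sketch only: on TrueNAS
# you'd normally set this as a tunable in the UI so it survives reboots.
echo 40 > /sys/module/zfs/parameters/zfs_delay_min_dirty_percent
```

The trade-off to expect: throttling earlier smooths out latency spikes from big bursty writes but gives up some peak throughput, which may be exactly what you want here.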
What think ye?