I did some benchmarking in the meantime.
Here are my results in short:
Scenario:
2 identical freenas boxes, config as shown somewhere above.
1 connected via 2 x 1 GBit/s LACP link aggregation.
1 connected via 2 x 10 GBit/s LACP link aggregation.
1 test VM running Gentoo Linux under ESX 6.5, connected via 2 x 10 GBit/s.
I used sysbench with these commands:
Code:
sysbench --test=fileio --file-total-size=4G prepare
sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw --max-time=240 --max-requests=0 --file-block-size=4K --num-threads=4 --file-fsync-all run
(Values in MiB/s)
----------------------------------------------------------------------------
without ZIL
1 vdev 24 x 8 TB Z3 @1 GBit/s Network:
read 0.84 write 0.56
1 vdev 24 x 8 TB Z3 @10 GBit/s Network:
read 0.83 write 0.55
2 vdevs 12 x 8 TB Z2 @1 GBit/s Network:
read 0.91 write 0.61
2 vdevs 12 x 8 TB Z2 @10 GBit/s Network:
read 0.90 write 0.60
3 vdevs 8 x 8 TB Z2 @1 GBit/s Network:
read 0.95 write 0.63
3 vdevs 8 x 8 TB Z2 @10 GBit/s Network:
read 0.94 write 0.63
----------------------------------------------------------------------------
with ZIL
1 vdev 24 x 8 TB Z3 ZIL 10 GB @1 GBit/s Network:
read 35.82 write 23.88
1 vdev 24 x 8 TB Z3 ZIL 10 GB @10 GBit/s Network:
read 52.63 write 35.08
2 vdevs 12 x 8 TB Z2 ZIL 10 GB @1 GBit/s Network:
read 33.22 write 22.15
2 vdevs 12 x 8 TB Z2 ZIL 10 GB @10 GBit/s Network:
read 50.20 write 33.47
3 vdevs 8 x 8 TB Z2 ZIL 10 GB @1 GBit/s Network:
read 21.94 write 14.62
3 vdevs 8 x 8 TB Z2 ZIL 10 GB @10 GBit/s Network:
read 50.99 write 33.99
----------------------------------------------------------------------------
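Out of curiosity I also computed the speedup factors the ZIL brings. A quick Python sketch over the numbers from the tables above (the layout labels are just my shorthand):

```python
# Speedup of "with ZIL" over "without ZIL" per layout and link speed.
# Values are MiB/s, copied from the result tables above.
results = {
    # (layout, network): (read_no_zil, write_no_zil, read_zil, write_zil)
    ("1x24 Z3", "1G"):  (0.84, 0.56, 35.82, 23.88),
    ("1x24 Z3", "10G"): (0.83, 0.55, 52.63, 35.08),
    ("2x12 Z2", "1G"):  (0.91, 0.61, 33.22, 22.15),
    ("2x12 Z2", "10G"): (0.90, 0.60, 50.20, 33.47),
    ("3x8 Z2",  "1G"):  (0.95, 0.63, 21.94, 14.62),
    ("3x8 Z2",  "10G"): (0.94, 0.63, 50.99, 33.99),
}

for (layout, net), (r0, w0, r1, w1) in results.items():
    # Ratio of throughput with the ZIL device vs. without it.
    print(f"{layout} @{net}: read x{r1 / r0:.0f}, write x{w1 / w0:.0f}")
```

Every configuration gains well over 20x on sync writes from the ZIL device, which matches what I expected from --file-fsync-all.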
Since I didn't have the time and resources to build up a whole separate testing environment, I used our production switches and ESX servers.
But I think the results show the same trends as they would on completely separate hardware.
As expected, the ZIL brings a lot more write throughput, since I only tested sync writes.
I'm a bit surprised by the differences in read throughput, since everything should be served from memory.
Also, the extra performance that the lower latency of the 10G connection brings is great.
So now I have to think about it a bit, but I'm leaning towards a single vdev of 24 x 8 TB in Z3 with a 10 GB ZIL. That setup looks like it brings reasonable performance, enough redundancy, and also the most usable space and flexibility.
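For reference, the layout I'm leaning towards would look roughly like this on the command line. This is only a sketch to show the topology, not the exact command I'd run; the device names (da0..da23, nvd0) are placeholders, and on FreeNAS you'd normally build the pool through the GUI anyway:

```shell
# Sketch only: single 24-disk RAIDZ3 vdev with a dedicated log (ZIL/SLOG)
# device. Device names are placeholders for the real disks.
zpool create tank raidz3 \
    da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
    da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 \
    log nvd0
```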