That sentence is probably misleading to a newcomer.
Having a separate SLOG device is better than committing sync writes to the in-pool ZIL, which causes write amplification and shortens device life.
But you could also just turn off sync writes and get the same benefit, so this isn't really a "need for SLOG". The justification for a SLOG is independent.
They won't work as a "write cache." A SLOG doesn't work that way; it is no kind of cache.
https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
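To make the distinction above concrete, here is a minimal sketch of the two options being contrasted; "pool/zvol" and "nvd0" are placeholder names, not taken from this thread:

```shell
# Option 1: disable sync on the dataset. Same speed benefit as a fast
# SLOG, but sync writes in flight are lost on a crash or power failure.
zfs set sync=disabled pool/zvol
zfs get sync pool/zvol        # verify the current setting

# Option 2: add a dedicated log vdev (SLOG). Keeps sync-write
# semantics AND speeds them up; it is not a cache of any kind --
# it is only read after an unclean shutdown, during pool import.
zpool add pool log nvd0
```

The point of the reply: if you are willing to run sync=disabled, you never needed a SLOG; a SLOG is justified only when you need sync semantics preserved.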
If "z volumes" means RAIDZ, that's also not recommended for block storage.
https://www.ixsystems.com/community...d-why-we-use-mirrors-for-block-storage.44068/
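Per the linked post, zvol/iSCSI block storage does better on striped mirrors than on RAIDZ (more IOPS per disk, faster resilver, simpler expansion). A minimal sketch, with da0..da3 as placeholder device names:

```shell
# Striped mirrors for block storage: each mirror vdev contributes its
# own IOPS, whereas a RAIDZ vdev delivers roughly one disk's worth.
zpool create tank mirror da0 da1 mirror da2 da3
zpool status tank
```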
We have ordered two NVMe devices, 1.6 TB in size each:
1x Intel P3700
1x Hitachi IMP 16113
Please find the diskinfo output for both devices below.
Intel
diskinfo -wS /dev/nvd0
/dev/nvd0
512 # sectorsize
1600321314816 # mediasize in bytes (1.5T)
3125627568 # mediasize in sectors
131072 # stripesize
0 # stripeoffset
INTEL SSDPEDMD016T4K # Disk descr.
CVFT543300881P6DGN # Disk ident.
Yes # TRIM/UNMAP support
0 # Rotation rate in RPM
Synchronous random writes:
0.5 kbytes: 23.0 usec/IO = 21.2 Mbytes/s
1 kbytes: 22.5 usec/IO = 43.5 Mbytes/s
2 kbytes: 22.4 usec/IO = 87.2 Mbytes/s
4 kbytes: 21.6 usec/IO = 180.9 Mbytes/s
8 kbytes: 24.9 usec/IO = 314.1 Mbytes/s
16 kbytes: 31.3 usec/IO = 499.4 Mbytes/s
32 kbytes: 43.1 usec/IO = 724.5 Mbytes/s
64 kbytes: 68.4 usec/IO = 914.1 Mbytes/s
128 kbytes: 142.3 usec/IO = 878.7 Mbytes/s
256 kbytes: 221.8 usec/IO = 1127.1 Mbytes/s
512 kbytes: 361.1 usec/IO = 1384.7 Mbytes/s
1024 kbytes: 656.9 usec/IO = 1522.4 Mbytes/s
2048 kbytes: 1253.6 usec/IO = 1595.3 Mbytes/s
4096 kbytes: 2457.9 usec/IO = 1627.4 Mbytes/s
8192 kbytes: 4850.0 usec/IO = 1649.5 Mbytes/s
----------------------------
diskinfo -wS /dev/nvd1
/dev/nvd1
512 # sectorsize
1600321314816 # mediasize in bytes (1.5T)
3125627568 # mediasize in sectors
0 # stripesize
0 # stripeoffset
UCSC-F-H16003 # Disk descr.
SDM00000DE7A # Disk ident.
Yes # TRIM/UNMAP support
0 # Rotation rate in RPM
Synchronous random writes:
0.5 kbytes: 25.8 usec/IO = 18.9 Mbytes/s
1 kbytes: 25.9 usec/IO = 37.7 Mbytes/s
2 kbytes: 26.5 usec/IO = 73.8 Mbytes/s
4 kbytes: 26.5 usec/IO = 147.4 Mbytes/s
8 kbytes: 30.6 usec/IO = 255.5 Mbytes/s
16 kbytes: 38.5 usec/IO = 406.3 Mbytes/s
32 kbytes: 51.9 usec/IO = 602.6 Mbytes/s
64 kbytes: 104.4 usec/IO = 598.5 Mbytes/s
128 kbytes: 187.2 usec/IO = 667.6 Mbytes/s
256 kbytes: 222.4 usec/IO = 1124.3 Mbytes/s
512 kbytes: 460.6 usec/IO = 1085.5 Mbytes/s
1024 kbytes: 488.2 usec/IO = 2048.5 Mbytes/s
2048 kbytes: 952.8 usec/IO = 2099.0 Mbytes/s
4096 kbytes: 1878.6 usec/IO = 2129.3 Mbytes/s
8192 kbytes: 3751.3 usec/IO = 2132.6 Mbytes/s
I have created 2 zvols, 5 TB in size each: one zvol with sync=disabled and another with sync=always. The iSCSI drive backed by the sync=always zvol performs very poorly compared to the sync=disabled one. Please find the attached report.
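For reference, a sketch of how two such test zvols can be created; the zvol names are placeholders, and "dell730Mini" is the pool shown in the zpool status output below:

```shell
# Sparse 5 TB zvols on the same pool, differing only in the sync property.
zfs create -s -V 5T dell730Mini/zvol-syncoff
zfs set sync=disabled dell730Mini/zvol-syncoff

zfs create -s -V 5T dell730Mini/zvol-syncalways
zfs set sync=always dell730Mini/zvol-syncalways
```

Note that the zpool status below shows a cache (L2ARC) vdev but no log vdev, so with sync=always every write commits to the ZIL on the RAIDZ2 disks themselves; an L2ARC does nothing for sync writes, which would explain the large gap between the two zvols.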
root@freenas[~]# zpool status
pool: dell730Mini
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
dell730Mini ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
gptid/b5ecfc17-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/b6aa0a9a-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/b74eaae8-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/b7faa461-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/b8ca3219-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/b9861c04-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/ba6278ca-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/bb2d2353-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/bc1ac869-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
gptid/bd09b311-c193-11e9-ac52-1418772dfffb ONLINE 0 0 0
cache
gptid/4d6258d1-c1a6-11e9-ac52-1418772dfffb ONLINE 0 0 0
root@freenas[~]# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
-------------------------------------- ----- ----- ----- ----- ----- -----
dell730Mini 2.47G 17.4T 0 163 0 2.83M
raidz2 2.47G 17.4T 0 163 0 2.83M
gptid/b5ecfc17-c193-11e9-ac52-1418772dfffb - - 0 23 29 520K
gptid/b6aa0a9a-c193-11e9-ac52-1418772dfffb - - 0 23 29 521K
gptid/b74eaae8-c193-11e9-ac52-1418772dfffb - - 0 23 29 520K
gptid/b7faa461-c193-11e9-ac52-1418772dfffb - - 0 24 29 520K
gptid/b8ca3219-c193-11e9-ac52-1418772dfffb - - 0 23 29 520K
gptid/b9861c04-c193-11e9-ac52-1418772dfffb - - 0 24 29 521K
gptid/ba6278ca-c193-11e9-ac52-1418772dfffb - - 0 23 29 520K
gptid/bb2d2353-c193-11e9-ac52-1418772dfffb - - 0 24 29 520K
gptid/bc1ac869-c193-11e9-ac52-1418772dfffb - - 0 23 29 520K
gptid/bd09b311-c193-11e9-ac52-1418772dfffb - - 0 23 29 521K
cache - - - - - -
gptid/4d6258d1-c1a6-11e9-ac52-1418772dfffb 366M 1.46T 0 5 283 953K
-------------------------------------- ----- ----- ----- ----- ----- -----