Hey PB, a striped SLOG doesn't seem to make sense as it's a single-threaded operation. This means it will wait until the write is done to the first device before writing to the second.
While I'm not poo-pooing your idea, I'm curious if you have any before-and-after numbers that show a performance benefit from a striped SLOG? Also, the ZeusRAM specs are way above the Intel's, so why mix the two?
Hey, some great questions there. I agree the ZeusRAM should blow the Intel away, but it doesn't in actual usage. For the small block sizes ZFS writes, the IOPS of the Zeus aren't that great :(
Below are some less-than-real-world tests, but what I think is a good indication of which design will work better under real workloads. The numbers come from a VM running on the same ESXi box as FN, using a vmxnet3 interface and 9k jumbo frames. FN is 9.2.1.5. The VM runs CentOS 6.4 and simply has a 40 GB unformatted 2nd HD stored on an NFS store exported by FN. Compression, dedup, and atime are off on the dataset. Compression is desirable most of the time, but since I'm writing all zeros it completely screws up my benchmark. The command I'm using is "dd if=/dev/zero of=/dev/sdb bs=4k count=2000000", which writes about 8.2 GB in 4k chunks (the setup and benchmark commands are sketched right below). These numbers are for clean, newly created pools (I've parked my data elsewhere while I do some benchmarking and evaluations).
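For anyone who wants to repeat this, here's roughly what the prep and the benchmark look like from the shell. The dataset name tank/nfsbench is just a placeholder for whatever your NFS dataset is called, and /dev/sdb is the unformatted 2nd disk inside the guest:

# on the FN side: turn off the features that skew an all-zeros test
zfs set compression=off tank/nfsbench
zfs set dedup=off tank/nfsbench
zfs set atime=off tank/nfsbench

# inside the CentOS guest: write ~8.2 GB in 4k chunks straight to the raw 2nd disk
dd if=/dev/zero of=/dev/sdb bs=4k count=2000000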
RAIDZ2 config (4 arrays of 6 disks):
Intel S3500 only: 130 MB/s
STEC only: 133 MB/s
Both: 133 MB/s
STEC only, running FN 9.2.x on bare metal connected to another ESXi box over a SAN-only 10GbE network: 92.4 MB/s
Mirror config (12 arrays of 2 disks):
Intel S3500 only: 132 MB/s
STEC only: 149 MB/s
Both: 145 MB/s
Both, with STEC in JBOD unit: 136 MB/s
STEC only, in JBOD unit: 132 MB/s
sync=disabled (for fun only; RAM cache is fun): 1.1 GB/s
no SLOG (not fun at all): 7.4 MB/s
local (async): 451 MB/s
local (forced sync): 20 MB/s
local (async with bs=128k): 941 MB/s
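For reference, the SLOG layouts above boil down to commands along these lines. The pool name "tank" and the device names da10/da11 are placeholders, not my actual devices:

# single SLOG device
zpool add tank log da10
# "striped" SLOG: two separate top-level log vdevs
zpool add tank log da10 da11
# take a log device back out later
zpool remove tank da11
# the for-fun-only runs
zfs set sync=disabled tank    # fast but unsafe for NFS clients
zfs set sync=standard tank    # back to normal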
Big thanks to aufalien for pointing out that the ZIL is single threaded. I missed that in my original testing because the SAN was under load and I had the STEC in the JBOD unit at the time (so the STEC-only numbers were much lower than the ones above). I was getting better numbers with the stripe simply because the Intel had better numbers, and since I failed to test the Intel standalone I mistakenly concluded it was the stripe that improved things. Now that I've got the STEC on a different LSI card than the main array, with its own dedicated SAS channel, the Intel is actually dragging me down.
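If you want to see for yourself which log device is actually taking the writes while dd is running, per-vdev stats make it obvious (pool name "tank" assumed again):

# one-second samples, broken out per vdev, so the log devices show their own write throughput
zpool iostat -v tank 1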
Final note (ready for flames): I need to confess that the Intel isn't dedicated to the FN array. It's my boot drive for ESXi, and I created a 10 GB thick eager-zeroed vmdk on it for my FN VM. As you can see from the numbers above, considering the cost of an S3500 vs a ZeusRAM, the econo setup is pretty damn good. And no, I didn't lose the pool when I briefly converted to bare metal for testing; I just force imported the pool and dropped the dead drive from the ZIL stripe, because I wasn't smart enough to drop it before I converted to bare metal (rough commands sketched after the noob note below).
Noob note:
don't do this if you haven't had a clean shutdown of the pool; a missing SLOG that still holds un-replayed sync writes is a very different (and much worse) situation.
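For the curious, the recovery amounted to something like this. Pool name and the GUID are placeholders, and again this was only safe because the pool had been shut down cleanly:

# import the pool even though one log device (the vmdk on the ESXi boot SSD) is gone
zpool import -f tank
# the missing log device shows up by GUID here
zpool status tank
# drop the dead device out of the log stripe
zpool remove tank <guid-of-missing-log-device>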