Hi,
One thing that @Arwen mentioned early on in this thread still seems relevant.
Can you tell us exactly how this volume was built? In your original post, you said the volume already had data on it, and that you then added additional mirrors.
Arwen is correct: ZFS attempts to keep all vdevs "level", so if two are empty and two are half full, it will direct new writes to the two empty vdevs until they catch up, and only then spread writes across all of them. Any data written before the new vdevs were added will continue to live only on the vdevs that existed at the time, and performance on that data will be limited to what those vdevs can do.
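You can check this yourself. A quick look, assuming your pool is named tank (substitute your actual pool name):

    zpool list -v tank

The per-vdev ALLOC and CAP columns will show the imbalance directly: the original mirrors will be much fuller than the ones you added later.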
But it's actually worse than that. Since ZFS is a copy-on-write filesystem, modifying a pre-expansion file frees blocks only on the subset of vdevs that held it. At some point in the future, when ZFS reuses those freed blocks, writes will again be concentrated on that subset rather than striped across the full pool, and you will see lower performance.
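If you want to watch where new writes actually land, zpool iostat can break the traffic out per vdev; again assuming the pool is named tank:

    zpool iostat -v tank 5

During a large sequential write, you'll see the emptier vdevs absorbing most of the bandwidth, which is the leveling behavior described above.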
At $dayjob, we store a lot of media files. I realize you've been given a lot of advice here, and I don't dispute any of it, but if I were building what it appears you are trying to build, I would evacuate all my data, destroy the pools, and recreate them as RAIDZ2, adding disks as needed to reach the capacity I wanted. I would then use the two NVMe SSDs you have for a SLOG and an L2ARC cache: since you have two SSDs, there is no harm in mirroring the SLOG, but I would simply stripe the cache (rough commands below).
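A sketch of the SLOG/cache part only, assuming the pool is named tank, the two NVMe devices show up as nvd0 and nvd1, and each has been partitioned with a small p1 for the SLOG (a few GB is plenty) and the remainder as p2 for cache; your device names will certainly differ:

    # mirrored SLOG on the small partitions
    zpool add tank log mirror nvd0p1 nvd1p1
    # striped L2ARC on the large partitions
    zpool add tank cache nvd0p2 nvd1p2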
Edit:
Actually, your hardware list says you have 12 WD Gold drives. My recommendation would be to just pull your data off and build a single 12-disk RAIDZ2. In terms of raw disk performance, you should easily be able to reach your 800 MB/s number.
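As a minimal sketch of that rebuild, again assuming the pool name tank and that the twelve Golds enumerate as da0 through da11 (use your real device IDs, ideally gptid or serial-based labels rather than raw da names):

    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11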