First, please use this article not as discouragement but as a roadmap of issues to work on. Then post here with your results and let us know of your success. I am smart but started out with little experience and could only push things so far. I am very curious about your results and what you find works. Now here are my experiences.
My Background and equipment.
I used to be a professional film/TV editor and have used various professional hard-drive servers (ISIS and the like), so I know how editing workstations should operate. At home I currently run a FreeNAS system with 50 drives: 5 zpools of 10 drives each in RaidZ2, with 10 slots still open. That's 10x4TB (2 zpools), 10x8TB (2 zpools), and 10x12TB (1 zpool), all full of QuickTime videos in Apple ProRes 422 HQ, HD and 4K. I have a 10 gig switch with 2x 10 gig ports and 6x 1 gig ports, and 4 client systems hooked up to the server, though I usually have 2 at the very most going at once. The clients are 3x macOS and 1x Windows 7. The Macs still use AFP shares (lots of color tags in there, so I just leave it as-is for now) and the Windows machine connects via SMB.

To test real-world system speeds, I use AJA Lite and throw 64GB files at the server (more later about why dd in the shell doesn't give you real-world numbers); I can get about 320MB/s writes and 300MB/s reads. Overall the system works great, but there is a big HOWEVER: latency and consistency. Meaning video files don't always immediately play back when you hit play (or stop when you hit stop). FreeNAS works, but it's nowhere near the perfection an ISIS can dish out to 10 clients.

For my hardware: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz (8 cores), 32GB RAM, a Supermicro 9scm motherboard (I think), a Myricom 10 gig SFP+ card, and an SSD boot drive. No ZIL or SLOG. The 10 gig switch is a Netgear 8-port. Clients: 2x Mac Pros (an Intel 520 10 gig card in one, 1 gig on the other) and a 2018 Mac Mini with 10 gig built in.
Experience.
1) I run RaidZ2 because safety is much more important to me than pure speed; this system is a video archive. Also note, 10 drives per vdev gives you maximum speed in RaidZ2. With that said, I am not entirely convinced raw drive speed is what causes the latency and consistency issues.
2) I am pretty convinced the latency/consistency issues have to do with the client OS, the mounts and share points on the client OS, and client network tuning (or lack thereof). Remember, ISIS and similar drive systems have finely tuned load balancing, direct customized iSCSI mounts, and finely tuned network cards and drivers. I also theorize these systems are tuned and customized to deal with "streaming" big video files (gigabytes at a time) in a way FreeNAS is not. You can turn on jumbo frames all you like, but editing off a FreeNAS... well, it works, but not without hiccups - until you solve it. Which brings me to my next point.
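For what it's worth, the jumbo-frame change itself is the easy part; the hard part is doing it consistently on every hop (server, switch, client). A sketch of the commands involved - the interface names here (mxge0 for the Myricom card, en0 on a Mac) are assumptions, substitute your own, and your switch ports must also be set to allow 9000-byte frames:

```shell
# On the FreeNAS box (FreeBSD) -- "mxge0" is an assumed interface name:
ifconfig mxge0 mtu 9000

# On a macOS client -- "en0" is an assumed device name:
sudo networksetup -setMTU en0 9000

# Verify end to end with a don't-fragment ping.
# Payload 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header).
# If any hop is still at MTU 1500, this ping fails.
ping -D -s 8972 192.168.1.10   # server IP is an assumption
```

If the big ping gets through on every client, jumbo frames are actually in effect end to end; if not, something in the path is silently fragmenting or dropping, which is exactly the kind of thing that shows up as playback hiccups.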
3) I never could get clarity on whether a ZIL and SLOG make a big difference when working in video, meaning very large sequential files. Just from reading, I came to the conclusion that they do not, and that they are apparently much better applied to lots of small files. However, I encourage you to test; you might find out differently.
4) To test drive speed, I have run the dd test in the shell, and I do get much better raw read and write speeds (scrubs report 1-2Gb/s), but in real-world use - the AJA disk test - I did not see that same speed. Also, make sure you use file sizes larger than your ZFS box's RAM, otherwise your results will be much faster than real-world use. It has to do with the way FreeNAS first writes to memory and then dumps to disk: 32 gigs of RAM is great, until you try to write a 48 gig file. If you use the AJA Lite disk test, click the little graph button at the bottom and you can see the inconsistency of the speed: you'll get great speeds, then it drops to near 0 for a moment, then back up again. *** Also note, these issues appear across all my computers, so it's not down to the different network cards or OSes. It could be that macOS or the AFP shares are a little worse, as the Windows machine seems to have less latency, but it is still there. *** Note: the Windows machine, back when it had a 10 gig card in it, also saw similar speeds of 350MB/s. Maybe slightly faster than the macOS machines, but not blazingly different.
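For reference, the dd test I mean looks like the sketch below. The path and size here are placeholders: on the real box, write to a dataset on the pool (e.g. something under /mnt/yourpool/) and pick a count so the file is larger than your RAM - with 32GB of RAM, bs=1M count=49152 gives a 48 GiB file the ARC can't swallow whole. Also be aware that /dev/zero data compresses to nothing if compression (LZ4) is on for the dataset, which inflates the numbers even further.

```shell
# Rough raw-throughput test. /tmp path and 64 MiB size are placeholders just
# to show the mechanics -- in real use, write to a pool dataset and make the
# file bigger than RAM, or you're mostly measuring the ARC.
FILE=/tmp/ddtest.bin

dd if=/dev/zero of="$FILE" bs=1M count=64   # write test (dd prints MB/s on stderr)
dd if="$FILE" of=/dev/null bs=1M            # read test
rm -f "$FILE"                               # clean up the test file
```

Even sized correctly, this is a single sequential stream with no protocol overhead, which is why it will always look better than AJA running over AFP/SMB from a client.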
Advice
1) Stripe vdevs. Make some 6-drive RaidZ2 vdevs and then expand the pool with another (making it a stripe across vdevs), or even a third, and the zpool should scream. You are losing capacity this way (or needing more drives), but you gain a ton of speed while keeping it safe. Play around with other configurations if you can't throw that many drives at it.
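As a sketch of the layout I mean (the pool name "video" and the da0..da17 disk names are made up - check your own device names first), ZFS stripes across vdevs automatically, so each extra RaidZ2 vdev adds throughput:

```shell
# Hypothetical pool built from two 6-drive RAIDZ2 vdevs.
# ZFS stripes writes across the vdevs, so two vdevs ~ double the speed
# of one, while each vdev still survives two drive failures.
zpool create video \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# Later, grow the stripe (and the speed) with a third vdev:
zpool add video raidz2 da12 da13 da14 da15 da16 da17
```

Note the capacity trade: two 6-drive RaidZ2 vdevs give you 8 data disks out of 12, versus 8 data disks out of 10 for a single 10-wide RaidZ2 - same usable space, two more drives, but much better throughput.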
2) Experiment with ZIL and SLOG. I just never had the time or skill set.
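If you do experiment, adding a SLOG is cheap to try and reversible - it can be removed without harming the pool. The pool name "tank" and the nvd0/nvd1 device names below are assumptions; use a power-loss-protected SSD, mirrored if you can:

```shell
# Add a mirrored SLOG (hypothetical NVMe devices) to an existing pool:
zpool add tank log mirror nvd0 nvd1

# If it doesn't help, remove it again (get the exact log vdev name,
# e.g. "mirror-1", from zpool status first):
zpool remove tank mirror-1
```

Keep in mind a SLOG only helps synchronous writes, which is part of why opinions differ on whether it matters for big sequential video files.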
3) As much RAM as you can afford. You can never have too much RAM in a ZFS rig. At least for writes.
4) For Mac clients, turn off the auto-generated preview icons (Finder View Options > "Show icon preview"). That makes a world of difference in how fast a video/picture folder opens in the Finder.
Last:
1) I hope you have the time and skill set to play and report back here - or that someone in the community with a ton more video experience comes in and not only shows me what I am doing wrong, but gives me real-world hacks to implement.
2) Please do not take my tone as dire. The FreeNAS system works great. I have had it for 5 years now and haven't lost a single bit of data. It also works for serving video just well enough. I see from your original post that you are trying to make it a video server for multiple workstations in a real live environment. If you are talking about a few Avids (or Adobe Premiere) hitting it all at once, and maybe a few AE stations, well, it might struggle. If I was doing full-time work as an editor at a professional studio, I would not work on a system that performs like my current FreeNAS box. A few days, sure, but not full time. But perhaps I am the limiting factor. Use my experience to make it work better.