I just wanted to throw a data point into the mix. I'm not suggesting anyone take any particular path, but perhaps you'll find this information useful.
I recently grew a vdev by swapping six disks out for 10TB Seagate IronWolf NAS disks, and as soon as the array had grown, I started experiencing the same problems as everyone else here. Drives would throw errors seemingly at random (though usually during or after maintenance tasks) and get kicked out of the zpool.
After about a week and a half of this, including a few incidents where two disks were dropped from the zpool at once and put my data at risk, I started weighing my options based on the information in these threads.
I backed up my FreeNAS (9.10.2) config, installed Debian on my server, and tested ZFS on Linux. I ran scrub after scrub after scrub to stress the array. After three days of continuous scrubs, not one error or kicked disk. Under FreeNAS, that much time and load would probably have produced four to six drive incidents. After about a week on Debian now, still no errors.
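For anyone who wants to run the same kind of stress test, here's a rough sketch of the back-to-back scrub loop I mean. The pool name (`tank`) and the number of runs are placeholders; adjust for your setup. It just uses plain `zpool scrub` and `zpool status`, nothing exotic:

```shell
#!/bin/sh
# Stress-test a zpool by running scrubs back to back.
# POOL is an assumed name -- pass your own pool as $1.
POOL="${1:-tank}"
RUNS=3

if ! command -v zpool >/dev/null 2>&1; then
    echo "zpool not found; nothing to do"
else
    i=1
    while [ "$i" -le "$RUNS" ]; do
        echo "Starting scrub $i of $RUNS on $POOL"
        zpool scrub "$POOL"
        # Poll until the current scrub finishes before starting the next one.
        while zpool status "$POOL" | grep -q "scrub in progress"; do
            sleep 60
        done
        zpool status -v "$POOL"   # show any errors accumulated so far
        i=$((i + 1))
    done
fi
```

Watch `zpool status -v` between runs; on my box the disks that were getting kicked under FreeNAS showed nothing at all under Linux.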
What this means for me is that I'm unfortunately abandoning FreeNAS. It's not really FreeNAS' fault, but this was the only real option I had that didn't require spending another couple of grand on disks and then, in the best case, getting hit with a restocking fee on the IronWolf disks. So over to Linux I go. I would have preferred to stick with FreeNAS, but given that I came close to losing my data multiple times, I couldn't wait it out and hope that Seagate came up with an answer.
(Also cross-posting this to the other thread for people who aren't reading both.)