I can't warrant or stand over my opinion as I haven't seen your exact scenario. Use this at your own risk!
When the discs are moved to a new controller, the raw block devices will increase in effective size.
Assuming a default FreeNAS disc layout (GPT label, swap partition, ZFS partition), the GPT consistency check will probably fail: the secondary label that used to sit at [tail-16KB] stays at its old offset, which on the larger device is now [tail-16KB-extra_uncapped_space], i.e. no longer at the end of the disc. On FreeBSD, I think this means the partitions won't be enumerated into /dev, so the ZFS devices won't appear for remounting or import.
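To see what state the labels are in (device names like da0 are examples only; substitute your own):

    gpart show da0      # a stranded secondary label shows up as "GPT ... [CORRUPT]"
    ls /dev/da0*        # if the table were rejected outright, da0p1/da0p2 would be missing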
No problem. Use "gpart recover /dev/<device_name>" to rebuild the secondary label at [tail-16KB] from the primary at [head+512]. You might need to reboot and reimport, but once done to all discs, the zpool should simply work again.
It's a secondary question as to how you want to manage resizing the ZFS-containing partitions to fill the entire discs. Worst case, assuming redundant devices/vdevs, you can detach, repartition/resize and re-attach each device one at a time; the zpool will resize once all are done. Best case, you might be able to simply export the pool, resize the ZFS-containing partitions and re-import with autoexpand enabled, as sketched below.
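A sketch of that best-case path, assuming the pool is called tank and the ZFS partition is index 2 on each disc (the usual FreeNAS layout); treat it as a guide rather than a recipe:

    zpool export tank
    gpart resize -i 2 da0        # grow partition 2 into the newly visible free space
    gpart resize -i 2 da1        # repeat for every member disc
    zpool import tank
    zpool set autoexpand=on tank
    zpool online -e tank da0p2   # force per-device expansion if it doesn't happen automatically
    zpool online -e tank da1p2
    zpool list                   # SIZE should now reflect the full discs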
I think I've seen this latter scenario work OK while messing with restricted-size whole-disc vdevs on gnop devices in the past: export the pool, destroy the gnop devices, re-import via the raw block devices, and ZFS will resize the zpool if autoexpand is enabled.
Practice the whole thing beforehand in a VM if your data is important. Resizing the virtual discs from (e.g.) 2TB to 4TB should recreate exactly the same scenario in a sandbox.
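If you don't have a hypervisor handy, file-backed md devices on any FreeBSD box reproduce the same growth scenario (a different sandbox technique from a VM; sizes and paths here are examples only):

    truncate -s 2g /tmp/disc0.img
    mdconfig -a -t vnode -f /tmp/disc0.img      # attaches as e.g. md0
    # ...partition md0 the FreeNAS way, build a throwaway pool on it, export it...
    truncate -s 4g /tmp/disc0.img               # grow the backing file
    mdconfig -d -u 0
    mdconfig -a -t vnode -f /tmp/disc0.img      # re-attach; md0 now has the larger size
    # now practice gpart recover / gpart resize / zpool import against md0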