@artlessknave let's keep it just a bit more gentle, please.
I realize that it is frustrating to impart knowledge to those who have preconceived notions, especially when such notions are not entirely unfounded. It's probably not helpful to go around calling people fanbois though.
I've had a number of clients over the years that I've pushed into virtualization, and I absolutely agree with the OP on the benefits. I have one client that used to keep extensive notes about the exact components inside each of their servers, their quirks, when they were bought (think: HDD warranty), etc., and I moved them to a mostly virtualized setup back around 2015. One day while discussing that, the talk turned to how much nicer it was to be able to look at the virtual hardware manifest, how there weren't the same sort of "quirks", and of course how trivially virtual hardware can be edited. You do trade one set of problems for a new set, though: when you have a 1Gbps network with a dozen servers and two dozen clients, you lose bandwidth when you consolidate onto a hypervisor that is only connecting at 1G, or backups saturate the hypervisor uplink, or stun times mess with VM's, etc.
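To put rough numbers on that bandwidth point (purely illustrative, not that client's actual setup): a dozen physical servers, each with its own 1Gbps NIC, can in principle push up to 12Gbps of aggregate traffic toward the clients. Consolidate them onto one hypervisor with a single 1Gbps uplink and all of those VM's now share that one 1Gbps, so a single busy backup job can starve everything else. That's part of why consolidation usually goes hand in hand with 10Gbps uplinks or link aggregation on the hypervisor.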
But on the other hand, virtualization is a tricky topic. It's pretty trivial to set up many kinds of basic VM's. I manage thousands of VM's across many dozens of hypervisors at around a dozen sites. Many of these sites have nothing *but* hypervisors and switches, because I do infrastructure, even routers, as virtual machines. My environments typically have a minimum of a dozen networks, with routers, firewalls, VPN servers, DHCP servers, DNS servers, NTP servers, syslog servers, Web servers, SQL servers, SLB's, MTA's, mail servers, netmon servers, CA's, and many other things. It's easy to become convinced that virtualization is always easy; it *is* often easy, and if you've deployed thousands of VM's without ever hitting a counterexample, I can understand the errant attitude.
That's why I try to explain this from a technical perspective.
Virtualization is an incredibly tricky thing. This is why it took VMware years to master it, and why younger hypervisors like bhyve have many more pain points. It's easy to forget that virtualization is a highly complex house of cards, with the x64 platform being a minefield of legacy concessions and vendor quirks, where even the major vendors have problems. Consider the Intel X710 and ESXi, which for YEARS caused intermittent PSOD's (I still haven't seen a sufficient explanation of why). The hypervisor then has to emulate an x64 platform for its guests. Getting all the little details exactly right so that random guests will work has cost VMware hundreds of man-years of work. If it were easy, we'd have many hypervisors to choose from.
But stacking FreeNAS on top of that adds an incredibly complicated guest, one that demands well-behaved access to the storage even in the face of disk errors, drive failures, or just plain ol' random glitches. In order for this to work correctly, you have to use PCI passthru, which means you also have to have correct support for it at the hardware, BIOS, *and* hypervisor levels, something that is far from guaranteed. The support in ESXi has gotten pretty good, but many platforms, especially older ones, still cannot do PCI passthru correctly.
To circle around to the OP, this simply hasn't been shown to work reliably in Hyper-V. Hyper-V didn't even support FreeBSD until around FreeBSD 10.3R (~2016?), which is around the same time they started offering PCI passthru (Discrete Device Assignment) too. So these are all relatively new things for Hyper-V. I'm fine with you being a guinea pig for Microsoft, but I do want you to be aware of that.
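If you do go down that road anyway: on the Hyper-V side, the passthru is done with the Discrete Device Assignment cmdlets on the host. Here's a rough sketch of the general shape of it, assuming a Server 2016 or newer host and an HBA you've already identified; the vendor ID filter and VM name are made-up placeholders, and Microsoft's DDA documentation is the authority here, not this:

```
# Locate the HBA on the host (the vendor ID filter is a hypothetical placeholder;
# grabbing the first match just for the sake of the sketch)
$dev = Get-PnpDevice -PresentOnly |
       Where-Object { $_.InstanceId -like "PCI\VEN_1000*" } |
       Select-Object -First 1

# Look up the PCI location path that DDA wants
$loc = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId `
        -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# DDA requires the VM to power off hard rather than save state
Set-VM -Name "freenas" -AutomaticStopAction TurnOff

# Detach the device from the host, then assign it to the guest
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $loc -Force
Add-VMAssignableDevice -LocationPath $loc -VMName "freenas"
```

Even if all of those commands succeed, whether the guest actually gets clean, reliable access to the disks is exactly the hardware/BIOS/hypervisor cooperation I was talking about above.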