Instead of buying another 10€ USB flash drive and expecting it to fail in a few years, I will try to follow the "newer" recommendation and switch to a cheap SSD boot drive (under 30€). But I have no SATA ports left in my system (X10SLH-F with six 4TB WD Reds in RAIDZ2), so I've bought the cheapest USB 3 to SATA adapter I could find on Amazon (made by Sabrent, apparently).
I've run something like this. In fact, when my USB thumb drives started failing in odd ways, this is how I migrated: I booted off the remaining functioning thumb drive, added the SSD in a USB enclosure, and created a mirror pair. I then added a second SSD on a motherboard SATA port and replaced the remaining thumb drive with it. Eventually I removed the USB adapter and moved that drive to another SATA port.
My first thought was to also use the SSD as a SLOG for the bhyve storage. I tested with fio as in [3], and as expected, setting sync=disabled instead of standard has performance benefits (to no one's surprise?), yet it is unclear to me what sync=standard does in a VM context (how does ZFS decide whether a write is sync or async?). After reading [2] and looking at the fio results, I expect that a SLOG device can make a difference, but I have not tried testing with a ramdisk. Another option would be to simply store the zvols backing the bhyve VMs on the SSD.
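For reference, a minimal fio job along the lines of the test in [3] might look like the sketch below (the job name, filename path, size, and runtime are placeholders; point filename at a file on the dataset under test). The sync=1 option makes fio open the file O_SYNC, so every write goes through the ZIL, which is the path a SLOG accelerates. Running it with the dataset at sync=standard and again at sync=disabled shows the gap a SLOG could close:

```ini
; hypothetical fio job file; adjust filename to the dataset under test
[slog-sync-test]
ioengine=posixaio
rw=randwrite
bs=4k
size=256m
runtime=30
time_based
; issue O_SYNC writes so each one must be committed to the ZIL
sync=1
filename=/mnt/tank/bhyve/fio-test.bin
```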
There are some significant differences between a device capable of booting TrueNAS and a device qualified for SLOG duty. Which isn't to say I'm advocating using inferior devices as boot devices; I'm just pointing out that, the way TrueNAS is built, you can recover from boot device loss. You really want to review the resources tab above and understand what you are getting yourself into before implementing an SSD SLOG. Making poor hardware choices here can result in data loss. Losing a boot device is more of an annoyance.
On the topic of how ZFS decides between sync and async writes... it's not really ZFS that chooses. The application performing the write initiates the process and may in fact demand it. When it opens a file handle it sets certain options and flags: is it creating a file, appending to a file, etc. A couple of these flags (O_SYNC & O_DSYNC) mandate that the write() operation be performed as if an fsync() call followed it before write() returns to the calling program, essentially flushing the change to stable storage before the calling program can continue to execute.

This is used by a variety of programs to ensure data consistency in transactions where there are separate failure domains and consistency is required. For example, the NFS file sharing protocol requires in its RFC specifications that writes be performed as if O_SYNC were set. This is because the calling system is remote, and the server may crash, network access may be disrupted, or any number of other failures might lead the calling system to assume the data was successfully saved and discard the transaction as complete. NFS is not the only example: SMTP, certain database transactions, some iSCSI operations, etc., will set the O_SYNC flag and benefit from a fast SLOG. In your bhyve situation, the hypervisor will pass this requirement through from the virtualized OS & application.
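To make the flag side of this concrete, here is a minimal sketch (plain Python, hypothetical file path) of the two ways an application can demand a durable write: opening with O_SYNC, or calling fsync() explicitly. Either way, the filesystem sees a synchronous write and, with sync=standard, ZFS commits it to the ZIL (and thus the SLOG, if one is present) before the call returns:

```python
import os
import tempfile

# Hypothetical file path, for illustration only.
path = os.path.join(tempfile.gettempdir(), "sync_demo.bin")

# Option 1: open with O_SYNC. Every write() blocks until the data
# has reached stable storage, as if fsync() ran after each write.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    os.write(fd, b"transaction record\n")  # returns only once durable
finally:
    os.close(fd)

# Option 2: write normally, then request durability explicitly.
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
try:
    os.write(fd, b"second record\n")  # may sit in the page cache
    os.fsync(fd)                      # now force it to stable storage
finally:
    os.close(fd)
```

Writes issued without either mechanism are the async case: ZFS batches them into the next transaction group, and a SLOG does nothing for them.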
What I mostly care about is avoiding the kind of failure I had with my current USB flash drive setup; everything else is just nice to have (and an SSD with all that space unused looks like a waste to me).
Thanks for bearing with me to this point in this convoluted question.
Unused space on a flash drive can have benefits for lifespan and wear levelling. But... don't allow yourself to form a blind spot to best practice on account of the lack of SATA ports on your system. USB is not a reliable storage bus. You most certainly do not want to attempt to implement a SLOG using any kind of USB solution, and I would argue against using your USB boot pool for bhyve storage as well.

The neat thing about ZFS is that it tracks pool devices by a kind of UUID. You can literally power down and randomly swap the SATA connections on your existing pool right now; when TrueNAS imports the pool at boot, it will simply stitch the pool back together and run as if nothing happened. Find a way to add some more SATA ports using a supported PCIe card. You can then simply unplug your 4TB RAIDZ2 devices and plug them into the additional ports. Adding ports for your pool drives will free up your motherboard SATA connectors for boot SSDs, etc.