It's complicated
There are a number of ways to accomplish this, keeping in mind that Minio is designed to handle resilience natively, which makes it redundant with the resilience ZFS already provides.
It's worth noting that Minio's immutability/object lock feature requires erasure coding, which can be done on a single server with multiple disks (it doesn't require clustered mode across multiple servers).
Scenario 1 : Use the standard Minio plugin on top of a ZFS filesystem. In this case there is no object lock at the Minio layer, so if the server sending data to Minio is compromised, the attacker can request deletion of the S3 objects. We can use ZFS snapshots to provide the immutability protection instead, but that raises a question of session integrity: a snapshot may be taken in the middle of a copy operation, resulting in an incomplete or mismatched catalog/data situation.
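A rough sketch of the snapshot approach (the pool/dataset names and schedule are hypothetical; a TrueNAS periodic snapshot task can automate the same thing):

```sh
# Recursive snapshot of the dataset backing Minio (names are examples)
zfs snapshot -r tank/minio@backup-$(date +%Y%m%d-%H%M)

# Place a hold so the snapshot cannot be destroyed until the hold is
# released -- this is what provides the immutability window
zfs hold backups tank/minio@backup-20220615-0300
```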
Scenario 1a : As you noted, according to the Minio documentation, the version released on 2022-06-02 should let you enable object locking on a single-"disk" configuration, which would be ideal with an existing ZFS server. I've been busy this month and haven't yet had a chance to test it. If you're running TrueNAS Scale, you can use Docker to deploy the latest version; if you're still on TrueNAS Core, you'll have to wait or compile it yourself. I've done that once to fix some problems and it's not horribly complicated, since it compiles down to a single binary you can copy into the jail, but we're getting well outside of standard configuration and operations processes if you're not already used to compiling Go binaries on BSD.
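If you do test it on TrueNAS Scale, a minimal Docker sketch might look like the following (credentials, paths and the bucket name are placeholders, and whether the lock is accepted on a single disk depends on running a recent enough release):

```sh
# Run the latest Minio against a single ZFS-backed directory
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=changeme \
  -e MINIO_ROOT_PASSWORD=changeme-too \
  -v /mnt/tank/minio:/data \
  quay.io/minio/minio server /data --console-address ":9001"

# Object lock has to be enabled at bucket creation time
mc alias set local http://localhost:9000 changeme changeme-too
mc mb --with-lock local/backups
```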
Scenario 2 : Customise the Minio plugin to use multiple mountpoints on a single ZFS filesystem; with at least 4 mapped directories, erasure coding and object lock can be enabled. Not a good idea: Minio sees the directories as autonomous endpoints and will parallelise IO to the "disks", which all sit on the same physical pool, resulting in massive contention (and bad performance). We also add significant parity overhead, since Minio and ZFS will both manage parity; note that by default Minio's parity is functionally equivalent to RAID1 in terms of the amount of parity data created. In this case the S3 protocol cannot be used to attack the backups, but a direct attack on the ZFS server could still destroy the underlying pool.
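For completeness, the multi-mountpoint trick looks roughly like this (hypothetical paths, all on the same pool, which is exactly why it performs badly):

```sh
# Four directories on one ZFS pool presented to Minio as four "disks";
# this satisfies the erasure-coding minimum so object lock can be enabled,
# but Minio will parallelise IO against what is physically a single pool
minio server /mnt/tank/minio/disk{1...4} --console-address ":9001"
```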
Scenario 3 : Customise the Minio plugin to use multiple mountpoints on multiple ZFS filesystems; with at least 4 mapped directories, erasure coding and object lock can be enabled. Not a great idea, as we still have the double parity overhead, but at least we've solved the parallelisation problem. Assuming multiple mirrored (RAID1) pools, we can distribute across at least 6 pools, which lets us change Minio's default parity setting to something closer to RAID6.
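A sketch of that layout, assuming six hypothetical mirrored pools each contributing one directory: with six "disks" and EC:2 you get 4 data + 2 parity shards, which is the RAID6-like profile mentioned above.

```sh
# One directory per mirrored pool; EC:2 means any object survives the
# loss of two of the six "disks"
MINIO_STORAGE_CLASS_STANDARD=EC:2 \
minio server /mnt/pool{1...6}/minio --console-address ":9001"
```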
Scenario 4 : Same as Scenario 3, but with single-disk pools. Don't. ZFS failure modes on single-disk pools are catastrophic.
Scenario 5 : Assuming you have the budget, buy a separate server with a bunch of disks and install Minio directly, without ZFS at all. In this case you'll want to really lock down network access, and possibly disable SSH or require MFA on SSH connections.
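As a starting point for that hardening, assuming OpenSSH (the account name is a placeholder):

```sh
# /etc/ssh/sshd_config excerpts -- restart sshd after editing
PasswordAuthentication no      # key-based auth only
PermitRootLogin no
AllowUsers minio-admin         # hypothetical dedicated admin account
```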
Like everything, it's all a matter of trade-offs and risk profiling. Personally, I'm hoping the new versions of Minio that enable object locking on single-disk setups work as advertised, since then I get the benefit of ZFS's proven data protection, auto-healing and so on. That said, if you have a lot of data to store in Minio, dedicated servers would be the best solution from a performance standpoint, as Minio is more efficient about spreading IO across the available disks, and logically it should be as robust as ZFS *in this specific context*.
Oh, and yes, if you modify _MINIO_STORAGE_CLASS_STANDARD_ to EC:2 or EC:3 after installation but before writing data, it will be taken into account. I think it should even work after data has been written, since the internal catalog knows which parity scheme each object was written with, and new writes will conform to the current setting (untested).
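For reference, a running server's parity setting can also be changed through the admin client rather than the environment variable (this uses the `local` alias from the earlier sketch; the after-data-has-been-written behaviour is the part I haven't verified):

```sh
# Set standard storage-class parity to EC:2 on an existing deployment
mc admin config set local storage_class standard=EC:2

# Restart the service so the new setting takes effect
mc admin service restart local
```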
Hope this helps