Yes, replace in place is great. Especially for a vDev that has more disks with errors than redundancy, (like both disks of a 2 way mirror having errors). It does require another disk slot, (though there are ways to work around that issue).
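For reference, it's just a "zpool replace" with both the old and new disk named, something like this, (pool and device names are made up, adjust for your system):

    # The failing disk (ada1) stays attached and keeps serving its good blocks
    # while ZFS resilvers onto the new disk (ada4) sitting in the spare slot.
    zpool replace tank ada1 ada4

    # Watch the resilver; ada1 is only detached once ada4 is fully resilvered.
    zpool status tank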
Back in the bad old days, with a RAID set that had more disks with errors than redundancy, many times you were screwed. Some RAID controllers did not support hot-spares, or adding hot-spares after the fact. And even if they did, if you did not have a free slot, it was time for a full backup and restore. Plus, the old method of activating a hot-spare was to logically remove the failing disk from the RAID set BEFORE re-syncing the hot-spare. Thus, worthless in the case of more disks with failed blocks than redundancy.
This replace in place also applies to RAID-Z1, Z2 & Z3. For example, 2 failing disks on a Z1, 3 on a Z2, or 4 on a Z3 is one more than the redundancy, so that's too many for pull and replace. Thus, my desire to have a free slot in any ZFS storage server.
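To be concrete, a rough sequence for a RAID-Z2 vDev with 3 failing disks, fixed one at a time, (names made up again):

    # Replace the worst disk first. The other failing disks stay attached and
    # keep contributing whatever good blocks they still have to the resilver.
    zpool replace tank da3 da9      # da9 = new disk in the free slot
    zpool status tank               # wait for the resilver to complete

    # When the resilver finishes, da3 is detached and its slot is free for the
    # next new disk. Repeat for da5, then da7.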
As for working around a lack of a free slot for replace in place, if you have a second pool you can export to free up some slots, that's an option. Or even another vDev in the same pool that is healthy. For example, say you have a pool of 2 x RAID-Z2 vDevs, one completely healthy and one with 3 failing disks. You could off-line a disk from the healthy RAID-Z2 vDev, (leaving it degraded but still with 1 disk of redundancy), to free up a slot for your replace in place on the degraded RAID-Z2 vDev.
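Roughly, that workaround looks like this, again with made up names, pool "tank", healthy vDev member da0, worst disk of the degraded vDev da6, and a new disk da12:

    # Off-line one member of the healthy RAID-Z2 vDev. That vDev is now
    # degraded too, but still has 1 disk of redundancy left.
    zpool offline tank da0

    # Physically pull da0, install the new disk in that slot, then do the
    # replace in place against the degraded vDev's worst disk.
    zpool replace tank da6 da12

    # After the resilver, put da0 back in the slot the dead da6 vacated and
    # bring it back on-line; ZFS only resilvers the blocks it missed.
    zpool online tank da0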
Last, it should be clear that ZFS will attempt to get a disk with a block error to spare out that block: when a read or checksum fails, ZFS repairs the block from redundancy and re-writes it, and that re-write is what lets the drive remap the bad sector to one of its spares. So during a ZFS scrub of a pool you can see checksum or read errors, but still have no real problems left afterward. That's one reason why ZFS wants direct access to the disk controller. Any extra level of indirection, (Virtual Machine or hardware RAID controller), can prevent that from functioning. Only after all the drive's spare sectors are used up do you get permanent disk block errors. Then it's past time to replace the disk.
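If you want to check up on that, something like this works, (assuming smartmontools is installed, and made up pool / device names):

    # Scrub the pool; ZFS re-writes any block that fails its checksum, which
    # is what gives the drive the chance to remap the bad sector to a spare.
    zpool scrub tank
    zpool status -v tank     # read/checksum error counts, plus any files with
                             # permanent (unrepairable) errors

    # The drive's own count of how many sectors it has spared out,
    # SMART attribute 5, Reallocated_Sector_Ct:
    smartctl -A /dev/ada1 | grep -i reallocated

    # After a clean scrub, reset the pool's error counters.
    zpool clear tank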