No, this goes to the fundamental design of how ZFS works, and the fact that it's like an order of magnitude more complex than classic RAID.
Classic RAID systems only care about presenting a block storage device to the OS, so they can run a long, tortured, but not particularly difficult process to widen a RAID5 array: sweep a "wipe" bar through it, using the "old" math ahead of the bar and the "new" math behind it on the newly rearranged part of the array.
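To make that concrete, here's a toy sketch of the two-mappings-at-once idea. Everything in it (the function names, the simplified left-symmetric layout, the watermark variable) is illustrative, not any particular controller's actual code:

```python
# Toy model of a RAID5 reshape: blocks behind the reshape watermark use
# the new geometry, blocks ahead of it still use the old one.
# Simplified left-symmetric layout; purely illustrative.

def raid5_map(lba, width):
    """Map a logical block to (disk, stripe_row) for a left-symmetric
    RAID5 layout with `width` disks (width - 1 data blocks per stripe)."""
    data_disks = width - 1
    stripe = lba // data_disks
    idx = lba % data_disks
    parity_disk = (width - 1) - (stripe % width)  # parity rotates per stripe
    disk = idx if idx < parity_disk else idx + 1  # skip over the parity slot
    return disk, stripe

def map_during_reshape(lba, old_width, new_width, reshape_watermark):
    """Blocks below the watermark have been rewritten with the new
    geometry; blocks above it still use the old geometry."""
    if lba < reshape_watermark:
        return raid5_map(lba, new_width)
    return raid5_map(lba, old_width)
```

The array stays readable throughout because every read just checks which side of the watermark it falls on.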
Classic filesystems can sometimes do things like "defrag" by reorganizing the locations of files to reduce fragmentation. This works because those filesystems are typically very small (compared to ZFS), and because they have no complications such as snapshots or dedup to consider, so you are usually only rearranging block pointers inside one set of metadata.
ZFS, on the other hand, can't easily defrag, because that would mean rewriting not only the "current" data but also every snapshot and dedup copy that references it (the infamous "block pointer rewrite" problem). For a copy-on-write filesystem, doing that correctly is VERY difficult, and, unlike your 100GB Windows NTFS filesystem, a 100TB or 1PB ZFS pool has orders of magnitude more metadata and complexity to worry about.
And now we circle around to the bad bit. Not only does expanding a RAIDZ involve that same sort of challenge, but ZFS block addresses themselves are derived from math tied to the structure of the RAIDZ vdev.
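A rough sketch of why the geometry leaks into the addressing: the space a block occupies on a RAIDZ vdev depends on how many disks are in it. The real logic lives in OpenZFS's `vdev_raidz_asize()`; this version is simplified (fixed 512-byte sectors, rounding omitted) and just shows the shape of the math:

```python
import math

# Simplified illustration of RAIDZ allocation-size math. The real code
# is vdev_raidz_asize() in OpenZFS; this omits its rounding details.

SECTOR = 512  # assume 512-byte sectors for illustration

def raidz_asize(psize, width, nparity):
    """Sectors actually allocated for a block of `psize` bytes on a
    RAIDZ vdev with `width` disks and `nparity` parity disks."""
    data_sectors = math.ceil(psize / SECTOR)
    # one parity sector per row of (width - nparity) data sectors
    parity_sectors = nparity * math.ceil(data_sectors / (width - nparity))
    return data_sectors + parity_sectors

# The same 128 KiB block consumes a different number of sectors on a
# 4-wide vs a 5-wide RAIDZ1 vdev:
raidz_asize(128 * 1024, 4, 1)  # 256 data + 86 parity = 342 sectors
raidz_asize(128 * 1024, 5, 1)  # 256 data + 64 parity = 320 sectors
```

Since on-disk block pointers embed sizes and offsets computed this way, you can't just bolt a disk onto the vdev and call it done.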
The upside is that, unlike RAID5, RAIDZ is an abstraction that exists entirely within ZFS's mind. So what is supposed to happen with "RAIDZ expansion" is that the actual layout of RAIDZ blocks can be modified purely by changing ZFS's code, inserting an abstraction layer where none existed before, and that layer is what makes expansion possible. It's still very difficult, though, and has been years in the making.
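The indirection idea can be sketched like this: existing sectors get copied ("reflowed") in logical order across the widened set of disks, and a progress marker tells reads which layout a given sector currently lives in. The caller's sector number never changes, only the translation inside the vdev does. This is a toy model under assumed round-robin placement, not the actual OpenZFS reflow code:

```python
# Toy model of the abstraction layer behind RAIDZ expansion: block
# pointers keep their old offsets, and the vdev translates each logical
# sector to wherever the reflow has (or hasn't) moved it. Illustrative
# only; not the real OpenZFS implementation.

def locate(sector, old_width, new_width, reflow_progress):
    """Return (disk, row) for a logical sector on a vdev mid-expansion.
    Sectors already reflowed live in the new round-robin geometry;
    the rest are still where the old geometry put them."""
    width = new_width if sector < reflow_progress else old_width
    return (sector % width, sector // width)

# Mid-expansion from 4 to 5 disks, with the first 1000 sectors reflowed:
locate(10, 4, 5, 1000)    # already reflowed: now spread over 5 disks
locate(5000, 4, 5, 1000)  # not yet reflowed: still in the 4-wide layout
```

The key design point is that nothing above the vdev ever has to be rewritten, which is what sidesteps the block pointer rewrite problem described earlier.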