I can crank up the compression for the initial TimeMachine image (which looks like it has lots of runs of repeated bytes in many of the segments) and then set it to a more performant value afterwards for subsequent updates.
I don't see the point of this.
Presumably, you're running TimeMachine over the network (rather than iSCSI) so your backup is stored in a compressed sparsebundle. So you may not actually see any benefit from compression at all.
Let's say there is a benefit, though. TimeMachine completes the initial backup to a dataset with high compression. Now you drop the compression to get some write speed back. As TM starts pruning old backups, the bands inside the sparsebundle will need to be altered, and those highly compressed blocks will be rewritten at the lower compression. If you get lucky, a significant number of bands may never be rewritten because the source data is static, but I'm skeptical that this would happen often. So the likely outcome is that you eventually end up with a dataset populated entirely at the lower compression.
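The mechanics, roughly: ZFS compression settings apply only to blocks written after the change, so the old gzip-9 blocks linger until TM rewrites them. A sketch of the workflow being proposed (the pool/dataset name `tank/timemachine` is hypothetical, and these commands obviously need a ZFS host):

```shell
# Crank compression up for the big initial backup
# (hypothetical dataset name; needs root or delegated permissions)
zfs set compression=gzip-9 tank/timemachine

# ...initial TimeMachine backup runs...

# Drop to a faster algorithm afterwards. This only affects blocks
# written from now on -- the existing gzip-9 blocks stay as they are
# until TM rewrites the bands that contain them.
zfs set compression=lz4 tank/timemachine

# Watch the achieved ratio drift toward what lz4 gets on this data
# as bands are rewritten
zfs get compressratio tank/timemachine
```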
Just pick the compression level that you think is appropriate and stick with it. (I have lz4 on my TimeMachine datasets and the ratio sits at 1.00; the data is already compressed.)
My email archive uses gzip-9, and so do my jails (the ports tree ratio is over 3) and syslog (ratio is 11.68!). Pretty much everything else started on lzjb and is now on lz4.
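If you want to survey your own pool the same way, the ratios above come straight from the `compressratio` property. Something like this (pool name `tank` is a placeholder):

```shell
# Show the configured algorithm and the achieved ratio
# for every dataset under the pool, recursively
zfs get -r -o name,property,value compression,compressratio tank
```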