Does the starting size of the vdev, and how it's initially laid out, cause any problems once it's expanded to 10-20x its original size?
There's potentially a risk here when you start talking about that many multiples of size.
When ZFS is given a disk, it carves it up into smaller chunks called "metaslabs", which it uses to track and allocate free space. The key detail is that metaslab sizes are powers of two (2^N), much like ashift values, and ZFS picks a size that gets the count as close to (but not over) roughly 200 metaslabs as possible.
So for a 1TB disk, you end up with roughly 125 slabs of 8GB each.
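That sizing rule can be sketched as follows. This is a simplified model, not the actual OpenZFS implementation, and the function name is made up for illustration; note the ~125 figure above is the decimal 1000/8 shorthand, while a binary 8GiB slab against a decimal 1TB disk comes out nearer 116:

```python
# Hypothetical sketch of the metaslab sizing rule described above:
# pick the smallest power-of-two slab size that keeps the slab
# count at or under the ~200 target.
TARGET_SLABS = 200

def pick_metaslab_size(vdev_bytes, target=TARGET_SLABS):
    size = 1
    while vdev_bytes // size > target:
        size *= 2  # metaslab sizes are powers of two
    return size

one_tb = 10**12  # decimal terabyte, as disk vendors count
size = pick_metaslab_size(one_tb)
count = one_tb // size
print(size == 2**33, count)  # 8 GiB slabs, 116 of them
```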
However, the metaslab size doesn't change when you expand the vdev. Grow it to 10TB and you end up with the same 8GB slab size, but roughly 1250 of them; at 20TB that's about 2500 per disk.
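Since the slab size is frozen at creation time, only the count grows with the vdev. A quick check of the arithmetic, again a simplified model rather than actual ZFS code, using decimal disk sizes and the binary 8GiB slab (so the counts land slightly below the round 1250/2500 shorthand above):

```python
# Slab size is fixed when the vdev is created; expansion only adds slabs.
SLAB = 2**33  # 8 GiB, picked back when the vdev was 1TB

for tb in (1, 10, 20):
    count = (tb * 10**12) // SLAB
    print(f"{tb:>2} TB vdev -> {count} metaslabs")
# prints counts of 116, 1164, and 2328 -- roughly the
# 125 / 1250 / 2500 figures quoted in the text
```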
The housekeeping and behavior of ZFS under conditions like this aren't well documented or well observed. It's entirely possible the impact is limited to more CPU and RAM spent juggling the higher slab count, or it could trigger some weird overflow or other nasty condition. The slab target of 200 was chosen for a reason (though we should probably track down the original authors to find out exactly why that number was picked), and landing that far off target might cause odd behavior.
By that point, though, we may be in a situation similar to running ashift=9 on a 4Kn disk, which performs abysmally and can only be fixed by creating a new pool.