A workaround might be to create a new dataset and zfs send or mv everything into it. This should force ZFS to spread the data more evenly across vdevs, especially if you move it to a temp dataset and then back to the main dataset, since that touches every block and rewrites it on disk.
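Something along these lines is roughly what I mean; "tank" and the dataset names are just placeholders, and it assumes the pool has enough free space for a second copy while the shuffle happens:

```
# Snapshot the source dataset so zfs send gets a consistent point in time.
zfs snapshot tank/data@rebalance

# Send it to a temporary dataset; the receive writes fresh blocks, which
# ZFS allocates across all vdevs currently in the pool.
zfs send tank/data@rebalance | zfs recv tank/data_tmp

# Swap the copy into place (or send it back again if you prefer), then
# clean up the leftover snapshot.
zfs destroy -r tank/data
zfs rename tank/data_tmp tank/data
zfs destroy tank/data@rebalance
```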
I currently have 44TB of data stored individually on 10 x 2TB drives and 7 x 4TB drives.
On my new ReadyNAS box I have the 10 x 2TB drives and 10 x 1TB drives, which are now all running with no SMART errors.
Maybe my best bet would be the following:
1. Create 2 temp zpools, each with a single vdev consisting of 10 like-sized drives. Let's call them zpool1 and zpool2 (rough commands for these steps are sketched after the list).
2. Copy the contents of the 7 x 4TB drives into the two zpools until they reach 90% full, which is about 21.6TB combined. That leaves 6.4TB that I need to find a home for; I can probably scare up misc internal and external drives around the house to store that on.
3. Take the 7 x 4TB drives that are now empty, add the 2 x 4TB drives that were used for SnapRAID parity, and purchase 1 new 4TB Red, and I now have 10 x 4TB drives to set up another single-vdev zpool with. Let's call it zpool3.
4. Copy the data from the 10 x 2TB drives into the newly created zpool3, along with the 6.4TB I temporarily stored elsewhere on my network.
5. Create zpool4 using the 10 x 2TB disks freed up in step 4.
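Very roughly, steps 1-5 would look something like the following. The device names, the raidz2 layout, and the use of rsync (assuming the source disks aren't ZFS) are all assumptions on my part:

```
# Step 1: two temp pools, one 10-disk vdev each (bash brace expansion,
# placeholder device names).
zpool create zpool1 raidz2 /dev/da{0..9}      # the empty 10 x 2TB drives
zpool create zpool2 raidz2 /dev/da{10..19}    # the 10 x 1TB drives

# Step 2: copy the 4TB source disks in one at a time, watching capacity.
rsync -avh /mnt/4tb_disk1/ /zpool1/
zpool list                                    # stop filling at ~90% CAP

# Step 3: build zpool3 from the ten freed-up / new 4TB drives.
zpool create zpool3 raidz2 /dev/da{20..29}

# Steps 4-5: copy the 2TB source disks (plus the 6.4TB parked elsewhere)
# into zpool3, then create zpool4 from the emptied 2TB drives.
rsync -avh /mnt/2tb_disk1/ /zpool3/
zpool create zpool4 raidz2 /dev/da{30..39}
```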
After the above steps are carried out, I will have 4 zpools as follows, with no data striped across vdevs at all:
zpool1 10 x 2TB 90% full
zpool2 10 x 1TB 90% full
zpool3 10 x 4TB 83% full
zpool4 10 x 2TB 0% full
So if I understand what you're saying correctly, I would need to add additional vdevs to zpool4 to hold ALL the data on zpools 1-3 before starting the mv process?
But if I do that, and then later add the vdevs from zpools 1-3 into zpool4, those new vdevs would not contain any of the data already striped across zpool4.
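If I'm reading it right, the "add vdevs first" part would just be something like this (placeholder device names, raidz2 assumed), with the caveat that ZFS only spreads newly written data across the added vdevs:

```
# Grow zpool4 with additional 10-disk vdevs before starting the mv;
# existing blocks stay where they are, only new writes hit the new vdevs.
zpool add zpool4 raidz2 /dev/da{40..49}
zpool add zpool4 raidz2 /dev/da{50..59}
```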
I guess my ultimate goal is to have my 44TB of data evenly striped across a single zpool with 7 vdevs in it, each with 10 disks. My configuration would be as follows (with a quick way to check the spread sketched after the list):
vdev1 10 x 2TB
vdev2 10 x 1TB
vdev3 10 x 4TB
vdev4 10 x 2TB
vdev5 10 x 1TB (in 3rd chassis not yet in my possession)
vdev6 10 x 1TB (in 3rd chassis not yet in my possession)
vdev7 10 x ?TB (I haven't purchased these yet. They are to replace all my 1TB drives that failed with SMART errors. My guess is I'll go with 4TB Reds for this.)
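As far as checking whether that goal is actually being met, my understanding is that `zpool list -v` breaks the allocation out per vdev, so something like this would show how evenly the 44TB ends up spread (pool name is a placeholder):

```
# Show per-vdev size, allocated space, and free space for the pool.
zpool list -v tank
```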
Maybe I should just not worry about striping the data evenly across vdevs, stand up a single production zpool now with the 10 x 2TB and 10 x 1TB drives as its first two vdevs, and keep adding vdevs as I free up the existing drives with data on them?
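In that case the whole thing collapses to something like the sketch below: create the production pool with the two vdevs I have empty drives for, then `zpool add` the rest as they free up (names and raidz2 layout are assumptions again), accepting that older data won't be striped across the later vdevs:

```
# Day one: production pool with the two vdevs I can build immediately.
zpool create tank raidz2 /dev/da{0..9}        # 10 x 2TB
zpool add    tank raidz2 /dev/da{10..19}      # 10 x 1TB

# Later, as each batch of drives is emptied or purchased:
zpool add tank raidz2 /dev/da{20..29}         # 10 x 4TB, and so on
```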