scwst
Explorer · Joined Sep 23, 2016 · Messages: 59
This is from the "duh" department, but to let others learn from my mistakes, here's what I didn't think of with my current, "evolutionary" build.
tl;dr: When replacing a mirrored pair of ancient drives (default ashift=9) in place with a pair of new drives (default ashift=12), the new drives inherit the old vdev's ashift, making ZFS unhappy, and you'll have to destroy and rebuild the pool, which will make you unhappy.
I started testing my first setup with two old 500 GB drives from 2007/2008 in a mirrored configuration, so if something went wrong, it would be no problem. That worked just fine, so I added two more ancient 640 GB WD Caviar Blue drives (2009/2010) as a second mirror to the same pool. Yay, still no problems, so then I got real: added a third vdev made of two mirrored 3 TB WD Red drives (2015/2016). And verily, all was well.
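For reference, the pool grew in roughly these steps (a sketch only; the pool name and device names are made up, and the ashift values are what ZFS auto-detects at vdev creation):
zpool create tank mirror ada0 ada1    # 500 GB pair, plain 512-byte sectors -> ashift=9
zpool add tank mirror ada2 ada3       # 640 GB Caviar Blue pair, also 512 -> ashift=9
zpool add tank mirror ada4 ada5       # 3 TB Red pair, 512e with 4K physical sectors -> ashift=12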
So I copy a bunch of stuff over from my old Synology, but not too much, as I don't want to completely fill the 3 TB drives yet. The rest goes to external hard drives (twice). I then proceed to take the Synology apart, which gives me two 2 TB WD Red drives (both 2013), one 3 TB WD Red, and one 1 TB WD Red (yes, that was a stupid setup; no, I didn't know what I was doing at the time).
Now, the next step is easy, right? Replace the ancient 500 GB drives in place, one by one, with the two 2 TB WD Reds, which keeps redundancy the whole time. It takes a bit longer than I had been led to expect (three hours), but whatever, it works fine. My pool has expanded!
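The mechanics are the usual one-disk-at-a-time dance (device names made up again); with autoexpand=on set, or a zpool online -e afterwards, the extra space appears once both halves have been swapped:
zpool set autoexpand=on tank    # so the extra capacity shows up at the end
zpool replace tank ada0 ada6    # wait for the resilver to finish...
zpool status tank               # ...and check, before touching the second disk
zpool replace tank ada1 ada7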
But wait,
zpool status tank
is really unhappy now about the new mirrored vdev:
One or more devices are configured to use a non-native block size. Expect reduced performance.
and
block size: 512B configured, 4096B native
in the listing for the two 2 TB drives. Huh? This makes no sense to me at first, because
diskinfo -v /dev/ada0
and friends give me the exact same sectorsize/stripesize combination for the 3 TB drives as for the 2 TB drives.
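To show why the disks look identical from this angle: on a 512e Advanced Format drive, the relevant lines of that output look something like this (illustrative numbers for a 3 TB disk, trimmed):
/dev/ada4
        512             # sectorsize
        3000592982016   # mediasize in bytes (2.7T)
        4096            # stripesize
Both the 2 TB and the 3 TB Reds report 512 logical/4096 physical here, so diskinfo alone can't explain the warning.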
After fighting the horrible documentation on zdb for a bit, I figure out I need to use
zdb -U /data/zfs/zpool.cache | grep ashift
which gives me this for the three mirrored vdevs:
ashift: 9
ashift: 9
ashift: 12
Oops. The first one is the old 640 GB drives, which are plain 512-byte-sector disks; their ashift=9 is legit. The last one is the new 3 TB drives, which are 512e (512-byte logical, 4096-byte physical sectors), so when their vdev was created, they got ashift=12. Correct. And the one in the middle is the 2 TB WD Reds, which should be ashift=12 as well, but the step-by-step replacement forced ashift=9 on them: a replacement disk simply inherits the ashift of the vdev it joins. Further research shows that you can't change an existing vdev's ashift, and the only real solution is to destroy the pool and start over. Well, Scheibenkleister (German for, roughly, "darn").
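When I do start over, the way to guarantee ashift=12 from the get-go on a FreeBSD-based system like this one is, as far as I can tell, the vfs.zfs.min_auto_ashift sysctl, which sets a floor on the ashift of newly created top-level vdevs (it does nothing for replacements into an existing vdev):
sysctl vfs.zfs.min_auto_ashift=12     # minimum ashift for NEW top-level vdevs
zpool create tank mirror ada0 ada1    # comes up as ashift=12 even on 512e disks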
All rather logical once you think about it, but I didn't. The good news is that my data survived all of this intact, and the pool still works and is fast enough for what I need it to do. At some point, I'll get some more drives, back everything up externally (twice), and then do it all over, this time with new drives only.
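For the backup leg, one way to do it (assuming the external drives carry a ZFS pool of their own, hypothetically named backup) would be a recursive snapshot plus send/receive:
zfs snapshot -r tank@migrate                            # recursive snapshot of everything
zfs send -R tank@migrate | zfs receive -F backup/tank   # replicate the whole tree, properties included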