I appear to have a similar situation to https://www.truenas.com/community/t...th-larger-drives-but-cant-expand-pool.112702/ but that thread ended inconclusively.
I have a pool, tank1. It is 6 disks, 3x 2-wide mirrors.
Originally, it was 6x 8 TB drives.
I replaced the first mirror vdev with 2x 10 TB drives, and it expanded as expected.
I replaced the second mirror vdev with 2x 10 TB drives, and it has NOT expanded as expected.
Code:
sudo zpool list -v tank1
sauron2: Sun Nov  5 09:21:21 2023

NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank1                                     23.6T  21.6T  2.01T        -         -     5%    91%  1.00x  ONLINE  /mnt
  mirror-0                                9.08T  7.56T  1.52T        -         -     5%  83.3%      -  ONLINE
    4aa5ef46-b636-4331-80bd-f99e2f965ebc  9.09T      -      -        -         -      -      -      -  ONLINE
    af41c1c6-f77c-40f4-aff1-c1455c21d549  9.09T      -      -        -         -      -      -      -  ONLINE
  mirror-1                                7.27T  7.18T  84.5G        -         -     9%  98.9%      -  ONLINE
    f5f746be-5bb4-4e37-8f28-07d1313a4118  7.28T      -      -        -         -      -      -      -  ONLINE
    f119d4ae-0783-4a22-9933-cbafd18b5432  7.28T      -      -        -         -      -      -      -  ONLINE
  mirror-2                                7.27T  6.85T   426G        -         -     3%  94.3%      -  ONLINE
    39d8df03-b26c-4053-a28a-78e53c8be433  7.28T      -      -        -         -      -      -      -  ONLINE
    3dd8f8ec-2e88-4f77-a380-586fdb60265a  7.28T      -      -        -         -      -      -      -  ONLINE
Code:
sudo zpool status tank1
sauron2: Sun Nov  5 09:21:56 2023

  pool: tank1
 state: ONLINE
  scan: scrub repaired 0B in 18:04:03 with 0 errors on Sun Nov  5 04:47:09 2023
config:

        NAME                                      STATE     READ WRITE CKSUM
        tank1                                     ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            4aa5ef46-b636-4331-80bd-f99e2f965ebc  ONLINE       0     0     0
            af41c1c6-f77c-40f4-aff1-c1455c21d549  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            f5f746be-5bb4-4e37-8f28-07d1313a4118  ONLINE       0     0     0
            f119d4ae-0783-4a22-9933-cbafd18b5432  ONLINE       0     0     0
          mirror-2                                ONLINE       0     0     0
            39d8df03-b26c-4053-a28a-78e53c8be433  ONLINE       0     0     0
            3dd8f8ec-2e88-4f77-a380-586fdb60265a  ONLINE       0     0     0

errors: No known data errors
Since the first vdev expanded automatically, I expected the rest to as well, but I have also tried expanding via both the GUI and the shell.
Both completed with no error shown; however, the space has never expanded.
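For reference, this is roughly what the shell-side attempt looked like, using the standard ZFS expansion knobs (the device names are the mirror-1 member IDs from the zpool status output above):

```shell
# Confirm the pool is allowed to grow automatically when a vdev's disks get bigger
sudo zpool get autoexpand tank1
sudo zpool set autoexpand=on tank1

# Explicitly tell ZFS to use the expanded capacity of each new mirror-1 disk
sudo zpool online -e tank1 f5f746be-5bb4-4e37-8f28-07d1313a4118
sudo zpool online -e tank1 f119d4ae-0783-4a22-9933-cbafd18b5432

# Check whether SIZE / EXPANDSZ changed for mirror-1
sudo zpool list -v tank1
```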
I downloaded my debug logs and went through them, but did not see any obvious errors in any of the logs. I did see the log entries showing the pool expansion commands.
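Beyond the middleware logs, the pool's own command history should also record whether the expansion operations actually reached ZFS; this is what I checked:

```shell
# Show the most recent administrative commands recorded against the pool
sudo zpool history tank1 | tail -n 20
```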
The pool was originally created on Bluefin. Its ZFS features were upgraded a few days after the Cobia upgrade, and before the first vdev expansion.
Where/what should I be looking at next for troubleshooting?
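In case it helps anyone answering: my next guess is that the data partitions on the replacement disks may not span the full 10 TB, so the vdev has nothing to expand into. Something like this should show it (`/dev/sdX` is a placeholder for whichever device backs a mirror-1 member, not one of my actual disk names):

```shell
# Map the mirror-1 partuuids back to their block devices and sizes
lsblk -o NAME,SIZE,PARTUUID

# Print the partition table, including any unallocated space at the end of the disk
sudo parted /dev/sdX unit GB print free
```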
TrueNAS-SCALE-23.10.0.1
hardware:
HP ProLiant DL360 Gen10, 96 GB ECC, 2x Xeon Gold 5220R (48c/96t total), 8-bay SFF 2.5" with mixed SSDs in Z2.
LSI SAS2116 in IT mode -- SAS9201-16e
NetApp DS4243 disk shelf: 24 disks, 6 pools, mostly 4- or 6-wide mirror sets, plus a couple of single-disk vdevs as interim storage.
I also had this problem on a previous pool; I ended up moving the data off, re-creating the pool, and moving the data back on. I really don't want to waste another week or more doing the same.
Thanks for any thoughts or recommendations.