Pool degraded/write errors after adding a new drive.

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
It is indeed possible (on TrueNAS 12+) to remove top-level vdevs from pools that contain only mirrors or stripes; it does consume some small amount of additional RAM, but if the goal is to rebuild eventually then it might be acceptable.
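
For reference, the general form is a single zpool remove against the top-level vdev (hypothetical pool and disk names below, not the ones from this thread):

zpool remove tank da3

ZFS then evacuates that vdev's data onto the remaining vdevs and keeps a small in-memory mapping table for the relocated blocks, which is where the extra RAM goes.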
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Any way of just removing the drive, even if it means losing that 73GB?
A single drive vdev is a 1-way mirror, so you should be able to remove the drive from the GUI, and have these 73 GB transferred to the 8 TB drive.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
A single drive vdev is a 1-way mirror, so you should be able to remove the drive from the GUI, and have these 73 GB transferred to the 8 TB drive.
It's not though. He's basically added that second drive and striped the pool to include that drive. So now the vdev that used to be 1 disk now has 2 disks.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
…making a pool of two vdevs which are 1-way mirrors. There's no such thing as a "stripe vdev".
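
As an illustration with hypothetical names: a "striped" pool is simply a pool created (or extended) without a mirror/raidz keyword, so each disk becomes its own top-level vdev:

zpool create tank sda sdb    # two single-disk top-level vdevs
zpool add tank sdc           # adds a third single-disk top-level vdev

which is what happened to the pool in this thread when the second drive was added.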
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
…making a pool of two vdevs which are 1-way mirrors. There's no such thing as a "stripe vdev".
In a sense, they're all vdevs of one device each. :wink:

It's supported, but it seems the UI is missing the option for single-drive removal.

@bar1 You can run the following command from SSH:

zpool remove -n Bar1-8TB de364a5e-87b8-4b39-8d76-cecc219c7bb8

This will let you know how much RAM will be consumed after the migration. If it's acceptable, remove the -n and re-run the command to do it for real.
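
If you want to double-check the device label before removing anything, the pool's members (listed by that same partuuid) can be shown with:

zpool status Bar1-8TB
zpool list -v Bar1-8TB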
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
In a sense, they're all vdevs of one device each. :wink:

It's supported, but it seems the UI is missing the option for single-drive removal.

@bar1 You can run the following command from SSH:

zpool remove -n Bar1-8TB de364a5e-87b8-4b39-8d76-cecc219c7bb8

This will let you know how much RAM will be consumed after the migration. If it's acceptable, remove the -n and re-run the command to do it for real.
Thanks, I'll confirm my cloud backup is up to date and then attempt it.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
zpool remove -n Bar1-8TB de364a5e-87b8-4b39-8d76-cecc219c7bb8
Memory that will be used after removing de364a5e-87b8-4b39-8d76-cecc219c7bb8: 398K

This is a little unclear to me... what does it actually mean?
"Memory that will be used"
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
zpool remove -n Bar1-8TB de364a5e-87b8-4b39-8d76-cecc219c7bb8
Memory that will be used after removing de364a5e-87b8-4b39-8d76-cecc219c7bb8: 398K

This is a little unclear to me... what does it actually mean?
"Memory that will be used"
It means that ZFS will need another 398K of memory - less than half a megabyte - to keep track of the location of data that gets moved off of that 18T disk.

Go ahead and re-run the command without the -n, and then you can issue zpool status -v Bar1-8TB to track the progress of the device removal.
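
If you'd rather not keep re-running it by hand, something like this should also work (zpool wait is available on recent OpenZFS releases):

zpool status -v Bar1-8TB        # shows a remove: line with the amount copied, % done and ETA
zpool wait -t remove Bar1-8TB   # blocks until the evacuation finishes

Once it completes, the removed device shows up as an indirect entry in the pool layout.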
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
It means that ZFS will need another 398K of memory - less than half a megabyte - to keep track of the location of data that gets moved off of that 18T disk.

Go ahead and re-run the command without the -n, and then you can issue zpool status -v Bar1-8TB to track the progress of the device removal.
So will that move the files from the 18TB to my 8TB, or not?
Sorry about all the questions.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Yes. -n is for a dry-run; without this option the remove command will do what its name says.
 

bar1

Contributor
Joined
Dec 18, 2018
Messages
115
Hi,
So it started out promising:
remove: Evacuation of /dev/disk/by-partuuid/de364a5e-87b8-4b39-8d76-cecc219c7bb8 in progress since Sun Jul 2 03:10:00 2023
22.4G copied out of 71.1G at 92.5M/s, 31.49% done, 0h8m to go

and then:
Removal of /dev/disk/by-partuuid/de364a5e-87b8-4b39-8d76-cecc219c7bb8 canceled on Sun Jul 2 03:18:50 2023

Should I remove the files with errors, then run zpool clear and zpool scrub first? Because I do have some errors on this disk again.
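
For reference, the general shape of that sequence on this pool (using the pool name from this thread) would be:

zpool status -v Bar1-8TB    # lists any files with permanent errors
zpool clear Bar1-8TB        # resets the error counters
zpool scrub Bar1-8TB        # re-checks the whole pool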
 