Detaching this one: gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76 ONLINE 0 0 0

What are you doing when you try to detach it?
Are you detaching the second one in the active Spare part of Mirror 6?
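Before detaching anything, it can help to double-check which physical disk a gptid label maps to. A minimal sketch on TrueNAS CORE / FreeBSD (the grep pattern is just the first chunk of the gptid from this thread):

# map the gptid label to its da device before touching it
glabel status | grep 0d48d4ab
# then confirm where that disk sits in the pool layout
zpool status PrimaryPool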
I have not tried it via shell yet. I'm running a scrub, so I will wait and let that finish, then try it tomorrow.

zpool detach PrimaryPool gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76
It "should" be doing that... I guess we'll see if that's really the case or not when you try it.

Isn't this what the GUI is doing anyways though? I'm failing to see why this would work when the GUI is basically running this command, unless it's not?
zpool detach PrimaryPool gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76
It seems to have worked. I will run a scrub once more to ensure it doesn't kick it back into the pool, but I think it should be fine.

It "should" be doing that... I guess we'll see if that's really the case or not when you try it.
# zpool status -v
  pool: PrimaryPool
 state: ONLINE
  scan: scrub repaired 0B in 13:23:22 with 0 errors on Wed Aug 30 11:45:15 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        PrimaryPool                                     ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2  ONLINE       0     0     0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76  ONLINE       0     0     0
            gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76  ONLINE       0     0     0
          mirror-6                                      ONLINE       0     0     0
            gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76  ONLINE       0     0     0
            gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76  ONLINE       0     0     0
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76    AVAIL
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76    AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:06 with 0 errors on Sat Aug 26 03:46:06 2023
config:
I'm still a little shaky on the exact process for adding in the new drives.

First you expand the pool by adding a vdev made of the new drives.
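For the CLI-minded, a minimal sketch of that expansion step, assuming the two new 20tb disks show up as da16 and da17 (placeholder device names; the WebUI's add-vdev flow handles the partitioning and gptid labelling for you, so this is only an illustration of what happens underneath):

# add a new mirrored vdev built from the two new drives (placeholder device names)
zpool add PrimaryPool mirror /dev/da16 /dev/da17
# confirm the new mirror shows up as an additional top-level vdev
zpool status PrimaryPool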
Unless I am missing something, you just told me you have to detach them. I understand that, but it doesn't address my concern about what happens when you detach them. Since there are 2 drives in each vdev, if I detach one, I don't think it will start resilvering onto the new drives until the next drive is detached. That's why I asked if it's possible to somehow keep both drives in while migrating the data to the new drives, to avoid the possibility of a drive dying while resilvering/expanding onto the new drives.

You don't just pull them out, you have to [software] detach them.
Ok, I just looked at that. It wasn't entirely clear up front and a little mixed in.

As far as I am aware (this should have been addressed in this or in the other thread), if you have enough space and you remove a vdev, all the data in that vdev is migrated into the others.
EDIT: found it. It was on the other thread.
Migrating To New Drives, Ironwolf HDD Pro? Other?
I want to migrate my drives sometime soon. Currently my system is running mirror pairs of x15 SAS drives, x2 of which are spares. They are all used HGST HUS726040AL4210 & HITACHI HUS72604CLAR4000 drives, stamped with 2016 dates. So they are quite old and are making me nervous. While they are...
www.truenas.com
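To connect the vdev-removal explanation above to the shell side, a minimal sketch, assuming mirror-6 is the old mirror being evacuated (the WebUI's "Remove" button on a vdev drives the same operation):

# evacuate a whole top-level mirror vdev; ZFS copies its data onto the remaining vdevs
zpool remove PrimaryPool mirror-6
# watch the evacuation progress; it appears under a "remove:" line in the status output
zpool status PrimaryPool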
As far as I know, no.

Is it possible to keep both drives in and utilized, to lower the risk of one of the drives failing while migrating the data?
You should use the WebUI whenever possible.

So basically I add in the 20tb drives as a new vdev... then when you guys say "remove" a vdev, am I running detach on both of the 2 drives in a single mirror?
I think maybe you are misunderstanding my question. I am using the WebUI. My question, simply put, is making sure I'm clicking "remove" on the whole vdev, and not "detach" on the individual drives in the vdevs.

You should use the WebUI whenever possible.
[EFAULT] Failed to wipe disks:
1) da15: [Errno 1] Operation not permitted: '/dev/da15'
2) da9: [Errno 1] Operation not permitted: '/dev/da9'

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 139, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1236, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 981, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1290, in remove
    raise CallError(f'Failed to wipe disks:\n{error_str}')
middlewared.service_exception.CallError: [EFAULT] Failed to wipe disks:
1) da15: [Errno 1] Operation not permitted: '/dev/da15'
2) da9: [Errno 1] Operation not permitted: '/dev/da9'
Naw. It's a failure.

Not sure if this error is because it wants it to be run from shell.
Yup, looks like an error, so I'd be concerned. It is time for you to generate a bug report, I think.

But is it not concerning that it also says it failed to wipe disks?
The spare drives are still there, but they are not active in any of the vdevs.

Naw. It's a failure.
Yup, looks like an error so I'd be concerned. It is time for you to generate a bug report I think.
I suspect (this is a guess on my part) that the disk wipe, having been part of the 'remove' step, was likely removing the partition data from the spare drive you were trying to remove and then re-establishing the partitions as needed. Are da9 and da15 your spare disks now? I ask because your system has changed some and I do not want to assume anything. If they are the spare drives, I would find out if they are good to use now. What if the drives are not set up to actually become a spare again when they need to?
So, I would consider the error important until you find out otherwise.
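One way to sanity-check those spares after the failed wipe, as a sketch, assuming da9 and da15 really are the two spare disks and the pool is PrimaryPool:

# confirm the spares are still listed as AVAIL in the pool
zpool status PrimaryPool
# check that the spare disks still carry the expected partition table and gptid labels
gpart show da9
gpart show da15
# quick health check on the drives themselves
smartctl -a /dev/da9
smartctl -a /dev/da15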
# zpool status -v
  pool: PrimaryPool
 state: ONLINE
  scan: scrub repaired 0B in 13:44:54 with 0 errors on Fri Sep 1 01:39:56 2023
remove: Removal of vdev 6 copied 1.39T in 3h55m, completed on Tue Sep 5 16:54:55 2023
        28.2M memory used for removed device mappings
config:

        NAME                                            STATE     READ WRITE CKSUM
        PrimaryPool                                     ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2  ONLINE       0     0     0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76  ONLINE       0     0     0
            gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76  ONLINE       0     0     0
          mirror-7                                      ONLINE       0     0     0
            gptid/8ab56673-4c0d-11ee-8b4c-ac1f6be66d76  ONLINE       0     0     0
            gptid/8ab75bbc-4c0d-11ee-8b4c-ac1f6be66d76  ONLINE       0     0     0
            gptid/8aa4f83e-4c0d-11ee-8b4c-ac1f6be66d76  ONLINE       0     0     0
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76    AVAIL
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76    AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:11 with 0 errors on Sat Sep 2 03:46:11 2023
config: