Are 2 of My Drives Failed? (See Edit: Moving Data Onto New Vdev, To Remove Old)

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
The faster the new drives come home, the safer it is for your data.
Not entirely, because I'm still not replacing the entire pool.
However, maybe it'd be best for me to add the old drives onto the current vdevs and turn all of them into 3-way mirrors.

That being said, I'm still trying to find information on how to properly add in the 20TB drives while removing the other vdevs.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Not entirely, because I'm still not replacing the entire pool.
However, maybe it'd be best for me to add the old drives onto the current vdevs and turn all of them into 3-way mirrors.

That being said, I'm still trying to find information on how to properly add in the 20TB drives while removing the other vdevs.
First you expand the pool by adding a vdev made of the new drives.
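TrueNAS normally expects this to be done through the WebUI so it can handle partitioning for you, but the underlying ZFS operation is roughly the sketch below. The device names are placeholders, and the pool name is taken from the zpool status output further down the thread; adjust both to your system.

Code:
# Dry run first: show what would be added without changing the pool.
zpool add -n PrimaryPool mirror da20 da21 da22

# Add the new top-level vdev: a 3-way mirror of the three 20TB drives.
zpool add PrimaryPool mirror da20 da21 da22

Once the new vdev is part of the pool, new writes start landing on it, but the existing data stays where it is until the old vdevs are actually removed.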
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
First you expand the pool by adding a vdev made of the new drives.
How would this work, though, if I am going to be removing the old vdevs? That's what is confusing me, I think, because the data needs to be moved onto the new drives so that it is safe to remove the old drives.

________________

Basically, if I am re-reading correctly, I can remove 2 of my 4TB vdevs and replace them with the 20TB vdev, given I want to keep my current remaining space.

So when I put in these 20tb drives, I can remove 8 of my 4tb drives (2 vdevs) and I should still have my current ~5-6tb of free space to use. Correct?
(If I removed 3 vdevs, I'd lose that free space.)
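One way to sanity-check the space math at each step is to look at per-vdev capacity and allocation. A minimal sketch, assuming the pool name from the status output further down the thread:

Code:
# Overall and per-vdev SIZE / ALLOC / FREE, so you can see how much data would
# have to be evacuated from the vdevs you plan to remove.
zpool list -v PrimaryPool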
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Sorry, I think I lost the big picture. What is the final configuration you want to have for your pools?

I understand one pool of two mirrored 20TB drives. What is the configuration of the second pool? Are you wanting mirrors of old 4TB drives or a RAIDZ2 of old 4TB drives? And how many drives?
So when I put in these 20tb drives, I can remove 8 of my 4tb drives (2 vdevs) and I should still have my current ~5-6tb of free space to use. Correct?
I'm really confused about what you think your configuration will be. If you make a 20TB mirror and you are able to place 19.4TB of data on it, you have no space left. That is assuming you can fit 19.4TB of data on it, which I doubt. So yes, please explain your desired end configuration; I think you posted it in these 5 pages of text, but I didn't see it.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
You add more space with the big-drive vdev, then you detach the old drives from the WebUI and the data gets moved off them onto the rest of the pool.
Sorry, I think I lost the big picture. What is the final configuration you want to have for your pools?

I understand one pool of two mirrored 20TB drives. What is the configuration of the second pool? Are you wanting mirrors of old 4TB drives or a RAIDZ2 of old 4TB drives? And how many drives?

I'm really confused about what you think your configuration will be. If you make a 20TB mirror and you are able to place 19.4TB of data on it, you have no space left. That is assuming you can fit 19.4TB of data on it, which I doubt. So yes, please explain your desired end configuration; I think you posted it in these 5 pages of text, but I didn't see it.
Nope, another thread.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Sorry, I think I lost the big picture. What is the final configuration you want to have for your pools?

I understand one pool of two mirrored 20TB drives. What is the configuration of the second pool? Are you wanting mirrors of old 4TB drives or a RAIDZ2 of old 4TB drives? And how many drives?

I'm really confused about what you think your configuration will be. If you make a 20TB mirror and you are able to place 19.4TB of data on it, you have no space left. That is assuming you can fit 19.4TB of data on it, which I doubt. So yes, please explain your desired end configuration; I think you posted it in these 5 pages of text, but I didn't see it.
Look at the thread he linked, but basically: replace a few of the vdevs with the 20TB drives as a 3-way mirror and keep a handful of the 4TB vdevs, so it'd be a mixture of both. Then, when I can allocate more funds in the near future to replace the others, it'd eventually be all 20TB vdevs.
I think it came out to needing 9 of the 20TB drives to complete it.

So my pool would turn from 7x 4TB vdevs to 5x 4TB 2-way mirror vdevs and 1x 20TB 3-way mirror vdev, although I may reuse the old 4TB drives and add them onto the others to make them all 4TB 3-way mirror vdevs instead of 2-way.
You add more space with the big-drive vdev, then you detach the old drives from the WebUI and the data gets moved off them onto the rest of the pool.
But when I add more drives by expanding the pool, they go in as a separate vdev, so the data will just be spread across that vdev too.
The data isn't being moved off the vdevs I'm wanting to remove onto those drives, so I can't just put in the new vdev and then completely remove the other vdevs. The data needs to come off of them.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
That being said, I am currently running a scrub again. It seems a little too weird to me. But maybe the drive was just seated weird or something.
On a side note, the scrub didn't find any issues. Interesting.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
But when I add more drives by expanding the pool, they go in as a separate vdev, so the data will just be spread across that vdev too.
The data isn't being moved off the vdevs I'm wanting to remove onto those drives, so I can't just put in the new vdev and then completely remove the other vdevs. The data needs to come off of them.
Seems a bit complicated to me. As long as it's clear in your mind, that is what counts and I hope it works out the way you want.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
The data isn't being moved off the vdevs I'm wanting to remove onto those drives, so I can't just put in the new vdev and then completely remove the other vdevs. The data needs to come off of them.
As far as I am aware, that's exactly what should happen if you have enough space. I am talking about the WebUI, not the shell.
Can anyone confirm this?
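For what it's worth, the shell-level mechanism this relies on, as I understand it, is OpenZFS top-level vdev removal, which copies a vdev's data onto the rest of the pool before freeing it. A hedged sketch, assuming an all-mirror pool with matching ashift on every vdev (which removal requires); mirror-5 is just an example name taken from the status output later in the thread:

Code:
# Evacuate and remove one of the old 4TB mirror vdevs.
zpool remove PrimaryPool mirror-5

# Watch progress: the status output gains a "remove:" line while data is being
# copied off the vdev.
zpool status -v PrimaryPool

# An in-progress removal can be cancelled if needed.
zpool remove -s PrimaryPool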
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Seems a bit complicated to me. As long as it's clear in your mind, that is what counts and I hope it works out the way you want.
Complicated how so? I just don't have the money to replace all of my vdevs with 20TB drives upfront, so I will replace part of it for now.
Basically, I'm replacing a few vdevs with one vdev of higher-capacity drives.

Beyond that, as I said, I will probably take the drives I'm pulling out and attach them onto the other vdevs to turn them into 3-way mirrors, just for added reliability on those vdevs (rough command sketch at the end of this post).
So it'd turn the system from 7 vdevs of 4TB 2-way mirrors into 1x 3-way mirror of 20TB drives and about 5 vdevs of 4TB 3-way mirrors, until I can replace those 4TB vdevs with more 20TB vdevs in the future and eventually weed out all the unreliable old 4TB drives.
As far as I am aware, that's exactly what should happen if you have enough space. I am talking about the WebUI, not the shell.
Can anyone confirm this?
Would like confirmation as well. But yeah, something doesn't ring right in my mind with that process, because the data is spread across all vdevs. If I put in the 20TB mirror and then just go remove the other vdevs, they still have my data on them.
How would TrueNAS know "he is planning to remove these vdevs, so we need to make sure to move all the data over to this new vdev"?
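On the 3-way mirror part mentioned above: at the ZFS level, growing an existing 2-way mirror is a plain zpool attach against one of its current members. A sketch with placeholder labels; note that TrueNAS normally wants the new member to be a gptid-labelled partition it created itself, so the WebUI route is the safer one:

Code:
# Attach a freed-up 4TB drive to an existing 2-way mirror to make it 3-way.
# The first device is an existing member of that mirror (its gptid from
# zpool status); the second is the drive being added. Both are placeholders.
zpool attach PrimaryPool gptid/EXISTING-MEMBER-GPTID gptid/NEW-MEMBER-GPTID

# The new member resilvers while the vdev stays online.
zpool status -v PrimaryPool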
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
If I put in the 20TB mirror and then just go remove the other vdevs, they still have my data on them.
How would TrueNAS know "he is planning to remove these vdevs, so we need to make sure to move all the data over to this new vdev"?
You don't just pull them out; you have to detach them in software.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
You don't just pull them out; you have to detach them in software.
No, I get that, but how will TrueNAS know they are staying out and that the data needs to be moved off of them onto the new drives?
Couldn't detach also be used in circumstances where you plan to put that drive back in?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
No, I get that, but how will TrueNAS know they are staying out and that the data needs to be moved off of them onto the new drives?
Couldn't detach also be used in circumstances where you plan to put that drive back in?
IIRC, TN doesn't allow you to bring a vdev offline if doing so takes your pool offline.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
So basically, how it works is:

I add in the 3 20TB drives as a new vdev
Then I go and click detach on each of the 2 drives in any of my 4TB vdevs
It will auto resilver onto the 20TB drives
And then I can remove them
And then I go and click detach on each of the 2 drives in another 4TB vdev
It will resilver
And I can remove them

Now the data is on the 20TB drives, and I can go and reattach the 4 4TB drives to the existing vdevs to turn them into 3-way mirrors.

Correct?

For the first part, should I detach both drives in a vdev at once? Because it's gonna kick in my spare drives too.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Btw, the new drives came in today, but I will probably run a conveyance SMART test on them first, and then a long SMART test, before I pop them in.
My whole rack is getting redone on Tuesday, so I can't pop them in anyway; if they don't finish resilvering in time, I don't want to have to shut the server down mid-resilver.
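For the SMART part, a minimal sketch of kicking those tests off by hand; daX is a placeholder device name, and the same tests can be scheduled from the WebUI's S.M.A.R.T. test tasks:

Code:
# Conveyance self-test (checks for shipping/transport damage), then a long test
# once the first one has finished.
smartctl -t conveyance /dev/daX
smartctl -t long /dev/daX

# Check progress and the self-test log.
smartctl -a /dev/daX
smartctl -l selftest /dev/daX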


That being said, I am currently running a scrub again. It seems a little too weird to me. But maybe the drive was just seated weird or something.
Also, in regards to this, as I stated, the pool shows healthy now, and it ran another scrub last night and still shows healthy.
That spare still shows as in use though; should I just detach it? (See the command sketch after the status output below.)

Code:
 zpool status -v
  pool: PrimaryPool
 state: ONLINE
  scan: scrub repaired 0B in 15:04:36 with 0 errors on Fri Aug 25 02:57:48 2023
config:

        NAME                                              STATE     READ WRITE CKSUM
        PrimaryPool                                       ONLINE       0     0   0
          mirror-0                                        ONLINE       0     0   0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-1                                        ONLINE       0     0   0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-2                                        ONLINE       0     0   0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-3                                        ONLINE       0     0   0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-4                                        ONLINE       0     0   0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
          mirror-5                                        ONLINE       0     0   0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76    ONLINE       0     0   0
            gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76    ONLINE       0     0   0
          mirror-6                                        ONLINE       0     0   0
            gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76    ONLINE       0     0   0
            spare-1                                       ONLINE       0     0   0
              gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76  ONLINE       0     0   0
              gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76  ONLINE       0     0   0
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76      INUSE     currently in use
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76      AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:06 with 0 errors on Sat Aug 26 03:46:06 2023
config:
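On the spare question above: as I understand hot-spare handling, once the original member of mirror-6 is healthy again you return the spare to AVAIL by detaching the spare's copy inside spare-1 (the entry under "spares" is not what gets detached). A sketch using the gptid from the status output; treat it as something to verify rather than gospel:

Code:
# gptid/0d48d4ab-... is the spare currently sitting inside spare-1 next to the
# original member gptid/c774316e-..., which shows ONLINE again.
zpool detach PrimaryPool gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76

# Afterwards the spare should show AVAIL again under "spares".
zpool status -v PrimaryPool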
 
Last edited:

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Detach those spares.
Just tried detaching the one that is in use and I'm getting an error.

Code:
[EZFS_NOTSUP] Cannot detach root-level vdevs

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 232, in __zfs_vdev_operation
    op(target, *args)
  File "libzfs.pyx", line 402, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 232, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 241, in <lambda>
    self.__zfs_vdev_operation(name, label, lambda target: target.detach())
  File "libzfs.pyx", line 2158, in libzfs.ZFSVdev.detach
libzfs.ZFSException: Cannot detach root-level vdevs

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 985, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 241, in detach
    self.__zfs_vdev_operation(name, label, lambda target: target.detach())
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 234, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_NOTSUP] Cannot detach root-level vdevs
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 139, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1236, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 981, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1103, in detach
    await self.middleware.call('zfs.pool.detach', pool['name'], found[1]['guid'])
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1279, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1244, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1250, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1169, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1152, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_NOTSUP] Cannot detach root-level vdevs
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Just tried a reboot and it's still not letting me detach it. Hmm.
 