Check if second mirror has data

TristanM

Dabbler
Joined
Sep 16, 2017
Messages
20
Hi,

My NAS has a single pool, originally made up of a single mirror set (2 drives), but several months ago I added a second mirror set (another 2 drives), and now one drive of the second set has failed. Is there a way to check if there is data on the second mirror set, to know if I can remove it? (Possibly I don't grasp how data is distributed across mirror sets for a single pool.)

My hopes for next steps:
  1. Possibly remove the (newer) second mirror set altogether, hence asking about checking if there is data on it. (Finance limitation - I'll add another set later.)
  2. Purchase a new replacement drive - I assume it should be the same size? Therefore, if I wanted to upgrade, I would have to buy two hard drives?
Many thanks
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
First:
zpool list -v will show you the distribution amongst VDEVs in a pool.

You can use the zpool remove command if you have enough space in the "remaining" VDEV to hold all pool contents.
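For example, a minimal sketch (hypothetical pool name "tank" and vdev name "mirror-1" - substitute the real names from the list output):
Code:
# Per-vdev size/alloc/free for every pool
zpool list -v

# Evacuate and remove the later-added mirror vdev ("tank"/"mirror-1" are example names)
zpool remove tank mirror-1

# Watch the evacuation progress
zpool status tank

Note that removing a data vdev like this only works if the pool contains no RAIDZ vdevs and is recent enough to support device removal.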
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Purchase a new replacement drive - I assume it should be the same size? Therefore, if I wanted to upgrade, I would have to buy two hard drives?
It should be at least the same size: if you buy a bigger one you simply don't use the increased space until you upgrade the other disk as well.
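A rough sketch of the replace path (hypothetical pool/disk names; on TrueNAS the GUI "Replace" action is the usual way, and disks are normally referenced by gptid):
Code:
# Swap the failed member for the new, equal-or-larger disk ("tank", "da3", "da5" are example names)
zpool replace tank da3 da5

# The extra space of bigger disks only becomes usable once every disk in the
# vdev has been upgraded and expansion is triggered
zpool set autoexpand=on tank
zpool online -e tank da5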
 

TristanM

Dabbler
Joined
Sep 16, 2017
Messages
20
First:
zpool list -v will show you the distribution amongst VDEVs in a pool.

You can use the zpool remove command if you have enough space in the "remaining" VDEV to hold all pool contents.
Thanks for the reply. I tried to remove my mirror, but it says "Operation not supported on this type of pool". I must be misunderstanding or doing something wrong. I have attached a screenshot. Any ideas?

[UPDATE] I tried to remove from the UI Pool Status screen and got the same error, but here is the full log:
Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 226, in __zfs_vdev_operation
    op(target, *args)
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 226, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 258, in <lambda>
    self.__zfs_vdev_operation(name, label, lambda target: target.remove())
  File "libzfs.pyx", line 2098, in libzfs.ZFSVdev.remove
libzfs.ZFSException: operation not supported on this type of pool

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 258, in remove
    self.__zfs_vdev_operation(name, label, lambda target: target.remove())
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 228, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_POOL_NOTSUP] operation not supported on this type of pool
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 138, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1213, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1217, in remove
    await self.middleware.call('zfs.pool.remove', pool['name'], found[1]['guid'])
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_POOL_NOTSUP] operation not supported on this type of pool
 

Attachments

  • zpool-remove.png (32.5 KB)

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
OK, I think what's happening here is that the command won't work on a degraded pool.

You'll need to detach the removed disk from mirror-1 first, then just remove the remaining single disk (since mirror-1 will cease to exist with only one member).
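Roughly, from the CLI, with hypothetical pool/device names (use the gptid labels shown by zpool status; the GUI's Detach and Remove actions do the same thing):
Code:
# 1. Detach the failed disk so mirror-1 collapses to a single-disk vdev
zpool detach tank gptid/aaaa-failed-disk

# 2. Remove that remaining single-disk top-level vdev from the pool
zpool remove tank gptid/bbbb-remaining-disk

# 3. Check evacuation/removal progress
zpool status tank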
 

TristanM

Dabbler
Joined
Sep 16, 2017
Messages
20
OK, I think what's happening here is that the command won't work on a degraded pool.

You'll need to detach the removed disk from mirror-1 first, then just remove the remaining single disk (since mirror-1 will cease to exist with only one member).
Hi @sretalla, thank you for your help. So I detached the drive in the UI, and now mirror-1 has disappeared, but I can't see a way to remove the remaining drive. I tried pulling it physically and the entire pool went offline (tried it twice :eek:). Now I am worried that if this second drive that I want to remove goes down, it will take the entire pool down. Any ideas?


[Screenshot: 1682448757211.png]



I tried to detach the drive but no luck

[Screenshot: 1682448859523.png]


Any ideas?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Make a backup before doing anything else. You are in danger territory.
 

TristanM

Dabbler
Joined
Sep 16, 2017
Messages
20
Is this correct? Still can't remove it.

[Screenshot: 1682454739285.png]



@NugentS - This is my backup. Any advice on sorting the issue out?


I should have enough space; I don't think anything is on the drive.

[Screenshot: 1682454791671.png]
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Can you put the output of zpool status vault into code tags please - it's easier to read and indents matter.
Oh, and use SSH rather than the crappy GUI shell, which messes up the formatting.
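Something like this over SSH, then paste the whole output between [CODE][/CODE] tags:
Code:
zpool status -v vault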
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
As a side note, it appears that you are very close to seeing a drastic decrease in performance, since your vdev's used capacity is almost at 85%.
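You can check where each vdev stands with something like this (the CAP column is the one to watch):
Code:
zpool list -v vault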

Consider rebalancing your pool (look in my signature) once you have fixed the current issue, and don't ever go near or over 90%.
 

TristanM

Dabbler
Joined
Sep 16, 2017
Messages
20
@NugentS - Yes, will try in the future. I don't have SSH clients on all my devices. Considering the information is only a few lines, it is still readable. Any ideas on how to remove the drive?

@Davvo - Definitely, will do a clean up and save towards some drives. Thanks for the advice.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
@Davvo - Definitely, will do a clean up and save towards some drives. Thanks for the advice.
What I mean is something different from a "simple" clean-up: rebalancing the pool once you expand it will rewrite all the data while distributing it between the vdevs, greatly increasing performance (one vdev 80% full vs two vdevs 40% full; free space = performance for ZFS).
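As a rough illustration of what rebalancing amounts to (hypothetical dataset name, it needs enough free space for a temporary second copy, and nothing should be writing to the dataset while it runs):
Code:
# Rewriting a dataset redistributes its blocks across all current vdevs
# ("vault/media" is an example dataset name)
zfs snapshot vault/media@rebalance
zfs send vault/media@rebalance | zfs receive vault/media_new

# After verifying the copy, swap the datasets and clean up
zfs destroy -r vault/media
zfs rename vault/media_new vault/media
zfs destroy vault/media@rebalance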

Anyway, it's not the most pressing matter right now.
 

TristanM

Dabbler
Joined
Sep 16, 2017
Messages
20
What I mean is something different from a "simple" clean-up: rebalancing the pool once you expand it will rewrite all the data while distributing it between the vdevs, greatly increasing performance (one vdev 80% full vs two vdevs 40% full; free space = performance for ZFS).

Anyway, it's not the most pressing matter right now.
Got you. Too scared to do anything right now until I remove this empty drive.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
rebalancing the pool once you expand it
to my knowledge there is no way to rebalance a pool beyond rewriting all the data, which, unless the data is never touched, would happen naturally.

additionally, to my knowledge, having free space in the pool is what matters, not which specific vdev that free space is in. all writes will go where there is the most free space due to CoW. this would mean that rebalancing the pool doesn't matter, which would be in line with having no way to rebalance the pool.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
ok. that's the "how".
what is the "Why?"?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
what is the "Why?"?
[...] rebalancing the pool once you expand it will rewrite all the data while distributing it between the vdevs, greatly increasing performance [...]
- reduced fragmentation (at the vdev level)
- increased IOPS, since each vdev has its own IOPS.

Also, less work for the individual vdevs during scrubs.

The point is, since he is doing the work of adding another vdev, why not do some maintenance on the first vdev he has.
 

TristanM

Dabbler
Joined
Sep 16, 2017
Messages
20
I appreciate a bit of banter, but this is a bit off topic - so how do I get this drive removed?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Thanks for the reply. I tried to remove my mirror, but it says "Operation not supported on this type of pool". I must be misunderstanding or doing something wrong.
it's not a perfect process. there are a few conditions that just prevent it.
you first might want to check your pool version. if it's too old, this will not be possible.
one of your earlier screenshots shows there is a zpool upgrade available.
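a quick way to check (these are read-only; note that actually running zpool upgrade is one-way and can stop older systems or boot environments from importing the pool, so hold off on it until the removal is sorted):
Code:
# legacy version number; feature-flag pools just report '-'
zpool get version vault

# pools with a legacy version or disabled features
zpool upgrade

# the specific feature that vdev removal depends on
zpool get feature@device_removal vault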
 