How to clear an external drive for temporary replication backups

jengle

Dabbler
Joined
Jan 4, 2023
Messages
26
I have a 3 TiB drive (in a USB enclosure) that I used to back up one of my pools during setup. Now I want to use the same drive to back up 3 specific datasets. I used fdisk to remove the old partitions with the w(rite) command, but when I rebooted they were still there. Then I went ahead and mounted the single drive as a pool (striped) and removed the datasets using the UI (didn't think of that the first time around), but kept the original pool name, extMain.

When I run my replication task I get two layers of directories, but no subdirectories below that and no data files; total space used is just a few KB. Thinking that this might be a permissions issue, I created a dataset 'bu' to send the replication to, but still didn't get any further. Then I tried to set the owner & user to admin for the bu dataset, and I get:

Saving Permissions

Error: [EFAULT] [Errno 30] Read-only file system: '/mnt/extMain/bu'

I would prefer to use this drive, as my only other available drive is a reserve 4 TiB drive. What I would really like to do is just 'erase' the drive and start fresh for this task, but I have not found an answer searching the forums. I must be missing something basic; this does not seem like it should be difficult!
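One thing I found while searching: a ZFS replication destination is typically left with readonly=on, which may be exactly what the error above is complaining about. I assume it could be checked and, if need be, cleared from a shell with something like this (dataset names as above):

zfs get readonly extMain/bu      # shows whether replication left the dataset read-only
zfs set readonly=off extMain/bu  # clear it before trying to change permissions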

Thanks!
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Make a new pool on the drive with only one disk, but do it using the GUI instead of fdisk. I think you are thinking too hard about this; if you just use the UI, so that the middleware is aware of the new pool, it should work fine.
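If the GUI refuses because the old pool's ZFS label is still on the disk, the stale label can be cleared from a shell first. A minimal sketch, assuming the old data partition shows up as /dev/sdg1 (a guess -- confirm the device with lsblk before running anything destructive):

lsblk -o NAME,SIZE,MODEL       # confirm which device is the external drive
zpool labelclear -f /dev/sdg1  # wipe the leftover ZFS label from the old data partition

After that, the UI's pool-creation wizard should see a blank disk.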
 

jengle

Dabbler
Joined
Jan 4, 2023
Messages
26
Thanks @NickF. I actually did try that at first, which is why I then tried the fdisk approach. I was unable to mount it as a pool with a different name. I have no doubt at this point that I'm thinking too hard :smile:!

So (just now) I tried to offline the drive from the pool to see if I could attach it to a different drive, and I got:

CallError

[EZFS_NOREPLICAS] cannot offline /dev/disk/by-partuuid/2ad0d345-a834-449f-a952-e6a7ce75f679: no valid replicas

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 273, in __zfs_vdev_operation
    op(target, *args)
  File "libzfs.pyx", line 465, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 273, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 328, in <lambda>
    self.__zfs_vdev_operation(name, label, lambda target: target.offline())
  File "libzfs.pyx", line 2294, in libzfs.ZFSVdev.offline
libzfs.ZFSException: cannot offline /dev/disk/by-partuuid/2ad0d345-a834-449f-a952-e6a7ce75f679: no valid replicas

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 115, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1322, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 328, in offline
    self.__zfs_vdev_operation(name, label, lambda target: target.offline())
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 275, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_NOREPLICAS] cannot offline /dev/disk/by-partuuid/2ad0d345-a834-449f-a952-e6a7ce75f679: no valid replicas
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 196, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1335, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1318, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1186, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1157, in offline
    await self.middleware.call('zfs.pool.offline', pool['name'], found[1]['guid'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1386, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1343, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1349, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1264, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1249, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_NOREPLICAS] cannot offline /dev/disk/by-partuuid/2ad0d345-a834-449f-a952-e6a7ce75f679: no valid replicas
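For later readers: as far as I can tell, "no valid replicas" just means ZFS refuses to offline the only disk of a single-disk (striped) pool, because no copy of the data would remain. The way to detach such a pool appears to be exporting it instead, either with the UI's Export/Disconnect action or, roughly, from a shell (pool name as above):

zpool export extMain   # cleanly detach the single-disk pool instead of offlining it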

I have removed all replicas and snapshots. So I shut down the system, disconnected the external drive, and powered it back up. When I try to create a new pool with the one drive I get:

FAILED

[EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sdg

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 426, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 461, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1186, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1318, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 765, in do_create
    await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1386, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1335, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 33, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 26, in format_disk
    devname = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1386, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1346, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1249, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/disk_info.py", line 82, in gptid_from_part_type
    raise CallError(f'Partition type {part_type} not found on {disk}')
middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sdg

Then I did an export/disconnect, which destroyed the old pool, created a new pool with the one drive, and got the same error as above. The drive does not get mounted now.

I rebooted to make sure everything had been cleared out, and got the same error as above.
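For anyone else who hits this: the partition type GUID in the error appears to be the ZFS data partition the middleware expects to find after formatting the disk, so the failure suggests stale partition metadata is still in the way. The usual shell-level fix seems to be wiping the disk completely and letting the UI repartition it. A sketch, assuming the external drive really is /dev/sdg (double-check with lsblk before running anything destructive):

lsblk -o NAME,SIZE,MODEL /dev/sdg  # make sure this is really the external drive
sgdisk --zap-all /dev/sdg          # destroy the GPT and MBR partition structures
wipefs -a /dev/sdg                 # clear any remaining filesystem/ZFS signatures

After a wipe like this, creating the pool from the UI should start from a truly blank disk.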
 

jengle

Dabbler
Joined
Jan 4, 2023
Messages
26
I switched to using rsync to another server instead.
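For anyone curious, a minimal sketch of that kind of rsync job (the hostname, pool, and dataset paths here are placeholders, not my actual layout):

# -a preserves permissions/ownership/timestamps, -z compresses in transit
rsync -az \
  /mnt/mainPool/dataset1 \
  /mnt/mainPool/dataset2 \
  /mnt/mainPool/dataset3 \
  backupserver:/mnt/backups/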
 