SOLVED [EBUSY] Failed to delete dataset: cannot destroy 'ssd/pve-lun0': dataset is busy

danwestness

Dabbler
Joined
Apr 2, 2022
Messages
22
ENVIRONMENT: TrueNAS-SCALE-22.02.2.1

I am attempting to delete a ZVOL and keep receiving the error that the dataset is busy.
I have attempted a number of things to resolve this, unsuccessfully.

I have restarted TrueNAS multiple times.
I have ensured there are no NFS / iSCSI / SMB shares or targets referencing this ZVOL.
I disabled all services (NFS / iSCSI / SMB), removed them from auto-start, and restarted fresh to make sure none of them were running at all.
There are no snapshots

I removed all replication jobs that previously referenced this ZVOL
I have attempted this from the WebUI as well as from the SSH shell via various command options, all yielding the same 'dataset is busy' response:
zfs destroy ssd/pve-lun0
zfs destroy -f ssd/pve-lun0
zfs destroy -fr ssd/pve-lun0
zfs destroy -fR ssd/pve-lun0

I attempted to use a PRE-INIT command to delete the ZVOL at an early point in system startup, using the same commands, with no luck there either.
I have inspected running processes, lsof output, etc., and cannot find anything using this ZVOL that would cause it to be busy.
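For anyone retracing these steps, a sketch of checks along those lines (the /dev/zd0 node below is an assumed example; resolve the real device from the symlink first):

ls -l /dev/zvol/ssd/pve-lun0      # resolve the zvol's block device node (the zd number varies per system)
lsof /dev/zd0                     # any open file handles here will keep the zvol busy
fuser -v /dev/zd0                 # same check via fuser
lsblk /dev/zd0                    # shows partitions the kernel still exposes on top of the zvol
grep zd /proc/partitions          # quick list of all zvol-backed block devices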

This is all I see in the logs as well:
[2022/07/17 21:39:39] (ERROR) ZFSDatasetService.do_delete():988 - Failed to delete dataset
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 981, in do_delete
    subprocess.run(
  File "/usr/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['zfs', 'destroy', '-r', 'ssd/pve-lun0']' returned non-zero exit status 1.

I am at a complete loss as to how to delete this ZVOL and reclaim the space it is occupying. Please help!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You may need to run the destroy from CLI and use the -f flag to force it.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
As I stated above, that is already a step I tried and it did not work.
OK, but by "did not work", do you mean produced no output?

What did it give back?
 

danwestness

Dabbler
Joined
Apr 2, 2022
Messages
22
OK, but by "did not work", do you mean produced no output?

What did it give back?

Here is the response; it's the same as without the -f flag:
root@truenas[~]# zfs destroy -f ssd/pve-lun0
cannot destroy 'ssd/pve-lun0': dataset is busy
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
How about zfs unmount on it?
 

danwestness

Dabbler
Joined
Apr 2, 2022
Messages
22
How about zfs unmount on it?

You cannot unmount a ZVOL; that only applies to filesystems. Tried it anyway:
root@truenas[~]# zfs umount ssd/pve-lun0
cannot open 'ssd/pve-lun0': operation not applicable to datasets of this type
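(For anyone following along, a quick way to confirm that the dataset really is a volume rather than a filesystem is to check its type and volume properties:)

zfs get type,volmode,volsize ssd/pve-lun0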
 
Joined
Oct 22, 2019
Messages
3,641
Can you retry, but also add the -v flag? (You don't really need to use the -f flag, but I've come to realize that what you read about a command might not always be a thorough explanation. For example, some flags can have a "double use".)

zfs destroy -vfrR ssd/pve-lun0

Just be careful with -r and -R! Understand what they do.
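For reference, per the zfs-destroy man page: -r recursively destroys the dataset's children (including snapshots), -R additionally destroys dependents such as clones outside the hierarchy, and -v prints what would be destroyed. Adding -n turns it into a dry run, so you can preview what a destroy would remove without actually removing anything:

zfs destroy -nvrR ssd/pve-lun0    # dry run: prints what would be destroyed, deletes nothing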
 

danwestness

Dabbler
Joined
Apr 2, 2022
Messages
22
Can you retry, but also add the -v flag? (You don't really need to use the -f flag, but I've come to realize that what you read about a command might not always be a thorough explanation. For example, some flags can have a "double use".)

zfs destroy -vfrR ssd/pve-lun0

Just be careful with -r and -R! Understand what they do.

Here is the output:
root@truenas[~]# zfs destroy -vfrR ssd/pve-lun0
will destroy ssd/pve-lun0
cannot destroy 'ssd/pve-lun0': dataset is busy
 
Joined
Oct 22, 2019
Messages
3,641
Tread with caution.

Are you willing to export the pool and re-import it?

Running out of ideas here.

Last two things I can think of:

1. Export and re-import the pool to try to destroy the zvol again.

2. Export the pool and import it into a separate Linux system (or reboot the same system with a live USB of a recent Linux distro) to try to destroy the zvol.

Either case requires exporting the pool.
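A minimal sketch of option 1, assuming the pool is named ssd as in this thread (and that nothing else, such as the system dataset, is pinning the pool):

zpool export ssd
zpool import ssd
zfs destroy ssd/pve-lun0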
 

danwestness

Dabbler
Joined
Apr 2, 2022
Messages
22
Tread with caution.

Are you willing to export the pool and re-import it?

Running out of ideas here.

Last two things I can think of:

1. Export and re-import the pool to try to destroy the zvol again.

2. Export the pool and import it into a separate Linux system (or reboot the same system with a live USB of a recent Linux distro) to try to destroy the zvol.

Either case requires exporting the pool.

Well... I tried, but it won't work since this pool also houses the system dataset for TrueNAS itself.
 

danwestness

Dabbler
Joined
Apr 2, 2022
Messages
22
Welp.... I actually got it figured out...

Looks like a side effect of me attempting to use Proxmox's "ZFS on Linux" storage once upon a time. It appears Proxmox actually SSHed into the TrueNAS host and created a volume group and logical volumes (which are not in the ZFS file system and therefore not part of the output of any of the ZFS commands or the TrueNAS interface).

After I deleted the volume group and its logical volumes, I was able to successfully delete the defunct ZVOL as well.

I found some references here that helped me figure it out: https://forum.proxmox.com/threads/c...n-vg-pv-inside-need-filter-in-lvm-conf.80877/

root@truenas[~]# vgdisplay
--- Volume group ---
VG Name truenas-ssd-lun0
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <500.00 GiB
PE Size 4.00 MiB
Total PE 127999
Alloc PE / Size 12800 / 50.00 GiB
Free PE / Size 115199 / <450.00 GiB
VG UUID eUK1Ep-iQfg-yX8t-nRrC-o5g6-1q47-3SPYCQ

root@truenas[~]# vgremove truenas-ssd-lun0
Do you really want to remove volume group "truenas-ssd-lun0" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume truenas-ssd-lun0/vm-105-disk-0? [y/n]: y
Logical volume "vm-105-disk-0" successfully removed
Volume group "truenas-ssd-lun0" successfully removed
root@truenas[~]# zfs destroy ssd/pve-lun0
root@truenas[~]#
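In other words, the LVM layer on the TrueNAS host had activated a volume group on top of the ZVOL's block device, which is what kept it busy. If you want to release the ZVOL without destroying the data on it, deactivating the volume group instead of removing it should also work; a sketch using the VG name from the output above:

pvs                               # the zvol's /dev/zd* device should show up as a physical volume
vgs                               # lists the volume group found on that PV
lvs                               # lists its logical volumes
vgchange -an truenas-ssd-lun0     # deactivate all LVs in the VG without deleting anything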
 

Benji99

Cadet
Joined
Jan 30, 2023
Messages
9
Welp.... I actually got it figured out...

Looks like a side effect of me attempting to use Proxmox's "ZFS on Linux" storage once upon a time. It appears Proxmox actually SSHed into the TrueNAS host and created a volume group and logical volumes (which are not in the ZFS file system and therefore not part of the output of any of the ZFS commands or the TrueNAS interface).

After I deleted the volume group and its logical volumes, I was able to successfully delete the defunct ZVOL as well.

I found some references here that helped me figure it out: https://forum.proxmox.com/threads/c...n-vg-pv-inside-need-filter-in-lvm-conf.80877/
I just wanted to chime in. I also had trouble deleting a zvol and dataset and this fixed it. Thanks!
 

molay

Dabbler
Joined
Dec 6, 2022
Messages
22
The same problem troubled me for a weekend...
Thanks to Google and the TrueNAS community, I finally found the cause and solution.
 

MisterDeeds

Cadet
Joined
May 1, 2023
Messages
5
Welp.... I actually got it figured out...

Looks like a side effect of me attempting to use Proxmox's "ZFS on Linux" storage once upon a time. It appears Proxmox actually SSHed into the TrueNAS host and created a volume group and logical volumes (which are not in the ZFS file system and therefore not part of the output of any of the ZFS commands or the TrueNAS interface).

After I deleted the volume group and its logical volumes, I was able to successfully delete the defunct ZVOL as well.

I found some references here that helped me figure it out: https://forum.proxmox.com/threads/c...n-vg-pv-inside-need-filter-in-lvm-conf.80877/
Thank you very much for sharing! I had exactly the same error with Proxmox 8.1.3 and TrueNAS-SCALE-23.10.0.1 using "ZFS over iSCSI".
 