ZFS CLI tools can't find zpools after 22.12.3.3 update

cyclerider

Dabbler
Joined
Nov 7, 2013
Messages
26
I was trying to get some info about snapshots and compression ratios from the CLI, and it's saying there are no pools or datasets available, but they all show up in the GUI.

Root's shell is set to /usr/bin/bash, and here's what happens when I run `zfs list` and `zpool list`:

Code:
root@truenas:/home/admin# zfs list
no datasets available
root@truenas:/home/admin# zpool list
no pools available


What's the fix?
 
Joined
Oct 22, 2019
Messages
3,641
What is shown with zpool import without any arguments?
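i.e., run it bare with no pool name and paste whatever it prints (it should list any importable pools along with their state):
Code:
zpool import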
 

cyclerider

Dabbler
Joined
Nov 7, 2013
Messages
26
It finds them, and most of them show this:

Code:
state: UNAVAIL
status: The pool was last accessed by another system.
 
Joined
Oct 22, 2019
Messages
3,641
What happens if you try to export the pool from the command line?
Code:
zpool export <pool_name>


You can try to "force" import the pool, and use its "id" instead of "name".
Code:
zpool import -f <long_string_of_numbers>


Not sure what happened to make it believe it's being used by "another" system.
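For example (the id here is made up; yours will be whatever the zpool import listing shows on its "id:" line):
Code:
zpool import -f 1234567890123456789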
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
How about sudo? I think I saw somewhere that zfs commands need sudo to work properly.
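Something along the lines of:
Code:
sudo zfs list
sudo zpool list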
 

cyclerider

Dabbler
Joined
Nov 7, 2013
Messages
26
What happens if you try to export the pool from the command line?
Code:
zpool export <pool_name>


You can try to "force" import the pool, and use its "id" instead of "name".
Code:
zpool import -f <long_string_of_numbers>


Not sure what happened to make it believe it's being used by "another" system.
Are you sure? Since the pools are accessible in the GUI and work through things like SMB, should I really force an export?
 
Joined
Oct 22, 2019
Messages
3,641
Since the pools are accessible in the GUI and work through things like SMB, should I really force an export?

I thought it was only a GUI quirk, but since you're now mentioning you can access your data as normal, this is perhaps the bug that @sretalla mentioned above:

How about sudo? I think I saw somewhere that zfs commands need sudo to work properly.


Although, I've never needed to use "sudo" to list pools and datasets.
 

cyclerider

Dabbler
Joined
Nov 7, 2013
Messages
26
Just tried as admin and root, no change.

Code:
admin@truenas:~$ zfs list
-bash: zfs: command not found
admin@truenas:~$ sudo zfs list
no datasets available
admin@truenas:~$ sudo su
root@truenas:/home/admin# zfs list
no datasets available
root@truenas:/home/admin# sudo zfs list
no datasets available
root@truenas:/home/admin# 
 
Joined
Oct 22, 2019
Messages
3,641
What about a generic zfs command, such as zfs --version?
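If I remember right, that prints both the userland and kernel module versions (zfs-x.y.z and zfs-kmod-x.y.z), so a mismatch between the two would show up there:
Code:
zfs --version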
 
Joined
Oct 22, 2019
Messages
3,641
Did you change your paths and/or shell?
Code:
echo $PATH
 

cyclerider

Dabbler
Joined
Nov 7, 2013
Messages
26
root@truenas:/home/admin# echo $PATH
/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
 
Joined
Oct 22, 2019
Messages
3,641
I feel like we've seen this "bug" before, and it had to do with SCALE specifically. Can't seem to find the thread for it.

To rule something out:
Code:
/bin/zfs --version

or

Code:
/sbin/zfs --version
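It might also help to see every "zfs" binary your shell can find:
Code:
type -a zfs
which -a zfs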
 

cyclerider

Dabbler
Joined
Nov 7, 2013
Messages
26
Code:
root@truenas:/home/admin# /bin/zfs --version
bash: /bin/zfs: No such file or directory
root@truenas:/home/admin# /sbin/zfs --version
unrecognized command '--version'
usage: zfs command args ...
where 'command' is one of the following:

        create [-p] [-o property=value] ... <filesystem>
        create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
        destroy [-rRf] <filesystem|volume>
        destroy [-rRd] <snapshot>

        snapshot [-r] [-o property=value] ... <filesystem@snapname|volume@snapname>
        rollback [-rRf] <snapshot>
        clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
        promote <clone-filesystem>
        rename <filesystem|volume|snapshot> <filesystem|volume|snapshot>
        rename -p <filesystem|volume> <filesystem|volume>
        rename -r <snapshot> <snapshot>
        list [-rH][-d max] [-o property[,...]] [-t type[,...]] [-s property] ...
            [-S property] ... [filesystem|volume|snapshot] ...

        set <property=value> <filesystem|volume|snapshot> ...
        get [-rHp] [-d max] [-o "all" | field[,...]] [-s source[,...]]
            <"all" | property[,...]> [filesystem|volume|snapshot] ...
        inherit [-rS] <property> <filesystem|volume|snapshot> ...
        upgrade [-v]
        upgrade [-r] [-V version] <-a | filesystem ...>
        userspace [-hniHp] [-o field[,...]] [-sS field] ... [-t type[,...]]
            <filesystem|snapshot>
        groupspace [-hniHpU] [-o field[,...]] [-sS field] ... [-t type[,...]]
            <filesystem|snapshot>

        mount
        mount [-vO] [-o opts] <-a | filesystem>
        unmount [-f] <-a | filesystem|mountpoint>
        share <-a | filesystem>
        unshare <-a | filesystem|mountpoint>

        send [-RDp] [-[iI] snapshot] <snapshot>
        receive [-vnF] <filesystem|volume|snapshot>
        receive [-vnF] -d <filesystem>

        allow <filesystem|volume>
        allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...]
            <filesystem|volume>
        allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
        allow -c <perm|@setname>[,...] <filesystem|volume>
        allow -s @setname <perm|@setname>[,...] <filesystem|volume>

        unallow [-rldug] <"everyone"|user|group>[,...]
            [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
        unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>

        hold [-r] <tag> <snapshot> ...
        holds [-r] <snapshot> ...
        release [-r] <tag> <snapshot> ...

Each dataset is of the form: pool/[dataset/]*dataset[@name]

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
 
Joined
Oct 22, 2019
Messages
3,641
Last thing from me, maybe, since I don't have a SCALE system to test with:

Code:
/usr/local/bin/zfs --version

or
Code:
/usr/local/sbin/zfs --version
 

cyclerider

Dabbler
Joined
Nov 7, 2013
Messages
26
Code:
root@truenas:/home/admin# /usr/local/bin/zfs --version
bash: /usr/local/bin/zfs: No such file or directory
root@truenas:/home/admin# /usr/local/sbin/zfs --version
bash: /usr/local/sbin/zfs: No such file or directory
root@truenas:/home/admin#


I remember having to do something with zdb in the past for a certain zfs command, because TrueNAS keeps the 'database' of zpools somewhere that isn't the normal place, but I can't remember what it was. I didn't have to do that on 22.12.2, though, so I'm not sure what changed from .2 to .3.
 
Joined
Oct 22, 2019
Messages
3,641
because TrueNAS keeps the 'database' of zpools somewhere that isn't the normal place
You mean the zpool.cache file? Yes, that's located in a non-standard place with TrueNAS, under /data/zfs/zpool.cache, but it shouldn't affect "zfs" and "zpool" commands. (It affects "zdb" commands.)
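For zdb you'd normally have to point it at that cache file yourself, something like:
Code:
zdb -U /data/zfs/zpool.cache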

@morganL is there something in the SCALE 22.12.3.3 update that might cause the behavior seen with the OP's "zfs" commands?
 
Last edited:

cyclerider

Dabbler
Joined
Nov 7, 2013
Messages
26
This might be related: I just went to run a backup to a USB drive that I have formatted with ZFS, and it told me it cannot import the pool because it was formatted with a newer ZFS version. But the USB drive's pool was last upgraded while on 22.12.2, so I don't know how it would have a newer version of ZFS.
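In case it helps with debugging, I believe zpool upgrade -v lists the pool versions and feature flags the running system supports, so that could be compared against whatever the USB pool was upgraded to:
Code:
zpool upgrade -v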
 