ZFS version clash to be aware of

dj423 (Dabbler) · Joined Feb 25, 2023 · Messages: 29
Thought I would share this for anyone who may run into a use case involving Linux containers (LXD) with ZFS storage pools. For the past four years or so I have run many web applications and hosted instances in LXD containers backed by ZFS storage pools. They work great, until you move them to another ZFS storage backend.
Here is a snapshot of the environment:
TrueNAS: Core 13
VM OS: Ubuntu 20.04
LXD: Version 5.0.1
Storage driver: zfs
Virtualization stack: XCP-ng 8.2
Connected via: NFS Share

The symptoms: next to zero disk I/O, file operations time out, the filesystem goes read-only, and the containers crash.

The workaround I have found so far is to create another virtual disk (sized to hold all the containers), let LXD consume the new drive with the btrfs driver, then migrate/copy all the containers to the new btrfs storage pool, e.g. 'lxc storage create pool2 btrfs source=/dev/xvdb'.
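Roughly, the steps look something like the following (the container name "web01" and the device path /dev/xvdb are just examples from my setup, adjust to yours):

lxc storage create pool2 btrfs source=/dev/xvdb   # new btrfs pool on the spare virtual disk
lxc stop web01                                    # stop the container before copying it
lxc copy web01 web01-btrfs --storage pool2        # copy the container onto the btrfs pool
lxc start web01-btrfs                             # verify it runs, then delete the old copy

Repeat the copy for each container, and only delete the originals once the new copies are confirmed working.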

I think this issue stems from differing ZFS versions: TrueNAS is on zfs-2.1.9-1, while Ubuntu (in this case) is on 2.1.4-0. I have read that mixing ZFS versions in the same storage stack can cause unpredictable behavior. There may be a way to have LXD connect to ZFS pools on the NAS directly, but I have not figured that out yet. Something to be aware of.
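If you want to compare versions on your own setup, recent OpenZFS builds report it directly from the CLI; run this on both the TrueNAS box and inside the guest:

zfs version   # prints the userland (zfs-x.y.z) and kernel module (zfs-kmod-x.y.z) versions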
 