Best way to make a feature request? LXC + QAT?

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Yes that's correct. The only issue is I could not get QAT acceleration to work on the root ZFS device, since the ZFS modules load before the QAT module is loaded. But I reload the module after booting and everything works as expected.
Which module did you actually reload, the ZFS or the QAT one? I'd expect ZFS?

There is an easier, less risky option:
Create a cron job with this:
Code:
# Re-enable the QAT offloads for ZFS after boot (0 = enabled)
echo 0 > /sys/module/zfs/parameters/zfs_qat_compress_disable
echo 0 > /sys/module/zfs/parameters/zfs_qat_encrypt_disable
echo 0 > /sys/module/zfs/parameters/zfs_qat_checksum_disable
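For example, a minimal sketch of an /etc/cron.d entry that applies this once at boot; the file name and the sleep delay are assumptions, so adjust them to however long your QAT driver takes to load:
Code:
# /etc/cron.d/zfs-qat (sketch): re-enable the ZFS QAT offloads once at boot,
# after giving the QAT module some time to load.
@reboot root sh -c 'sleep 60; for p in compress encrypt checksum; do echo 0 > /sys/module/zfs/parameters/zfs_qat_${p}_disable; done'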



I looked into the UX side of things:
- We do not need any UI changes for QAT support

Technically, if at least the ZFS module is built with QAT support, we could quite easily maintain a community script that adds the QAT drivers/libs and puts the above code into a cronjob.
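Such a script could first verify that the running ZFS module was actually built with QAT support. A minimal sketch, assuming the zfs_qat_* parameters are only exposed on QAT-enabled builds:
Code:
# Sketch: bail out early if this ZFS build has no QAT support.
if [ ! -e /sys/module/zfs/parameters/zfs_qat_compress_disable ]; then
    echo "ZFS was not built with QAT support" >&2
    exit 1
fi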

*edit*
That being said, it looks like QAT compression can be considered unstable or legacy, as the new drivers don't even support it at release.

It gets worse: it seems Intel (in all their wisdom™) has decided the new QAT library is not going to be backwards compatible, which basically means it isn't very mergeable, to be honest.
 
Last edited:

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Update:

I've ended my efforts to test and implement QAT on SCALE.

Here are my highlights:
- Intel just released "new" QAT drivers for the older hardware, which for some inexplicable reason still do not include even the basic changes needed for kernel 5.6, 5.7, or 5.8 support.
- Intel also decided to create new drivers for every hardware release (which share about 70% of the same code, so totally needless duplication), which means we can only support one hardware version (because we can only build ZFS against one QAT driver).
- In the newest drivers, Intel seems to have added even more changes that would break support on newer kernels.
- It is possible to patch the drivers for kernel 5.6, 5.7, and 5.8 support, but that would mean iX has to maintain Intel's driver package, which is a HUGE support burden.

From the above highlights I conclude the following:
Implementing QAT would mean that iX has to pick one hardware revision to support, and maintain the QAT drivers and library themselves.
This is not efficient, not sane, and just not a good idea at all.
 

ajgnet

Explorer
Joined
Jun 16, 2020
Messages
65
A quick update:

OpenZFS 2.0.0-rc3 includes QAT support for newer kernels:
 

FosCo

Dabbler
Joined
Sep 20, 2020
Messages
23
Sad news, as I've started to love SCALE but also bought an Atom C3758-based board to use QAT. Thanks for the effort, ornias; if there is anything to test, I'll be happy to help!
 

rudds

Dabbler
Joined
Apr 17, 2018
Messages
34
LXC is built into SCALE, but is not necessarily exposed to users. The goal is just reliable Docker and Kubernetes capability. Is there a specific capability you see as a "must have"?

Could you expand on this comment, particularly on how much freedom the user will have to use LXC from the command line, and whether it will be "safe" to do so in the initial SCALE release? I'm especially curious whether LXC configs and containers would persist across SCALE updates and be left untouched during an upgrade.

I need a full system container analogous to BSD jails. I don't need any GUI features and am happy to manage everything in a shell (as I already do with jails in 11.3), so if that's possible and safe, I should be satisfied. Obviously, though, in current FreeNAS there are many cases where tinkering with underlying host features not exposed in the GUI is heavily discouraged, and those changes are wiped out during an update, so I'm curious whether LXC will be similar when SCALE first launches.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Currently:
SCALE is mostly Debian, with a GUI layer and automation for all features integrated into SCALE.
Features that are not related to SCALE are not going to get interference from SCALE.

I've not had issues with the persistence of most non-SCALE-related config files yet.
Maybe @Kris Moore or @morganL want to make a shortlist of which system paths are and are not persistent across updates?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Could you expand on this comment, particularly on how much freedom the user will have to use LXC from the command line, and whether it will be "safe" to do so in the initial SCALE release? I'm especially curious whether LXC configs and containers would persist across SCALE updates and be left untouched during an upgrade.

I need a full system container analogous to BSD jails. I don't need any GUI features and am happy to manage everything in a shell (as I already do with jails in 11.3), so if that's possible and safe, I should be satisfied. Obviously, though, in current FreeNAS there are many cases where tinkering with underlying host features not exposed in the GUI is heavily discouraged, and those changes are wiped out during an update, so I'm curious whether LXC will be similar when SCALE first launches.

Rudds, as Ornias indicated, the issues come when TrueNAS SCALE and third-party tools conflict. I think we are better off turning a needed use case into a feature request. We can then respond positively about how it will or won't be supported.

For example, if LXC functionality is needed, it might be that we are best doing it through the Kubernetes tools: https://discuss.kubernetes.io/t/lxe-released-a-kubernetes-integration-of-lxc-lxd/3022

Or maybe it's better if a user can stop SCALE's Kubernetes from running and then take control of the OS container features, including LXC?

I'd suggest trying the SCALE Kubernetes implementation (or disabling it) and then documenting what is needed for your specific use case. We do not yet know all the things that will be requested and implemented. We are still in learning mode.
 

Jip-Hop

Contributor
Joined
Apr 13, 2021
Messages
112
No LXC on SCALE yet, but perhaps systemd-nspawn could be a viable alternative. It's already included with SCALE (although not officially supported, AFAIK). I'm experimenting with it to create fully persistent Debian 'jails' with full access to all files on the NAS via bind mounts. No modifications to the host OS required!
Thanks, this is helpful. I need LXC to have multiple "VMs" access a shared Nvidia card.
I think this would be possible with systemd-nspawn, although you'd have to ensure the Nvidia driver in the systemd-nspawn container matches the version the TrueNAS SCALE host is using.
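For anyone who wants to experiment, here is a minimal sketch of bootstrapping such a 'jail'. All paths are examples, and it assumes debootstrap is available on the host:
Code:
# Sketch: create a Debian container on a dataset and boot it,
# with a bind mount into the NAS pool (paths are examples).
debootstrap stable /mnt/tank/jails/deb1 http://deb.debian.org/debian
systemd-nspawn -D /mnt/tank/jails/deb1 --bind=/mnt/tank/data:/data -b

Sharing a GPU would additionally require binding the relevant /dev/nvidia* device nodes into the container and, as noted above, keeping the driver versions in sync.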
 