SR-IOV on 21.04?

sampah

Cadet
Joined
May 29, 2021
Messages
4
Looking to switch from Proxmox. I need SR-IOV functionality (running an Intel i350) so that my different VMs can be tagged to different VLANs (DMZ, internal, etc.) without significant loss of performance. I see from the development notes that GPU passthrough is supported. Is there similar support for SR-IOV? The way I see it there are 4 levels:
  1. Officially supported through the GUI. I just declare somewhere that I want the i350 NIC split into X VFs (say 2). Then (possibly after a reboot) I can select the VFs to attach to my VM (under the hood the middleware will probably be using sysfs).
  2. Supported through the CLI. Like 1., but with manual commands thrown in (which is how GPU passthrough support appears to work).
  3. Unsupported but functional. E.g. sysfsutils is available, I can configure via /etc/sysfs.conf, and it persists outside of middleware management (a rough sketch of what I mean is at the end of this post).
  4. Unsupported and broken. If I attempt to configure it manually via /etc/sysfs.conf, the middleware will overwrite it upon reboot and destroy my work.
I'm guessing we're currently at 4., but I'm hoping for at least 3. If it is at 4. or 3., are there any plans to support such functionality? I get that SR-IOV, like GPU passthrough, breaks hyper-convergence and hyper-scaling (since VMs are no longer ephemeral and transferable across nodes), so maybe the large-scale business case is weak (it's mostly relevant to the enthusiast space), and perhaps iXsystems' position is "this is not profitable for us to introduce or maintain" - if that's the case, let me know and I'll pass on TrueNAS SCALE.
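
For concreteness, the kind of manual setup I have in mind for level 3 is roughly the following (illustrative only - the interface name and VF count are placeholders, and I don't know whether SCALE's middleware would leave any of it alone):

One-shot, as root:
# echo 2 > /sys/class/net/eno1/device/sriov_numvfs

Persistent via sysfsutils, as a line in /etc/sysfs.conf:
class/net/eno1/device/sriov_numvfs = 2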
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
You can still do this with normal networking; the performance loss is minimal.

Anyhow: it's either option 3 or 4 (depending on whether it gets overridden at every reboot or just at every update), and it won't be added in the first release AFAIK. Maybe the one after, but we're talking 2022 in that case.

I see a lot of enterprise use cases for things like SR-IOV, but avoiding the performance loss of plain VLANs isn't one of them, unless we're talking about 100G+ NICs, of course.
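
For what it's worth, the "normal networking" approach is just a VLAN interface plus a bridge that the VM's virtio NIC attaches to. SCALE's GUI manages this through its own interface configuration, but as a rough sketch of the underlying mechanism (generic iproute2 on any Linux box; interface name and VLAN ID are placeholders):

# VLAN 20 (e.g. your DMZ) on the physical NIC, attached to a bridge the VM uses
ip link add link eno1 name eno1.20 type vlan id 20
ip link add name br20 type bridge
ip link set eno1.20 master br20
ip link set eno1.20 up
ip link set br20 up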
 

sampah

Cadet
Joined
May 29, 2021
Messages
4
Hi, thanks for the response. Do you have benchmark numbers to substantiate that there isn't significant overhead? Numbers here indicate that for a 10GbE NIC the performance loss is 10Gb -> 3.4Gb, and that's with 1 vNIC. The performance loss will likely be even higher if, e.g., one has 3-4 vNICs off a single NIC.

I don't suppose TrueNAS Scale supports tweaking VirtIO options for vNIC performance optimization either. In any case, thank you for your response.
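
To be concrete about "VirtIO options": I mean things like enabling multiqueue on the vNIC. In plain libvirt that would look roughly like the fragment below; this is purely illustrative of what I'd want to tune, and I have no idea what SCALE's middleware actually generates (bridge name and queue count are placeholders).

<interface type='bridge'>
  <source bridge='br20'/>
  <model type='virtio'/>
  <!-- vhost backend with 4 queue pairs (multiqueue) -->
  <driver name='vhost' queues='4'/>
</interface>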
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Numbers here indicate that for a 10GbE NIC the performance loss is 10Gb -> 3.4Gb, and that's with 1 vNIC. The performance loss will likely be even higher if, e.g., one has 3-4 vNICs off a single NIC.
No they do not:
"network performance can also achieve 9.4 Gbps "
" otherwise, poor performance will be 3.6 Gbps. "

If you set it up badly you get 3.6; otherwise you get the "full" 9.4.

I've never had issues maxing out 10Gb virtio network interfaces on SCALE. I get the feeling you just read things on the internet without testing them yourself. That's not very practical.

How about:
- First try it out
- Then report back if something behaves sub-optimally.

Instead of demanding all sorts of changes "under the hood" that might not even be needed, based on some BS from the internet that isn't even related to SCALE?
Did you even check how SCALE creates the virtio interfaces before asking to change them?
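
Checking is easy, since SCALE drives KVM through libvirt under the hood; something along these lines from a shell should do it (the VM name is whatever yours is called, and the exact libvirt connection URI may differ on SCALE):

# list the VMs the middleware created
virsh list --all
# dump the generated domain XML and look at the <interface> / <driver> sections
virsh dumpxml <vm-name>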
 

sampah

Cadet
Joined
May 29, 2021
Messages
4
Hi, thank you for your response to a newcomer to the community. No, I have not investigated how TrueNAS SCALE implements VirtIO under the hood, because I recall from reading up on GPU passthrough that TrueNAS SCALE does not intend for users to tweak anything the middleware provides outside of the middleware.

In other words, if I find that TrueNAS SCALE's implementation of VirtIO is suboptimal, the expectation is that I would not be able to do anything about it. I am not expecting to be able to change VirtIO settings beyond what TrueNAS provides (if there are settings in TrueNAS to customize VirtIO, please let me know).

VirtIO certainly has advantages over traditional NIC emulation (e.g. E1000), but SR-IOV is by design going to have lower overhead. I was merely checking whether such functionality was exposed in TrueNAS, which, as you graciously pointed out in your previous answer, is either 3. or 4. (probably 4.); in any case, any workaround I use is liable to break if TrueNAS changes anything tangentially related on the VM networking side.

Which is fair. I'm not saying TrueNAS has to implement SR-IOV - if it isn't on the roadmap, then it isn't. And if I need SR-IOV then I'll have to go elsewhere. In any case, thank you for engaging with me.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
You are not able to tweak it, because it should already be set up in an optimal fashion out of the box.
 

anthr76

Cadet
Joined
Jun 3, 2021
Messages
1
I would also like to see SR-IOV. Nothing I've tried ever gets me VFs. SCALE seems to enable the right settings in GRUB.
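
For context, the GRUB comment is based on roughly these sanity checks (the PCI address is a placeholder for whatever lspci reports for your NIC):

# confirm the IOMMU flags made it onto the kernel command line
cat /proc/cmdline
# confirm the IOMMU actually initialised
dmesg | grep -i -e DMAR -e IOMMU
# confirm the NIC advertises the SR-IOV capability
lspci -vv -s 02:00.0 | grep -i sr-iov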
 

darkmode

Dabbler
Joined
Aug 17, 2021
Messages
12
I would also like to see SR-IOV. Nothing I've tried ever gets me VFs. SCALE seems to enable the right settings in GRUB.


I'm curious as to what you've tried, since I'm about to embark on installing SCALE on a test server. Did any of your attempts include steps like these (from my existing Linux system)?

Grub:
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"

Command line:
# echo '7' > /sys/class/net/enp2s0f0/device/sriov_numvfs

For persistence between reboots:
# cat /etc/udev/rules.d/enp2s0f0.rules
ACTION=="add", SUBSYSTEM=="net", ENV{ID_NET_DRIVER}=="igb", ATTR{device/sriov_numvfs}="7"
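
And, assuming the VF creation works, this is roughly how I'd verify it afterwards (interface name matching the example above):

# the parent interface should now list vf 0 through vf 6
ip link show enp2s0f0
# and the VFs appear as their own PCI functions
lspci | grep -i "virtual function"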
 