Multicast subscription makes unrelated bhyve consume excessive CPU

aix

Dabbler
Joined
Oct 24, 2022
Messages
13
I have a TrueNAS-13.0-U3 box with a jail and a VM.
  • The jail runs udpxy that periodically joins a multicast group (an MPEG transport stream carrying one TV channel).
  • The VM runs Debian and is currently idle essentially 100% of the time.
When the jail joins the multicast group, the VM's bhyve process starts consuming ~10% of the server's i5-8259U CPU. This seems grossly excessive, especially given that the multicast traffic has nothing to do with the VM.

When the jail leaves the group, bhyve goes back to 0% CPU.

The jail and the VM have their own IP addresses (IPv4 via DHCP, no IPv6) but are backed by the same physical interface (the box only has a single NIC).

I've been trying to get to the bottom of this for some time, but haven't been able to make a lot of progress. I'd appreciate suggestions for how best to troubleshoot this.

Happy to provide more details about my exact setup -- just let me know what would be useful. Thanks in advance!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
When the jail joins the multicast group, the VM's bhyve process starts consuming ~10% of the server's i5-8259U CPU. This seems grossly excessive, especially given that the multicast traffic has nothing to do with the VM.

When multicast is running on physical Ethernet hardware, there is hardware assist and optimization for picking out multicast frames and handing them off to the host network stack.

There is almost certainly some emulation inefficiency going on, quite possibly because multicast is relatively unusual. I don't know anything about "udpxy", but your mention of an MPEG transport stream carrying a television channel suggests that there could be a lot of packets being thrown around. bhyve is a relatively young hypervisor, and the combination of bhyve plus pieces of the BSD networking stack such as the Ethernet bridge may simply not be optimized for your use case.
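As an aside on how that hardware assist works: an IPv4 group address maps to a fixed Ethernet multicast MAC (the prefix 01:00:5e plus the low 23 bits of the address), and the NIC filters frames on that MAC so the host stack only sees groups it has joined. A small sketch of the mapping (the group 239.1.2.3 is just an example, not necessarily the poster's stream):

```shell
# Derive the Ethernet multicast MAC for an IPv4 group address:
# fixed prefix 01:00:5e, then the low 23 bits of the IP address.
ip_to_mcast_mac() {
  # Split the dotted quad into positional parameters.
  oldIFS=$IFS; IFS=.; set -- $1; IFS=$oldIFS
  # Clear the top bit of the second octet to keep only 23 bits.
  printf '01:00:5e:%02x:%02x:%02x\n' $(($2 & 127)) "$3" "$4"
}

ip_to_mcast_mac 239.1.2.3   # -> 01:00:5e:01:02:03
```

Once frames pass that filter and hit a software bridge, any member port without its own filtering can see them, which is where the emulated path starts paying the cost.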
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Here's a detail that's likely to be relevant: Are you using VNET in the jails? What happens if you flip the switch (it's... more complicated than that, since you'd need to redo the network setup for the jails)?
The jail and the VM have their own IP addresses (IPV4 DHCP, no IPV6)
This kind of suggests that you are, but it could be hacked together on the host's side of things.
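For reference, on TrueNAS CORE the jail's networking properties can be checked from the host shell with iocage. This is a sketch; "udpxy" is assumed to be the jail's name (taken from the interface description in the ifconfig output later in the thread):

```shell
# Does the jail use VNET (its own virtualized network stack)?
iocage get vnet udpxy

# How its epair interfaces are wired to the host bridge.
iocage get interfaces udpxy

# Whether it gets its address via DHCP.
iocage get dhcp udpxy
```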
 

aix

Dabbler
Joined
Oct 24, 2022
Messages
13
Thanks for the suggestions/question. Here's the requested info.
  • The jail is set up with VNET, BPF and DHCP4.
  • The VM is using a VirtIO NIC device.
On the host this looks like this:

em0: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4810099<RXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,VLAN_HWFILTER,NOMAP>
ether xx:xx:xx:xx:fd:91
inet x.x.x.x netmask 0xffffff00 broadcast x.x.x.x
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
nd6 options=9<PERFORMNUD,IFDISABLED>
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether xx:xx:xx:xx:fe:41
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
member: vnet3 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 5 priority 128 path cost 2000000
member: vnet0.5 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 10 priority 128 path cost 2000
groups: bridge
nd6 options=9<PERFORMNUD,IFDISABLED>
vnet0.5: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
description: associated with jail: udpxy as nic: epair0b
options=8<VLAN_MTU>
ether xx:xx:xx:xx:f0:04
hwaddr xx:xx:xx:xx:64:0a
groups: epair
media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
status: active
nd6 options=9<PERFORMNUD,IFDISABLED>
vnet3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=80000<LINKSTATE>
ether xx:xx:xx:xx:78:24
hwaddr xx:xx:xx:xx:27:28
groups: tap
media: Ethernet autoselect
status: active
nd6 options=9<PERFORMNUD,IFDISABLED>
Opened by PID 6194


There are a bunch of other vnet interfaces, all connected to bridge0, which I'm omitting for brevity as I don't think they play a role in this.

your mention of an MPEG transport stream with a television channel suggests that there could be a lot of packets being thrown around

I just measured, and multicast rates are on the order of 1500 packets/second.
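For anyone wanting to reproduce the measurement: one way to watch per-interface packet rates on FreeBSD is netstat in interval mode (interface names here are the ones from the ifconfig output above):

```shell
# Input/output packets per second on the physical NIC, sampled every second.
netstat -I em0 -w 1

# Same on the VM's tap device, to confirm whether the stream
# is actually being flooded toward bhyve.
netstat -I vnet3 -w 1
```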

I'm wondering:
  1. if there's a way to get an insight into where bhyve is spending its time?
    1. perhaps it has some configuration parameters that could help.
  2. if there's a way to stop the multicast traffic ever reaching bhyve?
    1. moving udpxy to a separate box is possible but undesirable.
    2. moving udpxy onto its own physical interface is not feasible (the box is a NUC and has no free ports other than USB and Thunderbolt).
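On question 1, one possible approach (a sketch, not something confirmed to work on this exact box) is to sample the bhyve process's user-land stacks with DTrace; on question 2, one candidate is layer-2 filtering with ipfw on the bridge, which requires enabling bridge ipfw processing first. The PID 6194 is the one from the ifconfig output above; the rule number is arbitrary:

```shell
# 1. Sample bhyve's user stacks for 10 seconds to see where CPU time goes.
dtrace -x ustackframes=100 \
  -n 'profile-199 /pid == 6194/ { @[ustack()] = count(); } tick-10s { exit(0); }' \
  -o bhyve.stacks

# 2. Let ipfw see bridged (layer-2) traffic, then drop multicast UDP
#    before it leaves the bridge toward the VM's tap device.
sysctl net.link.bridge.ipfw=1
ipfw add 100 deny udp from any to 224.0.0.0/4 out via vnet3
```

Both need root on the host, and the ipfw rule would affect any multicast the VM might legitimately want, so treat it as an experiment rather than a permanent fix.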
 

aix

Dabbler
Joined
Oct 24, 2022
Messages
13
(Please pardon the formatting. I didn't discover the preview function until after hitting Post, and it doesn't look like there's a way to edit my post?)
 