USB passthrough VM no longer works after upgrade to TrueNAS SCALE 22.12.3.3

Bobka

Cadet
Joined
Jan 6, 2023
Messages
9
I need help adding USB passthrough to my Windows VM.

I have been running a Windows VM with USB passthrough via a PCI device for about a year. In the older version of TrueNAS SCALE I used the PCI passthrough device, picked my controller, and everything worked flawlessly. I recently updated to TrueNAS SCALE 22.12.3.3 and my VM image disappeared, so I reinstalled the Windows VM. But then I started running into this error:

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/supervisor/supervisor.py", line 172, in start
    if self.domain.create() < 0:
  File "/usr/lib/python3/dist-packages/libvirt.py", line 1353, in create
    raise libvirtError('virDomainCreate() failed')
libvirt.libvirtError: internal error: qemu unexpectedly closed the monitor: 2023-08-30T05:36:29.717177Z qemu-system-x86_64: -device vfio-pci,host=0000:00:14.0,id=hostdev0,bus=pci.0,addr=0x7: vfio 0000:00:14.0: group 16 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 204, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1344, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1378, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1246, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_lifecycle.py", line 46, in start
    await self.middleware.run_in_thread(self._start, vm['name'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1261, in run_in_thread
    return await self.run_in_executor(self.thread_pool_executor, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1258, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_supervisor.py", line 68, in _start
    self.vms[vm_name].start(vm_data=self._vm_from_name(vm_name))
  File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/supervisor/supervisor.py", line 181, in start
    raise CallError('\n'.join(errors))
middlewared.service_exception.CallError: [EFAULT] internal error: qemu unexpectedly closed the monitor: 2023-08-30T05:36:29.717177Z qemu-system-x86_64: -device vfio-pci,host=0000:00:14.0,id=hostdev0,bus=pci.0,addr=0x7: vfio 0000:00:14.0: group 16 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver.

I have created another VM instance and added USB passthrough via a PCI passthrough device, but I get the same error. If I remove the PCI passthrough, the VM starts up just fine.
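For anyone trying to diagnose this: as far as I understand, the "group 16 is not viable" part of the error means the xHCI controller shares IOMMU group 16 with other devices that are not bound to vfio-pci. This sketch lists the group members and their current drivers (assuming the standard Linux sysfs layout; the group number 16 is taken from the error message):

```shell
# List every device in IOMMU group 16 (the group named in the error)
# and the kernel driver each one is currently bound to.  For PCI
# passthrough, all of them must be bound to vfio-pci or to nothing.
GROUP=/sys/kernel/iommu_groups/16/devices
if [ -d "$GROUP" ]; then
    for dev in "$GROUP"/*; do
        if [ -e "$dev/driver" ]; then
            drv=$(basename "$(readlink "$dev/driver")")
        else
            drv="(no driver bound)"
        fi
        echo "$(basename "$dev") -> $drv"
    done
else
    echo "IOMMU group 16 not found on this host"
fi
```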

I noticed there is a new "USB Passthrough Device" option in addition to "PCI Passthrough Device". When I tried configuring the USB passthrough option, I ran into the following issues:
1. I don't know what Controller type I have, so I tried selecting each option.
2. I don't know what Device I have, because for every option I pick under Controller type I only get the "Specify custom" option under Device.
3. For the "Specify custom" option I have to type a Vendor ID and Product ID. Where do I get these? I tried typing "lsusb" into the shell and got "zsh: command not found: lsusb".
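Since lsusb is missing, one workaround that should give the same Vendor ID / Product ID pairs is reading them straight out of sysfs. This assumes the standard Linux /sys/bus/usb layout, which TrueNAS SCALE as a Debian-based system should have:

```shell
# Print vendor:product IDs and names for every connected USB device,
# without needing lsusb.  Interface entries (e.g. 1-1:1.0) have no
# idVendor file, so the guard skips them.
for dev in /sys/bus/usb/devices/*; do
    [ -f "$dev/idVendor" ] || continue
    vid=$(cat "$dev/idVendor")
    pid=$(cat "$dev/idProduct")
    name=$(cat "$dev/product" 2>/dev/null)
    echo "$vid:$pid  ${name:-unknown device}"
done
```

The first column is what the "Specify custom" fields seem to want: the four-hex-digit Vendor ID and Product ID.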

In summary:
1. I don't know why the PCI Passthrough Device no longer works and I get the error above.
2. The new USB passthrough option doesn't give me device options, and I don't know the Controller type, Vendor ID, or Product ID.

Any help with this would be highly appreciated.

I am running:
- SUPERMICRO 6049P-ECR36H
- 00:14.0 USB controller: Intel Corporation C620 Series Chipset Family USB 3.0 xHCI Controller (rev 09)
 

Attachments

  • 1693399024994.png (287.3 KB)
  • 1693399040762.png (27.6 KB)

Bobka
Here is an update on the USB passthrough issue, after weeks of trying to get rid of this error when starting the VM:

"[EFAULT] internal error: process exited while connecting to monitor: 2023-09-27T05:58:49.295927Z qemu-system-x86_64: -device vfio-pci,host=0000:00:14.0,id=hostdev0,bus=pci.0,addr=0x7: vfio 0000:00:14.0: group 16 is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver."

I rebooted TrueNAS and everything started working just fine. I did nothing other than plug the USB dongle into the server. My logic was that USB passthrough was not working because the USB dongle was not plugged in, so once it was plugged in and the server rebooted, everything worked great. I even rebooted the server multiple times and everything was solid.

Well, today I was checking on the system and the VM was off, and when I try to start it I get the exact same error. I tried rebooting the server, but the VM does not autostart, and when I try to start it manually I get the error message listed above. To me it seems like something else is taking over that device and my USB dongle stops working, but this is just my guess.
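To check the guess that something else is grabbing the controller, a sketch like this (standard sysfs layout assumed; the PCI address is the one from the error message) prints which driver currently owns 0000:00:14.0. If passthrough is set up it should say vfio-pci; xhci_hcd would suggest the host has reclaimed the controller for itself:

```shell
# Show the kernel driver currently bound to the USB controller that is
# being passed through to the VM.
DEV=/sys/bus/pci/devices/0000:00:14.0
if [ -e "$DEV/driver" ]; then
    echo "driver: $(basename "$(readlink "$DEV/driver")")"
else
    echo "device not present or no driver bound"
fi
```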

I truly need some help. Any help is very much appreciated.

Thank you.
 

Bobka
I went through the logs for the VM. The issues started happening after I updated the NextCloud app that is running on the server: I updated NextCloud, rebooted, and then the VM with passthrough stopped working. Below are two logs; the first, from before the update, shows no issues with XXX, and the second is from when the error started appearing:
<<<<
2023-08-29T17:34:20.440438Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/mnt/RAID6_18X18TB/FTP_for_clients/testUser/Win10_22H2_EnglishInternational_x6411.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/mnt/RAID6_18X18TB/FTP_for_clients/testUser/Win10_22H2_EnglishInternational_x6411.iso': Permission denied
2023-08-29 17:34:20.493+0000: shutting down, reason=failed
2023-08-29 17:40:01.368+0000: starting up libvirt version: 7.0.0, package: 3 (Andrea Bolognani <eof@kiyuko.org> Fri, 26 Feb 2021 16:46:34 +0100), qemu version: 5.2.0Debian 1:5.2+dfsg-11+deb11u2, kernel: 5.15.107+truenas, hostname: localhost
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
HOME=/var/lib/libvirt/qemu/domain-5-1_License_Server \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-5-1_License_Server/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-5-1_License_Server/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-5-1_License_Server/.config \
QEMU_AUDIO_DRV=none \
/usr/bin/qemu-system-x86_64 \
-name guest=1_License_Server,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-1_License_Server/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/1_License_Server_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-i440fx-5.2,accel=kvm,usb=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,memory-backend=pc.ram \
-cpu qemu64 \
-m 3072 \
-object memory-backend-ram,id=pc.ram,size=3221225472 \
-overcommit mem-lock=on \
-smp 2,sockets=2,dies=1,cores=1,threads=1 \
-uuid a2d45d33-91fa-4318-acac-43b3e8fd417d \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=38,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-shutdown \
-boot strict=on \
-device nec-usb-xhci,id=usb,bus=pci.0,addr=0x4 \
-device ahci,id=sata0,bus=pci.0,addr=0x5 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 \
-blockdev '{"driver":"file","filename":"/mnt/RAID6_18X18TB/FTP_for_clients/testUser/Win10_22H2_EnglishInternational_x6411.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-cd,bus=sata0.0,drive=libvirt-2-format,id=sata0-0-0,bootindex=1 \
-blockdev '{"driver":"host_device","filename":"/dev/zvol/RAID6_18X18TB/VM/License_Server-oe8zzq","aio":"threads","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-hd,bus=sata0.1,drive=libvirt-1-format,id=sata0-0-1,bootindex=2,write-cache=on \
-netdev tap,fd=43,id=hostnet0 \
-device e1000,netdev=hostnet0,id=net0,mac=00:a0:98:51:a1:51,bus=pci.0,addr=0x3 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=44,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-vnc 0.0.0.0:0 \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,xres=1024,yres=768,bus=pci.0,addr=0x2 \
-device vfio-pci,host=0000:00:14.0,id=hostdev0,bus=pci.0,addr=0x7 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/2 (label charserial0)
>>>>
When things broke:
<<<<
2023-08-29T17:40:01.543268Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/mnt/RAID6_18X18TB/FTP_for_clients/testUser/Win10_22H2_EnglishInternational_x6411.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/mnt/RAID6_18X18TB/FTP_for_clients/testUser/Win10_22H2_EnglishInternational_x6411.iso': Permission denied
2023-08-29 17:40:01.593+0000: shutting down, reason=failed
2023-08-29 17:44:49.924+0000: starting up libvirt version: 7.0.0, package: 3 (Andrea Bolognani <eof@kiyuko.org> Fri, 26 Feb 2021 16:46:34 +0100), qemu version: 5.2.0Debian 1:5.2+dfsg-11+deb11u2, kernel: 5.15.107+truenas, hostname: localhost
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
HOME=/var/lib/libvirt/qemu/domain-6-1_License_Server \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-6-1_License_Server/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-6-1_License_Server/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-6-1_License_Server/.config \
QEMU_AUDIO_DRV=none \
/usr/bin/qemu-system-x86_64 \
-name guest=1_License_Server,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-1_License_Server/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/1_License_Server_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-i440fx-5.2,accel=kvm,usb=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,memory-backend=pc.ram \
-cpu qemu64 \
-m 3072 \
-object memory-backend-ram,id=pc.ram,size=3221225472 \
-overcommit mem-lock=on \
-smp 2,sockets=2,dies=1,cores=1,threads=1 \
-uuid a2d45d33-91fa-4318-acac-43b3e8fd417d \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=38,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-shutdown \
-boot strict=on \
-device nec-usb-xhci,id=usb,bus=pci.0,addr=0x4 \
-device ahci,id=sata0,bus=pci.0,addr=0x5 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 \
-blockdev '{"driver":"file","filename":"/mnt/RAID6_18X18TB/FTP_for_clients/testUser/Win10_22H2_EnglishInternational_x6411.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-cd,bus=sata0.0,drive=libvirt-2-format,id=sata0-0-0,bootindex=1 \
-blockdev '{"driver":"host_device","filename":"/dev/zvol/RAID6_18X18TB/VM/License_Server-oe8zzq","aio":"threads","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-hd,bus=sata0.1,drive=libvirt-1-format,id=sata0-0-1,bootindex=2,write-cache=on \
-netdev tap,fd=43,id=hostnet0 \
-device e1000,netdev=hostnet0,id=net0,mac=00:a0:98:51:a1:51,bus=pci.0,addr=0x3 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=44,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-vnc 0.0.0.0:0 \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,xres=1024,yres=768,bus=pci.0,addr=0x2 \
-device vfio-pci,host=0000:00:14.0,id=hostdev0,bus=pci.0,addr=0x7 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/2 (label charserial0)
2023-08-29T17:44:51.110810Z qemu-system-x86_64: -device vfio-pci,host=0000:00:14.0,id=hostdev0,bus=pci.0,addr=0x7: vfio 0000:00:14.0: group 16 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver.
2023-08-29 17:44:51.370+0000: shutting down, reason=failed

>>>>