Hello,
I managed to get it to work with a somewhat hacky solution using the alpha version of Cobia and decided to share for anyone who is interested and wants to use it before the official release.
The Cobia nightly builds have recently been updated to use Linux kernel 6.1, which has support for 13th gen, so you should be able to use the iGPU for apps as well (though I haven't tested that, since I want to use it in a VM). You can find nightly builds here -
TrueNAS Scale Cobia nightly builds.
I tried some builds from around 25 May, but they had issues with creating the initial user for Web GUI login, so I began trying older ones at random and settled on the first one that worked for me: TrueNAS-SCALE-23.10-MASTER-20230509-024310.iso.
From there you need to apply two or three "fixes", depending on your situation. I have an ASUS W680 IPMI board, so the IPMI was registering as a second GPU and I didn't hit the errors about the system needing at least one GPU (there is another forum thread about those errors that should contain relevant information if that is your case, but I won't cover it here, since I haven't needed it).
First you need to edit the Python middleware that TrueNAS uses so that the iGPU becomes isolatable. There is a check that marks GPUs and other PCI devices as consuming critical resources, and devices marked that way are not allowed to be isolated, but the check itself doesn't seem very logical to me. It tests whether the surrounding PCI devices (children of the parent of the device being validated) have certain keywords in their names, but in fact a PCI device could be in a separate IOMMU group (regardless of the keywords in its name), which would make it safe to use; checking IOMMU groups seems like a more reasonable approach. I didn't have time to go through all the code to implement the correct check, so I simply made it mark all devices as not system critical (the check doesn't seem to be used in many places and shouldn't cause problems if used responsibly).
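To illustrate what a check based on IOMMU groups could look like, here is a minimal sketch that reads the group layout the kernel exposes under /sys/kernel/iommu_groups. This is my own illustration, not code from the middleware; the function names and parameters are made up.

```python
import os

def iommu_group_of(pci_slot, sysfs_root='/sys/kernel/iommu_groups'):
    """Return the IOMMU group (as a string) containing pci_slot, or None.

    The kernel lists each group as a directory whose 'devices' subdirectory
    contains one entry per PCI address in that group.
    """
    for group in os.listdir(sysfs_root):
        devices_dir = os.path.join(sysfs_root, group, 'devices')
        if pci_slot in os.listdir(devices_dir):
            return group
    return None

def shares_group(dev_a, dev_b, sysfs_root='/sys/kernel/iommu_groups'):
    """True only if both devices sit in the same IOMMU group."""
    a = iommu_group_of(dev_a, sysfs_root)
    return a is not None and a == iommu_group_of(dev_b, sysfs_root)
```

A device that sits alone in its own group (or shares a group only with devices you also pass through) can be isolated safely, which is the property the name-keyword check only approximates.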
Generally the commands, in order, are: edit the file, delete the Python cache, restart the middlewared service:
sudo nano /usr/lib/python3/dist-packages/middlewared/utils/gpu.py
sudo rm /usr/lib/python3/dist-packages/middlewared/utils/__pycache__/gpu.cpython-311.pyc
sudo service middlewared restart
In the gpu.py file, around line 60, after the for loop, you should add:
critical = False
which marks the device as not critical and allows isolation (alternatively you could comment out the for loop, since the variable is initialized to False).
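For context, the pattern being patched looks roughly like this. This is a simplified illustration, not the actual middleware code; the keyword values and function name are made up, only SENSITIVE_PCI_DEVICE_TYPES appears in the real source.

```python
SENSITIVE_PCI_DEVICE_TYPES = ('USB controller', 'SATA controller')  # illustrative values

def device_is_critical(sibling_names):
    # The original loop flags the device if any sibling's name contains
    # one of the sensitive keywords.
    critical = False
    for name in sibling_names:
        if any(k.lower() in name.lower() for k in SENSITIVE_PCI_DEVICE_TYPES):
            critical = True
    critical = False  # the workaround: unconditionally mark as not critical
    return critical

print(device_is_critical(['Intel USB controller']))  # False after the override
```

With the override in place every device passes the isolation check, which is why this should only be used if you know the device you are isolating is actually safe to pass through.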
The second "fix" is needed because the code that generates the list of available PCI devices for a VM appears to have been changed and is currently broken: one of the values in the returned map is a generator instead of a bool, so Python cannot serialize it into a JSON response for the frontend.
sudo nano /usr/lib/python3/dist-packages/middlewared/plugins/vm/pci.py
sudo rm /usr/lib/python3/dist-packages/middlewared/plugins/vm/__pycache__/pci.cpython-311.pyc
sudo service middlewared restart
Around line 100 we want to add a call to any() so that the generator is evaluated to a bool:
Before:
'critical': (k.lower() in controller_type.lower() for k in SENSITIVE_PCI_DEVICE_TYPES),
After:
'critical': any(k.lower() in controller_type.lower() for k in SENSITIVE_PCI_DEVICE_TYPES),
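To see why the one-word change matters, here is a small self-contained demonstration (the keyword values here are made up for the example):

```python
import json

SENSITIVE_PCI_DEVICE_TYPES = ('USB', 'SATA')  # illustrative values
controller_type = 'VGA compatible controller'

# Before: the parenthesized expression is a generator object,
# which the json module refuses to serialize.
gen = (k.lower() in controller_type.lower() for k in SENSITIVE_PCI_DEVICE_TYPES)
try:
    json.dumps({'critical': gen})
except TypeError as exc:
    print('serialization fails:', exc)

# After: any() consumes the generator and returns a plain bool,
# which serializes without issues.
critical = any(k.lower() in controller_type.lower() for k in SENSITIVE_PCI_DEVICE_TYPES)
print(json.dumps({'critical': critical}))  # {"critical": false}
```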
After that, and possibly a restart of the server, you should be able to isolate the iGPU and create a VM. Don't add the iGPU to the VM as a GPU; add it as a PCI device instead, and everything should work.
I have tested this with an Ubuntu 22 image and installed Jellyfin on it to validate that QuickSync is working for transcoding.