SCALE on Proxmox with LSI passthrough problem

fonze98

Dabbler
Joined
Oct 9, 2021
Messages
23
I am attempting to get a virtualized install of TrueNAS SCALE working on Proxmox with my LSI IT-mode controller passed through, but my SCALE VM never sees the drives on the controller. I am aware this is not a supported install method, but I am also aware that it is possible and wanted to test some things before deploying to my physical install.

When I start the SCALE VM I get the Proxmox splash screen, then two repeated error lines stating `error: terminal 'serial' not found`. It then goes on to the GRUB selection screen, and on the install screen I do not see any of the drives attached to the LSI card, just the LVM drive. There are multiple LSI 3008 chipsets on this board, if that makes a difference. If anyone has any ideas on what I can try to get this working, I am all ears. Below is what I have done so far.

I followed the Proxmox guide at https://pve.proxmox.com/wiki/Pci_passthrough
My current GRUB settings:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

The output of
dmesg | grep -e DMAR -e IOMMU
is
[ 1.337815] pci 0000:c0:00.2: AMD-Vi: IOMMU performance counters supported
[ 1.337835] pci 0000:80:00.2: AMD-Vi: IOMMU performance counters supported
[ 1.337853] pci 0000:40:00.2: AMD-Vi: IOMMU performance counters supported
[ 1.337872] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 1.341284] pci 0000:c0:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 1.341291] pci 0000:80:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 1.341295] pci 0000:40:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 1.341299] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 1.342328] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[ 1.342351] perf/amd_iommu: Detected AMD IOMMU #1 (2 banks, 4 counters/bank).
[ 1.342359] perf/amd_iommu: Detected AMD IOMMU #2 (2 banks, 4 counters/bank).
[ 1.342365] perf/amd_iommu: Detected AMD IOMMU #3 (2 banks, 4 counters/bank).

The contents of /etc/modules are:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
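The wiki also has you refresh the initramfs after adding those modules; for completeness, that step is:

update-initramfs -u -k all   # rebuild the initramfs so the vfio modules load at boot
reboot
lsmod | grep vfio            # confirm the vfio modules are loaded after the reboot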

The output of
dmesg | grep 'remapping'
is
[ 1.341303] AMD-Vi: Interrupt remapping enabled

The output of
find /sys/kernel/iommu_groups/ -type l
looks normal to me:
/sys/kernel/iommu_groups/39/devices/0000:00:14.3
/sys/kernel/iommu_groups/39/devices/0000:00:14.0
/sys/kernel/iommu_groups/29/devices/0000:85:00.0
/sys/kernel/iommu_groups/0/devices/0000:c3:00.0
/sys/kernel/iommu_groups/0/devices/0000:c2:00.0
/sys/kernel/iommu_groups/0/devices/0000:c0:01.0
/sys/kernel/iommu_groups/0/devices/0000:c1:00.0
/sys/kernel/iommu_groups/0/devices/0000:c2:09.0
/sys/kernel/iommu_groups/0/devices/0000:c0:01.1
/sys/kernel/iommu_groups/0/devices/0000:c5:00.0
/sys/kernel/iommu_groups/0/devices/0000:c2:08.0
/sys/kernel/iommu_groups/57/devices/0000:47:00.0
/sys/kernel/iommu_groups/19/devices/0000:80:07.1
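To make that output easier to read, a quick loop like this (just sysfs plus lspci, nothing board-specific) prints each group alongside the device description:

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=$(basename "$(dirname "$(dirname "$d")")")      # the IOMMU group number
    echo "group $g: $(lspci -nns "${d##*/}")"          # the device at that address
done | sort -V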

The VM settings are:
balloon: 0
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
hostpci0: 0000:c5:00.0,rombar=0
ide2: local:iso/TrueNAS-SCALE-22.12.0.iso,media=cdrom,size=1692208K
memory: 8192
meta: creation-qemu=7.1.0,ctime=1672026924
name: scale01
net0: virtio=8E:22:15:E1:99:6B,bridge=vmbr0
numa: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-0,discard=on,iothread=1,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=8a8e34a3-a38f-47d4-a6ff-9050fa96598f
sockets: 1
vmgenid: 1ea90faa-d98c-4b6e-bf6b-6686c3fb9a82
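The hostpci0 line above can also be set from the host CLI instead of the web UI, e.g. (100 being this VM's ID, as in the scsi0 disk name):

qm set 100 -hostpci0 0000:c5:00.0,rombar=0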

I have tried it with both i440fx and q35 machine types

My setup is
Proxmox 7.3-3
Fractal Design Define 7XL
SuperMicro H12SSL-I
EPYC 7542 2.9 GHz
8x Crucial 32G CT32G4RFD432A
1x LSI 9300-16i
1x Intel X520-DA2 10G network card
 

fonze98

Dabbler
Joined
Oct 9, 2021
Messages
23
Update:
With the new GRUB settings
`GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream"`
the LSI controllers now land in different IOMMU groups, but I still do not see the drives in the install menu. Just to be sure, I went ahead with the install and still do not see them inside the SCALE VM. However, running an lspci command shows the card:
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:05.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
00:10.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
00:12.0 Ethernet controller: Red Hat, Inc. Virtio network device
00:1e.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
00:1f.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
01:01.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI
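For anyone hitting the same thing: even with the card visible in lspci, a couple of generic Linux checks inside the VM show whether the SAS3008 driver actually bound and whether any disks appeared behind it:

dmesg | grep -i mpt3sas      # mpt3sas is the Linux driver for the SAS3008; look for firmware/port messages or errors
lsblk                        # any disks behind the HBA would show up here as additional sd* devices
ls /dev/disk/by-id/          # and here by model/serial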

Do I need to pass the disks in individually as well?
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I'm going to try to help you here because, like you, I have been running a virtualized TrueNAS VM on Proxmox successfully for the last 3 months or so. Do note that I am using an Intel Xeon Silver instead of an AMD EPYC.
do I need to pass the disks in individually also?
No.

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

the vm settings are
balloon: 0
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
hostpci0: 0000:c5:00.0,rombar=0
ide2: local:iso/TrueNAS-SCALE-22.12.0.iso,media=cdrom,size=1692208K
memory: 8192
meta: creation-qemu=7.1.0,ctime=1672026924
name: scale01
net0: virtio=8E:22:15:E1:99:6B,bridge=vmbr0
numa: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-0,discard=on,iothread=1,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=8a8e34a3-a38f-47d4-a6ff-9050fa96598f
sockets: 1
vmgenid: 1ea90faa-d98c-4b6e-bf6b-6686c3fb9a82
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
Settings are all the same, except:
cores: 4
memory: 32768
numa: 0
ostype: other

This is probably not the issue, but you're allocating 8 cores and only 8 GiB of RAM? It should really be the other way around; TrueNAS/ZFS needs that RAM a lot more than it needs cores.
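If you'd rather not hand-edit the config file, something like this from the host shell should flip those around (same qm tool, values are just an example):

qm set 100 -memory 32768 -cores 4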

I have tried it with both i440fx and q35 machine types
I use i440fx.

Looking at the settings, I don't really see any obvious problems in your Proxmox config file.
The only differences between our systems (see my signature) in no particular order are:
  • Xeon vs EPYC.
  • CORE (BSD) vs SCALE (Linux).
  • SAS2308 vs SAS3008.
  • numa 0 vs 1.
Sorry that wasn't much, but hope it helps.
 

fonze98

Dabbler
Joined
Oct 9, 2021
Messages
23
Yes I did. It seems that the motherboard I am using has a built-in LSI controller as well as the one I wanted to pass through, and I had selected the wrong one. I had drives attached to both controllers, so I expected to see one or the other at the time, but it seems the onboard controller did not like being passed through for some reason. Once I switched to the other one, everything worked fine. I ended up following this: https://pve.proxmox.com/wiki/PCI(e)_Passthrough
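For anyone else with more than one SAS3008 on a board, listing just the Broadcom/LSI devices on the host makes it much easier to pick the right PCI address before passing one through (1000 is the Broadcom/LSI vendor ID):

lspci -nn -d 1000:           # every Broadcom/LSI device with its PCI address
lspci -k -s 0000:c5:00.0     # shows which kernel driver (mpt3sas vs vfio-pci) currently owns a given one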
 

fonze98

Dabbler
Joined
Oct 9, 2021
Messages
23
Looks like I also had to add "pcie_acs_override=downstream" to my GRUB_CMDLINE_LINUX_DEFAULT in order to separate the two LSI devices, so that each shows up as its own IOMMU device.
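A quick way to confirm the override actually split them is to read the iommu_group symlink for each controller directly:

readlink /sys/bus/pci/devices/0000:c5:00.0/iommu_group
# repeat with the second SAS3008's PCI address; after the override the two should point at different group numbers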
 
Joined
Feb 16, 2023
Messages
2
My issue was that I have SAS3 drives, and the cables I am using send 3.3 V on the third pin, so I just put some tape over the pin and that solved my issue.
 