FreeNAS on Proxmox VM on dell R720xd H310 mini mono (LSI 2008 IT mode)

pokworld

Cadet
Joined
Nov 17, 2019
Messages
5
Hello,

What I am trying to do is pass through the controller so FreeNAS can work with the disks directly. I am going to try FreeNAS in a VM for a while; if it works fine, I will use it as a file server. I am running it in a VM so that I don't need a second server.

The HDDs show up in Proxmox, so the controller firmware is OK. Proxmox itself is installed on a ZFS mirror on cheap Intel SSDs (I am not happy with them so far).
The VMs live on a separate 1 TB ZFS mirror, again on SSDs; those are expensive and work really well.
The FreeNAS VM has these settings:
Opera Snapshot_2019-11-17_221107_192.168.1.10.png


My GRUB config is as follows:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=4 quiet intel_iommu=on"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
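For completeness, the vfio modules need to load at boot as well. A sketch of the remaining steps, assuming a stock Proxmox install (the module names are the standard vfio ones, nothing specific to this box):

```shell
# /etc/modules should contain the vfio modules so they load at boot:
#   vfio
#   vfio_iommu_type1
#   vfio_pci
#   vfio_virqfd

update-grub            # apply the new GRUB_CMDLINE_LINUX_DEFAULT
update-initramfs -u    # rebuild the initramfs so the module list is picked up
# after a reboot, verify the IOMMU is actually enabled:
dmesg | grep -e DMAR -e IOMMU
```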

As soon as I start the VM, the whole Proxmox host crashes. If I remove PCI device 03:00, which is the LSI controller, FreeNAS starts, but I cannot see the HDDs.
So far I have been able to add single HDDs, including all the ones I want to use as storage: 2x8TB, 2x6TB, 2x3TB. But if I can't pass through the controller, I will not be able to hot-swap.
I am missing something and don't know what it is.
"#lspci -nn|grep SAS" returns: 03:00.0 RAID bus controller [0104]: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)
03:00 means all functions. I have tried 03:00.0 too. no luck so far.
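For reference, passing 03:00 versus 03:00.0 corresponds to these lines in the VM's config file, /etc/pve/qemu-server/&lt;vmid&gt;.conf (where &lt;vmid&gt; is the VM's numeric ID):

```shell
# /etc/pve/qemu-server/<vmid>.conf - one hostpci line per passed-through device
hostpci0: 03:00      # whole device, all functions
hostpci0: 03:00.0    # function 0 only
```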

Since I can pass through the HDDs one by one, I am thinking there should not be a hardware issue. Nevertheless, I am considering trying an H710 mini mono in IT mode (yes, it is possible).

The CPUs are 2x Intel(R) Xeon(R) E5-2690 0 @ 2.90GHz, and RAM is 128 GB.

I believe I did everything as it should be done. I am probably missing something and can't figure out what. Maybe something with the vfio driver.

Does anybody have suggestions on what else to try?

P.S.
Please don't start with "FreeNAS is not for VMs". I have read a lot, and I am just about to buy another machine just for FreeNAS, but I would really prefer to have a single server only. I don't want to manage multiple servers.
 

sammael

Explorer
Joined
May 15, 2017
Messages
76
Hi, I too am running FreeNAS in Proxmox with 2x LSI 2008 passed through. I recently added an Intel I350 4x gigabit adapter to pass through to the VM, and since that card is PCIe, I ticked that option (pcie=1) and had issues similar to what you describe. After removing the option it's all fine and dandy. From some googling, apparently that option mainly matters for GPU passthrough.

So my suggestion is to select only the "All functions" checkbox when adding the PCI passthrough device in Proxmox, so that the GUI shows just 03:00 on its own.

Good luck!
 

pokworld

Cadet
Joined
Nov 17, 2019
Messages
5
Thank you for your reply.
I just tried that. Not only the VM, but the host (Proxmox) crashed.
 

sammael

Explorer
Joined
May 15, 2017
Messages
76
I'm in no way an expert, so I can't really offer any in-depth advice. The only other thing I can think of is that it might have to do with IOMMU groups. For example, on my Supermicro board I have 3 PCIe slots, but 2 of them end up in the same IOMMU group. I have no idea how to separate them, or whether it's even possible, but if I try to assign 2 devices from the same IOMMU group to different VMs I get all kinds of trouble, including freezing/crashing.

You can see the IOMMU group in the GUI when you're adding the PCI device. So maybe try physically swapping the PCIe cards around so they end up in different IOMMU groups.
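If it helps, a small script along these lines (just walking /sys, nothing Proxmox-specific) prints each group with lspci's description, so you can see at a glance what shares a group with the HBA:

```shell
#!/bin/sh
# List each IOMMU group with a human-readable description of its devices.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue           # skip if no IOMMU groups exist
    g=${d%/devices/*}                 # .../iommu_groups/<n>
    printf 'IOMMU group %s: ' "${g##*/}"
    lspci -nns "${d##*/}"             # device address, e.g. 0000:03:00.0
done
```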

Other than that I have nothing, sorry.
 

pokworld

Cadet
Joined
Nov 17, 2019
Messages
5
# find /sys/kernel/iommu_groups/ -type l
returns:
/sys/kernel/iommu_groups/17/devices/0000:03:00.0
/sys/kernel/iommu_groups/45/devices/0000:7f:10.2
/sys/kernel/iommu_groups/45/devices/0000:7f:10.0
/sys/kernel/iommu_groups/45/devices/0000:7f:10.7
/sys/kernel/iommu_groups/45/devices/0000:7f:10.5
/sys/kernel/iommu_groups/45/devices/0000:7f:10.3
/sys/kernel/iommu_groups/45/devices/0000:7f:10.1
/sys/kernel/iommu_groups/45/devices/0000:7f:10.6
/sys/kernel/iommu_groups/45/devices/0000:7f:10.4
/sys/kernel/iommu_groups/35/devices/0000:40:03.2
/sys/kernel/iommu_groups/7/devices/0000:00:11.0
/sys/kernel/iommu_groups/25/devices/0000:3f:0c.2
/sys/kernel/iommu_groups/25/devices/0000:3f:0c.0
/sys/kernel/iommu_groups/25/devices/0000:3f:0c.7
/sys/kernel/iommu_groups/25/devices/0000:3f:0c.3
/sys/kernel/iommu_groups/25/devices/0000:3f:0c.1
/sys/kernel/iommu_groups/25/devices/0000:3f:0c.6
/sys/kernel/iommu_groups/15/devices/0000:01:00.0
/sys/kernel/iommu_groups/15/devices/0000:01:00.1
/sys/kernel/iommu_groups/43/devices/0000:7f:0e.1
/sys/kernel/iommu_groups/43/devices/0000:7f:0e.0
/sys/kernel/iommu_groups/33/devices/0000:40:02.0
/sys/kernel/iommu_groups/5/devices/0000:00:03.0
/sys/kernel/iommu_groups/23/devices/0000:3f:0a.3
/sys/kernel/iommu_groups/23/devices/0000:3f:0a.1
/sys/kernel/iommu_groups/23/devices/0000:3f:0a.2
/sys/kernel/iommu_groups/23/devices/0000:3f:0a.0
/sys/kernel/iommu_groups/13/devices/0000:00:1e.0
/sys/kernel/iommu_groups/41/devices/0000:7f:0c.6
/sys/kernel/iommu_groups/41/devices/0000:7f:0c.2
/sys/kernel/iommu_groups/41/devices/0000:7f:0c.0
/sys/kernel/iommu_groups/41/devices/0000:7f:0c.7
/sys/kernel/iommu_groups/41/devices/0000:7f:0c.3
/sys/kernel/iommu_groups/41/devices/0000:7f:0c.1
/sys/kernel/iommu_groups/31/devices/0000:3f:13.0
/sys/kernel/iommu_groups/31/devices/0000:3f:13.5
/sys/kernel/iommu_groups/31/devices/0000:3f:13.1
/sys/kernel/iommu_groups/31/devices/0000:3f:13.6
/sys/kernel/iommu_groups/31/devices/0000:3f:13.4
/sys/kernel/iommu_groups/3/devices/0000:00:02.0
/sys/kernel/iommu_groups/21/devices/0000:3f:08.0
/sys/kernel/iommu_groups/11/devices/0000:00:1c.7
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/38/devices/0000:7f:09.0
/sys/kernel/iommu_groups/28/devices/0000:3f:0f.6
/sys/kernel/iommu_groups/28/devices/0000:3f:0f.4
/sys/kernel/iommu_groups/28/devices/0000:3f:0f.2
/sys/kernel/iommu_groups/28/devices/0000:3f:0f.0
/sys/kernel/iommu_groups/28/devices/0000:3f:0f.5
/sys/kernel/iommu_groups/28/devices/0000:3f:0f.3
/sys/kernel/iommu_groups/28/devices/0000:3f:0f.1
/sys/kernel/iommu_groups/18/devices/0000:08:00.0
/sys/kernel/iommu_groups/46/devices/0000:7f:11.0
/sys/kernel/iommu_groups/36/devices/0000:40:05.0
/sys/kernel/iommu_groups/36/devices/0000:40:05.2
/sys/kernel/iommu_groups/8/devices/0000:00:16.0
/sys/kernel/iommu_groups/8/devices/0000:00:16.1
/sys/kernel/iommu_groups/26/devices/0000:3f:0d.3
/sys/kernel/iommu_groups/26/devices/0000:3f:0d.1
/sys/kernel/iommu_groups/26/devices/0000:3f:0d.6
/sys/kernel/iommu_groups/26/devices/0000:3f:0d.2
/sys/kernel/iommu_groups/26/devices/0000:3f:0d.0
/sys/kernel/iommu_groups/16/devices/0000:02:00.0
/sys/kernel/iommu_groups/16/devices/0000:02:00.1
/sys/kernel/iommu_groups/44/devices/0000:7f:0f.6
/sys/kernel/iommu_groups/44/devices/0000:7f:0f.4
/sys/kernel/iommu_groups/44/devices/0000:7f:0f.2
/sys/kernel/iommu_groups/44/devices/0000:7f:0f.0
/sys/kernel/iommu_groups/44/devices/0000:7f:0f.5
/sys/kernel/iommu_groups/44/devices/0000:7f:0f.3
/sys/kernel/iommu_groups/44/devices/0000:7f:0f.1
/sys/kernel/iommu_groups/34/devices/0000:40:03.0
/sys/kernel/iommu_groups/6/devices/0000:00:05.2
/sys/kernel/iommu_groups/6/devices/0000:00:05.0
/sys/kernel/iommu_groups/24/devices/0000:3f:0b.0
/sys/kernel/iommu_groups/24/devices/0000:3f:0b.3
/sys/kernel/iommu_groups/14/devices/0000:00:1f.0
/sys/kernel/iommu_groups/42/devices/0000:7f:0d.0
/sys/kernel/iommu_groups/42/devices/0000:7f:0d.3
/sys/kernel/iommu_groups/42/devices/0000:7f:0d.1
/sys/kernel/iommu_groups/42/devices/0000:7f:0d.6
/sys/kernel/iommu_groups/42/devices/0000:7f:0d.2
/sys/kernel/iommu_groups/32/devices/0000:40:01.0
/sys/kernel/iommu_groups/4/devices/0000:00:02.2
/sys/kernel/iommu_groups/22/devices/0000:3f:09.0
/sys/kernel/iommu_groups/12/devices/0000:00:1d.0
/sys/kernel/iommu_groups/40/devices/0000:7f:0b.3
/sys/kernel/iommu_groups/40/devices/0000:7f:0b.0
/sys/kernel/iommu_groups/30/devices/0000:3f:11.0
/sys/kernel/iommu_groups/2/devices/0000:00:01.1
/sys/kernel/iommu_groups/20/devices/0000:09:01.0
/sys/kernel/iommu_groups/10/devices/0000:00:1c.0
/sys/kernel/iommu_groups/39/devices/0000:7f:0a.0
/sys/kernel/iommu_groups/39/devices/0000:7f:0a.3
/sys/kernel/iommu_groups/39/devices/0000:7f:0a.1
/sys/kernel/iommu_groups/39/devices/0000:7f:0a.2
/sys/kernel/iommu_groups/29/devices/0000:3f:10.0
/sys/kernel/iommu_groups/29/devices/0000:3f:10.7
/sys/kernel/iommu_groups/29/devices/0000:3f:10.5
/sys/kernel/iommu_groups/29/devices/0000:3f:10.3
/sys/kernel/iommu_groups/29/devices/0000:3f:10.1
/sys/kernel/iommu_groups/29/devices/0000:3f:10.6
/sys/kernel/iommu_groups/29/devices/0000:3f:10.4
/sys/kernel/iommu_groups/29/devices/0000:3f:10.2
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/19/devices/0000:09:00.0
/sys/kernel/iommu_groups/19/devices/0000:0b:00.0
/sys/kernel/iommu_groups/19/devices/0000:0a:00.0
/sys/kernel/iommu_groups/47/devices/0000:7f:13.4
/sys/kernel/iommu_groups/47/devices/0000:7f:13.0
/sys/kernel/iommu_groups/47/devices/0000:7f:13.5
/sys/kernel/iommu_groups/47/devices/0000:7f:13.1
/sys/kernel/iommu_groups/47/devices/0000:7f:13.6
/sys/kernel/iommu_groups/37/devices/0000:7f:08.0
/sys/kernel/iommu_groups/9/devices/0000:00:1a.0
/sys/kernel/iommu_groups/27/devices/0000:3f:0e.1
/sys/kernel/iommu_groups/27/devices/0000:3f:0e.0
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
This is how mine are set up (rombar off on all of the passthrough cards):
1574055175188.png


1574055224485.png
 

sammael

Explorer
Joined
May 15, 2017
Messages
76
I guess it varies from machine to machine; my FreeNAS runs happily with these settings under Proxmox, with 3 PCI cards passed through:
freenas-proxmox.png

Usually it's when I mess with any of the pcie or rombar options that I get trouble.
 

pokworld

Cadet
Joined
Nov 17, 2019
Messages
5
Thank you for sharing your settings.
Sammael - do you see all of your disks raw in FreeNAS?

Blueether, I see that you have a virtio2 HDD. That is direct disk sharing. Is it also passthrough? Do you see the disks through the controller or not?
 

sammael

Explorer
Joined
May 15, 2017
Messages
76
Yes. I actually blacklist the mpt3sas driver in /etc/modprobe.d so that Proxmox never probes or sees the disks; they are only ever accessed by the FreeNAS VM. I have 8 disks on one card and 4 on the other, all correctly detected by FreeNAS, S.M.A.R.T. working, etc. Is your HBA card flashed to the correct firmware? I know the guide I used required the card to be flashed to IT mode for FreeNAS in a VM.
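For anyone following along, the blacklist is a one-line modprobe config plus an initramfs rebuild. A sketch (the filename under /etc/modprobe.d is arbitrary; the one shown is just an example):

```shell
# Keep the host from ever loading the SAS driver; the card is for the VM only
echo "blacklist mpt3sas" > /etc/modprobe.d/blacklist-mpt3sas.conf
update-initramfs -u    # rebuild so the blacklist applies from early boot
```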
 

pokworld

Cadet
Joined
Nov 17, 2019
Messages
5
Yes, it is IT mode, checked already. But I did not blacklist anything. I must be missing some really little thing :) - maybe I will play with it some more :)
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
Blueether, I see that you have a virtio2 HDD. That is direct disk sharing. Is it also passthrough? Do you see the disks through the controller or not?
The virtio2 HDD is an SSD that I pass through to boot that VM off; all the pool HDDs are on the passed-through LSI cards. I don't think I had to blacklist anything in Proxmox.
It was rombar that kept mine from booting with an M1015 card in IT mode.
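In config-file terms, that corresponds to turning off the card's option ROM on the hostpci line, e.g. in /etc/pve/qemu-server/&lt;vmid&gt;.conf (&lt;vmid&gt; being your VM's ID):

```shell
# rombar=0 hides the card's option ROM from the guest (same as
# unticking "ROM-Bar" in the Proxmox GUI)
hostpci0: 03:00,rombar=0
```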
 