run_interrupt_driven_hooks / problem with mrsas driver

jaydee99

Cadet
Joined
Jan 23, 2018
Messages
5
Hi,

I'm trying to install FreeNAS 11.1 in an ESXi 6.5 VM.

I want to PCI passthrough my RAID controller, which is a Dell PERC H330, but I'm running into a problem with that.

When I try to boot, I get stuck on:
run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config

Setting hw.pci.enable_msi="0" and hw.pci.enable_msix="0" as suggested in another post works, but then my RAID controller stops working because mrsas won't get loaded...
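For reference, both of those knobs are loader tunables; a minimal sketch of what that (driver-breaking) workaround looks like in /boot/loader.conf:

```
# /boot/loader.conf -- disables MSI/MSI-X globally; boot gets past
# xpt_config, but as noted above this also breaks mrsas
hw.pci.enable_msi="0"
hw.pci.enable_msix="0"
```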

Is there anything I can do to fix this, please?

EDIT: The problem is related to multiple vCPUs and the mrsas driver, as stated in another post. Using only one vCPU solves the problem. I don't want to use the old mfi driver and would like to use the mrsas one; did anyone find a way to make it work with multiple vCPUs?


Thanks a lot!
 

dlavigne

Guest
EDIT: The problem is related to multiple vCPUs and the mrsas driver, as stated in another post. Using only one vCPU solves the problem. I don't want to use the old mfi driver and would like to use the mrsas one; did anyone find a way to make it work with multiple vCPUs?

Looks like an upstream driver issue: https://redmine.ixsystems.com/issues/26733. I haven't checked the FreeBSD bugs database to see if anyone has reported there yet.
 

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
Sorry for the necro, but I'm running into exactly this. Is there any drawback to using the mfi driver? I'm on 11.2; would a different version work better?
 
Joined
Dec 29, 2014
Messages
1,135
I had the same issue with an LSI 9271 using the mrsas driver, and sometimes xpt_config would have to wait over 180 seconds before it would get past that. It did eventually, and everything was fine after that. The only exception was that JBOD drives attached to that controller were not discovered by smartd; they would work if I manually edited the smartd.conf file, but you have to repeat those changes after each reboot. Go in the other room and wait to see if it gets past the xpt_config stage. Patience, grasshopper! :smile:
 

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
Hey Elliot!! :)
I shut down the VM and decreased the vCPU count to 1 per the workaround in the bug. It boots now, but it's only 1 CPU. The workaround also says I can switch to the mfi driver, but that one seems very old, and I'm not sure what the ramifications of using it are. Research isn't turning up much so far either. I'm seriously questioning my life choices right now... I would just buy a different HBA card, but if the Cisco OEM card "woke the dragon", I can't imagine any other card working. I still need to pick up the Optane, and that could cause God only knows what else.

Anyway, Thanks for the reply Elliot :)
 
Joined
Dec 29, 2014
Messages
1,135
I saw something somewhere (not very helpful, I know) about why the mrsas driver was better than the mfi driver, but I forgot the details. I feel certain there is a reason, since the out-of-the-box tunables for FreeNAS prefer that one. All I can tell you is it worked for me if I just left it alone. It did piss me off to stare at it for 3+ minutes while it was booting, but that didn't accomplish much. I guess the question is: if you can stand the fan noise where the box is placed, then go with the non-RAID HBA. If you can't, go with the current controller and try not to watch during the boot process. :)
 

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
[Attached screenshot: upload_2018-8-24_18-34-50.png, showing repeating console messages during boot]


Alas, after an hour, it does indeed get past xpt_config, but then just loops the messages in the image. Any Hail Mary? :D
 
Joined
Dec 29, 2014
Messages
1,135
Hmm. Mine didn't do that. :-(

You are booting from something besides the HDD, right? If so, try disabling the BIOS in the RAID controller. You only need that enabled if you are booting from the drives attached to the RAID controller.
 
Joined
Dec 29, 2014
Messages
1,135
Here is another thought. Do you have a different RAID controller that wouldn't piss off the CIMC? Something like an LSI 9271, 9266, or even 9261? Mine was a 9271, and it got past the xpt_config notice after ~3 minutes with the BIOS disabled.
 

Ahira

Dabbler
Joined
Aug 22, 2018
Messages
11
I did disable all Option ROMs; is that what you're referring to? Regarding the LSI cards, those are all SAS 6G, right? Would that create a bottleneck, or is that only a concern way off in the future, for future-proofing?
 
Joined
Dec 29, 2014
Messages
1,135
Those controllers are all 6G. Whether that would be a bottleneck depends on the drives, but not working seems like a bigger bottleneck at the moment. I don't know whether you would need different cables from the aforementioned RAID cards to the backplane or not. When I was researching the xpt_config problem I had, there were some suggestions about turning off IEEE 1394 (FireWire) and such, but that didn't seem applicable to that server board. I can't think of any other ideas at the moment, but I'll mull it over as I head off to bed.
 

pmoneymoney

Cadet
Joined
Apr 15, 2019
Messages
1
edit /boot/loader.conf and add

hw.pci.honor_msi_blacklist=0

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203874
https://redmine.ixsystems.com/issues/26733

If anyone is still looking to resolve this issue, it's not an mrsas driver issue. It's apparently a blacklist that still exists in FreeBSD which disables MSI-X when it detects it's running in VMware, due to some old limitation that used to exist in VMware. To find this workaround, I noticed "mrsas0: MSI-x setup failed" in dmesg and how it would fall back to legacy interrupts. So I chased down MSI-X and VMware and came across the first link above on bugs.freebsd.org. I believe this affects mrsas and all HBAs that use MSI-X.
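A quick way to check whether your controller hit the same fallback is to filter the boot messages; the sample lines below are illustrative only (exact wording varies by release):

```shell
# On a live system you would run:  dmesg | grep -i 'msi'
# Simulated here with sample boot lines so the filter is self-contained
printf '%s\n' \
  'mrsas0: <AVAGO Invader SAS Controller>' \
  'mrsas0: MSI-x setup failed' \
  'mrsas0: Using legacy interrupts' |
grep -i 'msi'
```

If the "MSI-x setup failed" line shows up, the blacklist described above is the likely culprit.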

I have confirmed this working on the following setup:
Dell R430
Perc H730 configured in HBA mode, cache disabled, passthrough via vmware
VMware vSphere 6.7
FreeNAS 11.2 U3 test VM with 10 vcpus and 16GB RAM.

To install FreeNAS for the first time, either configure the VM with 1 vCPU and increase it after installing and editing loader.conf, or detach the HBA, install FreeNAS, and reattach the HBA after editing loader.conf post-install.
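A sketch of that edit as an idempotent one-liner (LOADER_CONF is parameterized here only so it can be tried against a copy first; the real file is /boot/loader.conf as above):

```shell
# Append the workaround tunable only if it is not already present
LOADER_CONF=${LOADER_CONF:-/boot/loader.conf}
grep -q '^hw\.pci\.honor_msi_blacklist=' "$LOADER_CONF" 2>/dev/null ||
  echo 'hw.pci.honor_msi_blacklist=0' >> "$LOADER_CONF"
```

The grep guard keeps the line from being duplicated if the snippet is run more than once.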
 

danielrd6

Cadet
Joined
Aug 22, 2019
Messages
2
I just wanted to point out that changing /boot/loader.conf did not work for me; however, adding the variable to the system tunables did work.
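For anyone following along, the equivalent entry under System → Tunables looks roughly like the sketch below (field names as they appear in the 11.2 UI; note the type must be Loader, not Sysctl, since this is a boot-time tunable):

```
Variable: hw.pci.honor_msi_blacklist
Value:    0
Type:     Loader
```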
 