[SOLVED] Getting stuck at boot...

Status
Not open for further replies.

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
I'm getting this error when booting the FreeNAS 9.2.1.7 ISO. It's a VM and PCI passthrough is configured properly.
freenas9.2.1.7.png


Also the same error with the 9.2.1 ISO:
freenas9.2.1.png


But the boot finished with the 9.2.0 ISO. Curiously, one of the disks possibly has a corrupt or invalid GPT. Could this be the reason why 9.2.1+ fails to continue booting?
freenas9.2.0.png
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Those interrupt-driven hooks are the tell-tale sign that PCI passthrough is not working with your hardware/driver versions. It could be that your motherboard doesn't do PCIe passthrough properly, it could be that the card you are passing through isn't compatible with PCIe passthrough, or it could be the driver for the card. Corrupt GPTs are not the cause of your problems.

The one thing you *do* know is that it's not going to work with your current configuration. And considering you have a da113, that's a damn big server to turn around and virtualize. As organized crime used to say 100 years ago, "it would be a shame if something... happened... to it". AKA: do not virtualize.
 

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
That's a shame. :(

But why would it work with 9.2.0 and not 9.2.1+? Also, I tried booting the FreeBSD 9.2.0 ISO and it works as well.

Update: I tried booting 9.2.1.7 from USB, bare metal, and I still get the same error. :confused:
 
Last edited:

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Why? Not a clue. This is why we tell people not to virtualize. It's not worth the risks. As we keep telling people who argue for virtualizing, it works great until it doesn't. And when it doesn't, you'll have a very bad day.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
It would be nice if you'd actually list your exact hardware so we aren't shooting in the dark. "It's a VM" means what? Should we assume ESXi? Your other question mentions vmxnet3.

It kind of looks like a USB device issue. You bail out at a USB tablet... eh? Unplug it. Try disabling USB 3.0. What chipset? Lynx Point? But you haven't given much info to go on.

A bare-metal boot takes you to the land of the living, imho, and warrants a fair shot at resolution. But we have no chance at all with the given level of information. Read the forum rules (big red link at the top) as a start for what information it might be useful to provide.
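If USB 3.0 turns out to be the suspect, one possible quick test is to disable the xhci driver from the boot loader before the kernel starts. This is only a sketch: it assumes the USB 3.0 controller probes as xhci0, and behavior may vary by release.

Code:
# interrupt the autoboot and drop to the loader prompt, then:
set hint.xhci.0.disabled=1
boot
# if that helps, the same hint can be made persistent in /boot/loader.conf:
#   hint.xhci.0.disabled="1"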
 
Last edited:

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
Sorry for the late reply and also for not including the hardware specs. Here they are:

NAS Head:
Motherboard: Supermicro X8DT6-F
CPU: 2x Intel Xeon E5606
RAM: 24GB DDR3 1333
Hard Drives: 24x 3TB Seagate Constellation ES.2
SAS: LSI 6Gbps SAS 2008 (flashed to IT)

2x JBOD:
Hard Drives: 45x 4TB WD Black

I was testing a whole lot yesterday and my findings aren't good. I was finally able to boot FreeNAS 9.2.1.7 by turning off the 2 JBODs. Then I attached disks until the system failed to boot with the run_interrupt_driven_hooks error. Everything worked up until the 113th disk, so it seems there's an error in the driver that triggers when more than 112 disks are attached. Note that the SAS HBA does support more than 112 disks, as I'm able to boot the system with FreeNAS 9.2.0 and da114 appears (114 HDDs + 1 USB = 115 devices). Attached is the dmesg of the booted FreeNAS 9.2.0.
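For anyone trying to reproduce the disk-count boundary, here is a quick way to count the attached da devices on a booted system (a sketch using stock FreeBSD tools, not part of the original test):

Code:
# list all disks known to the kernel and count the SAS/SCSI (da) ones
sysctl -n kern.disks | tr ' ' '\n' | grep -c '^da'
# or list them with vendor/model and controller details
camcontrol devlist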

In conclusion, it's unfortunate, but I'll have to hold off on upgrading to 9.2.1.7 until this problem is fixed.
 

Attachments

  • dmesg.txt
    46.2 KB · Views: 251

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
It seems you have enough detail to file a bug report at the very least. I suspect one of the devs may have some insight into the differences in the specific drivers for your card and expanders in 9.2.0 vs 9.2.1.7. Beyond looking at firmware vs. driver versions... you need a proper BSD wiz.
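For the bug report, the driver and firmware versions the mps(4) driver sees can be collected roughly like this (a sketch; it assumes the HBA attaches as mps0):

Code:
# probe messages for the LSI SAS2008 controller
dmesg | grep -i '^mps0'
# firmware and driver versions reported by the mps(4) driver
sysctl dev.mps.0.firmware_version dev.mps.0.driver_version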

With all your testing, I'd be tempted to see if a 9.3 USB stick would boot and recognize all devices. If it does, you might get a hint at an upgrade path. Unfortunately, your hardware and the sheer volume of devices put you in some rarefied air. Good luck.
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
Am I reading it incorrectly, or are you using only 32GB of RAM with more than 200TB of raw disk space???

Edit: actually 400TB!!!
 

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
I do plan to upgrade the RAM eventually. And I also don't expect much performance from this setup, as I'm only using it as a backup at the moment.
 

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
Updated the firmware of the LSI SAS 2008 to version 19.00.00.00 (was 16.00.01.00 previously) and it now works.
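For anyone comparing notes, the firmware on the card can be verified after flashing roughly like this (a sketch; sas2flash is the LSI/Avago flash utility, run against the controller in question):

Code:
# list all LSI SAS2 adapters and their firmware/BIOS versions
sas2flash -listall
# detailed info for a single adapter (controller number taken from -listall)
sas2flash -c 0 -list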
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Glad you got it working; thanks for the update. It will be interesting to see if there are other side effects. Typically we've seen v19 firmware on 9.2.1.7 (a firmware/driver mismatch) cause problems, not solve them. But they obviously patched something along the way that helps your case.

I wouldn't sweat 24GB for a backup workload. It's not like you'll be hitting the ARC. You never know how an SLOG might help your writes, though. That said, another 64GB is going to be a small percentage of the cost of that rig. ;) How did you carve up all that space?
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Wow, 432 (raw) TB if I did the math right.

Would love to see a zpool status on a box like that.
 

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
I'm using RAIDZ2: 2 vdevs of 10x 3TB disks + 9 vdevs of 10x 4TB disks. Usable space is ~254TB.

Here's a zpool status:

Code:
  
  pool: backup_pool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 20h23m with 0 errors on Sun Sep 14 20:24:16 2014
config:

	NAME                                            STATE     READ WRITE CKSUM
	backup_pool                                     ONLINE       0     0     0
	  raidz2-0                                      ONLINE       0     0     0
	    gptid/dfa4cd23-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e01de6bd-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e0943268-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e10d7216-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e1895cb5-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e2044def-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e27f96e5-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e2f9bf79-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e372f74e-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e3ecfbae-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-1                                      ONLINE       0     0     0
	    gptid/e473d07d-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e4f15154-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e56d7ed2-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e5eb3f37-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e66a852b-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e6eac20a-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e76ac8d0-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e7eac3ad-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e8689433-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/e8ea5100-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-2                                      ONLINE       0     0     0
	    gptid/e9893d16-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/ea09e8b0-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/ea8dbafd-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/eb0f932d-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/eb945081-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/ec1bfa78-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/eca09406-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/ed238a8e-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/edab032b-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/ee30aad1-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-3                                      ONLINE       0     0     0
	    gptid/eefd508a-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/ef860c70-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f00dbe48-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f0971b40-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f12115c5-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f1acd62d-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f239d45f-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f2c577b4-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f35184eb-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f3e0d816-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-4                                      ONLINE       0     0     0
	    gptid/f4e3a7b2-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f57073c7-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f60195f8-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f690e063-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f7243131-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f7b81a1f-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f84a989e-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f8de9517-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/f9723a09-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/fa06596d-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-5                                      ONLINE       0     0     0
	    gptid/fb52e1d1-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/fbe57f0f-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/fc7a50a6-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/fd0d59dd-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/fda1edd1-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/fe385585-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/fed1d9b8-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/ff6bee93-1ef3-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/00053dd2-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/009ee735-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-6                                      ONLINE       0     0     0
	    gptid/0240b46f-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/02da4a09-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/03748cd5-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/041199b5-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/04afdfad-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/054d6232-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/05ee4241-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0690fb10-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/073841ab-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/07dacfe7-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-7                                      ONLINE       0     0     0
	    gptid/09e01217-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0a82baae-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0b26ba5a-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0bcb48c9-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0c6ff6dd-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0d1719cb-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0dbe5fb1-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0e658bf4-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0f0e3db1-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/0fba2e1b-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-8                                      ONLINE       0     0     0
	    gptid/122f8e81-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/12d634f6-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/13851773-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1434bec3-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/14e4a6fd-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1591f210-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/16405312-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/16efdfb5-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/17a10249-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1853ef67-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-9                                      ONLINE       0     0     0
	    gptid/1b3ee453-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1be25bc6-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1c8d290f-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1d327ff5-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1dd71469-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1e7c0581-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1f259bdb-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/1fca9b5c-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/207249f3-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/211d44ab-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	  raidz2-10                                     ONLINE       0     0     0
	    gptid/2491b19e-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/25398156-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/25df8fed-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/268a7813-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/273832b8-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/27e418a6-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/289392dc-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/2946f5ac-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/29f70578-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0
	    gptid/2aa429e6-1ef4-11e4-81e1-002590920e18  ONLINE       0     0     0

errors: No known data errors

 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
If I'm not wrong, [(2x8x3TB) + (9x8x4TB)] x 0.9095 (TB -> TiB) should be something around 305TiB... so why are you missing almost 50TiB?
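For reference, that estimate worked through in one line (a sketch; it assumes 8 data disks per 10-disk RAIDZ2 vdev and ignores ZFS metadata/slop overhead):

Code:
# usable data space: 2 vdevs of 8x3TB + 9 vdevs of 8x4TB, vendor TB -> TiB
echo '((2*8*3) + (9*8*4)) * 10^12 / 2^40' | bc -l
# prints roughly 305.59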
 

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
I actually don't know. :(

Here's the zpool list output:
Code:
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
backup_pool   381T  41.1T   340T    10%  1.00x  ONLINE  /mnt


Could the missing space be used by snapshots? I don't know if df includes them.
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
"zpool list" consider the raw space, with "zfs list" instead you will see the real used and available space
 

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
zfs list:
Code:
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
backup_pool                                    32.7T   253T   512K  /mnt/backup_pool
backup_pool/.system                            51.1M   253T   411K  /mnt/backup_pool/.system
backup_pool/.system/cores                       768K   253T   329K  /mnt/backup_pool/.system/cores
backup_pool/.system/rrd                         329K   253T   329K  /mnt/backup_pool/.system/rrd
backup_pool/.system/samba4                     19.7M   253T  1.31M  /mnt/backup_pool/.system/samba4
backup_pool/.system/syslog                     29.5M   253T  1.32M  /mnt/backup_pool/.system/syslog
backup_pool/geostorage                         23.6T   253T  14.0T  /mnt/backup_pool/geostorage
backup_pool/jails                              9.11T   253T  34.7M  /mnt/backup_pool/jails
backup_pool/jails/.warden-template-pluginjail   912M   253T   912M  /mnt/backup_pool/jails/.warden-template-pluginjail
backup_pool/jails/backup-pluginjail            9.11T   253T  5.72T  /mnt/backup_pool/jails/backup-pluginjail


df -h:
Code:
Filesystem                                       Size    Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs1a                              926M    703M    149M    82%    /
devfs                                            1.0k    1.0k      0B   100%    /dev
/dev/md0                                         4.6M    3.3M    870k    80%    /etc
/dev/md1                                         823k    2.0k    756k     0%    /mnt
/dev/md2                                         149M     52M     84M    38%    /var
/dev/ufs/FreeNASs4                                19M    4.7M     13M    25%    /data
backup_pool                                      252T    511k    252T     0%    /mnt/backup_pool
backup_pool/.system                              252T    411k    252T     0%    /mnt/backup_pool/.system
backup_pool/.system/cores                        252T    329k    252T     0%    /mnt/backup_pool/.system/cores
backup_pool/.system/rrd                          252T    329k    252T     0%    /mnt/backup_pool/.system/rrd
backup_pool/.system/samba4                       252T    1.3M    252T     0%    /mnt/backup_pool/.system/samba4
backup_pool/.system/syslog                       252T    1.3M    252T     0%    /mnt/backup_pool/.system/syslog
backup_pool/geostorage                           266T     14T    252T     5%    /mnt/backup_pool/geostorage
backup_pool/jails                                252T     34M    252T     0%    /mnt/backup_pool/jails
backup_pool/jails/.warden-template-pluginjail    252T    911M    252T     0%    /mnt/backup_pool/jails/.warden-template-pluginjail
backup_pool/jails/backup-pluginjail              258T    5.7T    252T     2%    /mnt/backup_pool/jails/backup-pluginjail
/dev/md3                                         1.9G    1.9M    1.7G     0%    /var/tmp/.cache
devfs                                            1.0k    1.0k      0B   100%    /mnt/backup_pool/jails/backup-pluginjail/dev
procfs                                           4.0k    4.0k      0B   100%    /mnt/backup_pool/jails/backup-pluginjail/proc


I also included df above. I'm used to Size in df meaning total space, so isn't it weird that Size = Avail in the df output?

For zfs list, does USED + AVAIL equal the total space?
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
From what I know (what I understood here on the forums), df does NOT work well with FreeNAS and ZFS; instead I consider zfs list a good indication (in your case ~286TiB), even if it's always a little bit less than the calculated raw disk space (probably depends on snapshots, but I'm not 100% sure about it).
But I guess other people much more expert than me can give you a more detailed answer.
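One way to see exactly where the space is going, per dataset, is the space-accounting columns of zfs list (a sketch; the dataset names are taken from the zfs list output posted above):

Code:
# AVAIL/USED broken down into snapshot, dataset, reservation and child usage
zfs list -o space -r backup_pool
# or query the snapshot usage of a single dataset directly
zfs get usedbysnapshots,usedbydataset backup_pool/geostorage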
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Do you have lots of snapshots? Significant differences in USED compared to Referenced would seem to point to snapshots.

From the CLI:
Code:
zfs list -t snapshot


or check in the WebGUI.
 

zang3tsu

Dabbler
Joined
Feb 17, 2013
Messages
14
I do have lots of snapshots, as I snapshot hourly and keep them for 2 weeks, so that might be it. I can't give an exact list, as the server is down at the moment while it's being relocated.
 