gptzfsboot: No ZFS pools located, can't boot

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
So while I was between UPSes (my previous one died and the next is in the mail), I lost power.

I went to start my box up today, after it was killed with no warning the previous day/night, and I was greeted with the error in the title.

Code:
gptzfsboot: error 32 lba 1
gptzfsboot: error 32 lba 1
gptzfsboot: No ZFS pools located, can't boot


That's the entirety of the error message I receive.

I'd love any suggestions on how to proceed in getting my box back in working condition without losing all my FreeNAS settings.
 

dlavigne

Guest
Looks like the boot device died. Install the same version to a new stick and upload your previously saved config.
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
I have mirrored flash drives; it feels unlikely that they'd both die simultaneously.

Also, I don't currently have a saved config. Is it possible to recover my config off one of the flash drives (assuming my gut is right and one is still partially alive)?
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
Never mind: I reinstalled FreeNAS to some internal SSDs that I'd been meaning to move to, and managed to recover my config off the USB drives.

For future reference (if anyone searches and needs this info), I used these threads:
1) Install FreeNAS on a new flash drive. You need the SAME revision, etc., that you were running. This avoids "upgrade" complications.

2) Boot into it and don't do any configuration.

3) On the console, select the option to go to shell.

4) Insert your old USB key in a second USB port.

You will probably get a kernel message indicating that a new device has been added, and it'll probably tell you that it is "da1" or something like that. You need that information. If it doesn't, try running "camcontrol devlist" to see if two USB keys are shown. The first one will be the running OS.

So now here's the tricky part. The configuration is stored on slice 4. So if you have "da1", then it is on "da1s4".

5) Make a temporary mountpoint: "mkdir /var/tmp/cfgmount"

6) Mount it. "mount -o ro /dev/da1s4 /var/tmp/cfgmount"

7) Now if you're feeling daring, what you can do next is to just do "cp /var/tmp/cfgmount/freenas-v1.db /data/freenas-v1.db" and then reboot. Remember to pull the old USB key to eliminate boot order confusion.
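
Putting steps 5 through 7 together, a minimal shell session might look like this; a sketch assuming the old key really did show up as da1 (substitute whatever device camcontrol devlist reported):

Code:
# create a temporary mountpoint for the old key
mkdir /var/tmp/cfgmount
# mount slice 4 of the old key read-only
mount -o ro /dev/da1s4 /var/tmp/cfgmount
# copy the saved config over the fresh install's config
cp /var/tmp/cfgmount/freenas-v1.db /data/freenas-v1.db
# unmount, pull the old key, then reboot
umount /var/tmp/cfgmount
reboot
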
If it's a 9.3 FreeNAS drive then it's a ZFS volume, so you should import it. Personally I wouldn't use the GUI for that, but the CLI with zpool import:
  • Use zpool import to see the ID of the pool (you don't want to use the name to mount the pool, as it's the same name as the current system pool)
  • Use zpool import -R /mnt the_ID_of_the_pool new_pool_name to import the pool, which should be mounted as /mnt/new_pool_name
In short, use the following command: zpool import -f -R /mnt 12955299643412836894 oldboot
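
Spelled out, those two bullets become the following pair of commands (the numeric ID shown is the one from the quoted example; yours will differ):

Code:
# list importable pools with their names and numeric IDs
zpool import
# import by ID under /mnt, renaming it "oldboot" to avoid a name clash
zpool import -f -R /mnt 12955299643412836894 oldboot
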
Well, it really depends on how read-only you want the pool to be. And no, that's not a joke.

First, a bit of terminology: in ZFS, you import a pool, and optionally mount any file systems within it. You can import a pool without mounting any file systems by passing -N to zpool import, and then later on mount any desired file systems using zfs mount. (This is a perfectly valid scenario if, for example, you want to access only a single file system out of many, or if you want to do something resembling an off-line scrub of the pool.)
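
For example, a no-mount import followed by mounting a single file system might look like this (pool and dataset names here are placeholders):

# zpool import -N tank
# zfs mount tank/some/dataset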

ZFS isn't a big fan of truly read-only access. For example, if ZFS detects an error that it is able to repair, I believe it will repair the error and write the repaired data to disk even if you imported the pool as read-only. My understanding is that, in ZFS parlance, "read-only" applies only to the user-visible state of the pool and its datasets. If, on the other hand, you make a binary copy of the disk to a file (or set of files), make those files truly read-only, and try to import the pool from there, ZFS won't be able to import the pool at all no matter how hard you try. If you make the files writable, it will work fine. (I actually tried this just a few weeks ago, albeit using a zvol, and ZFS vehemently refused to import the pool. When I set the zvol to read/write instead of read-only, the pool imported fine.) Other file systems, like ext4 on Linux and probably others, handle this situation somewhat gracefully, but ZFS balks.

If you are unlucky and don't have ECC RAM installed in the system where you are importing the pool, then ZFS's attempts to correct any errors it encounters might actually make things worse, although opinions differ on whether this is actually a real risk in practice. Personally, I am of the opinion that any data I care enough about to protect with ZFS and snapshots and storage-level redundancy and backups and whatnot deserves the protection offered by ECC RAM as well, but many PCs don't have ECC RAM.

So, you can import the pool in read-only mode, with a specific alternate root to keep it from stepping on anything else's toes, but you need to be aware that it isn't necessarily truly read-only in a forensic sense. (It will, however, ensure that you don't accidentally change anything in the pool.) To do a read-only import, assuming that the pool is named tank and that the device node(s) is/are available in /dev, you would use a command like:

# zpool import -d /dev -o readonly=on -R /mnt/someplace tank

This will look in /dev for anything holding a ZFS pool with the name tank, import it, temporarily setting the pool property readonly to on (which means that all user-initiated writes will be rejected) and temporarily setting its altroot property to /mnt/someplace. (These property values are "temporary" in the sense that they are not persisted to the disk(s) as current property values, so if you export and re-import the pool without them, the values will be back to normal. They might possibly be written to the pool history though, which once the pool is imported you can look at with zpool history tank if you are so inclined.) Once the pool is imported, you will see your files under /mnt/someplace and have normal, read-only access to them, including any snapshots that are already made on the datasets in the pool.
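
If you want to double-check that those temporary property values took effect after the import, the standard query commands work as usual (again assuming the pool is named tank):

# zpool get readonly,altroot tank
# zpool history tank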

Given your example, I suspect that you would use something along the lines of:

# zpool import -d /dev -o readonly=on -R /mnt/my-fun-mountpoint zroot02

When you are done, remember to cleanly export the pool:

# zpool export tank

or perhaps

# zpool export zroot02

That will unmount all file systems and other datasets within the pool, flush all buffers (to the extent that any need flushing in the first place), mark the pool as not imported on all constituent devices, and perform any other necessary housekeeping tasks to ensure that the pool can safely be moved to a different system and imported there later.
Copy the latest (or the last known valid) version of the saved DB config back to '/data/freenas-v1.db' and reboot.
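
Assuming the old boot pool was imported as oldboot under /mnt (per the zpool import -f -R /mnt ... oldboot example above), that final copy might look like this; the data path under the imported pool is an assumption based on the standard /data location of the config DB:

Code:
# copy the old config over the fresh install's config, then clean up
cp /mnt/oldboot/data/freenas-v1.db /data/freenas-v1.db
zpool export oldboot
reboot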
 

McKoene

Cadet
Joined
May 12, 2014
Messages
3
Thanks for the info, as I am getting the same error.
The upgrade to 11.2 went fine, I moved some jails to iocage, and after a few weeks the first reboot failed, as the bootloader had been updated from GRUB to gptzfsboot. I did a BIOS update, but that didn't help. A fresh install fails with the same error:

Code:
gptzfsboot: error 32 lba 1


I'll order a small SSD and try that as a solution.

Update: I changed the boot drive to an SSD today, and the error has not reappeared. Everything is running fine now.
 
Last edited:
Joined
Jul 19, 2016
Messages
72
This happened to me now; not sure why. I was running the latest, 11.2-U1, but my only backup is 11.1-U5. What shall I do?
Shall I install 11.1-U7 to a new stick and then import, or what?

Edit:
I put the USB stick into my laptop, used Win32 Disk Imager, and read an image off it; it created an image file that was about 32 GB. The weird part is that while it's plugged in, it shows as 2 disks, both at that size.

Edit2:
Phew, it seems it's saved. The USB stick itself (SanDisk Ultra) somehow had its install corrupted, yet scanning the stick showed no errors whatsoever on it. I did a total wipe of it, then installed 11.1-U7, then restored my backup from summer 2018, and it worked. I now have access to all files again, and I can update the system back to 11.2-U1.

So I will go and get another USB stick and clone it as a backup as well. Then I won't have to spend this much time getting things back up if it happens again.
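
For anyone wanting to do the same, a raw stick-to-stick clone can be done from the FreeNAS shell; a minimal sketch, assuming the source stick is da1 and the blank target is da2 (check camcontrol devlist first, and note the target must be at least as large as the source):

Code:
# block-copy the entire source stick onto the target, padding past any read errors
dd if=/dev/da1 of=/dev/da2 bs=1m conv=noerror,sync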
 
Last edited:

Stahl

Cadet
Joined
Mar 15, 2019
Messages
2
I still have the same problem with 11.2; there is no problem with installing 11.1, though. I did some research on this "gptzfsboot: error 32 lba 1" error. It seems like 11.2 has a problem with USB 3.0; it's some sort of a bug. If you install the old version and do a manual update to 11.2 with the .tar file, you can update your NAS to the latest version, BUT if you shut down or restart your NAS, the same problem will occur again. That's really weird. Did you try to restart your NAS with 11.2? It would be really interesting to see if you still have that gptzfsboot error. But please do a backup before trying :D
 

Bill_Lyddon

Cadet
Joined
Apr 8, 2019
Messages
4
I got the exact error messages listed at the start of this thread. My mainboard supports USB 2.0, but I was using a USB 3.0 flash drive for the boot drive. The old bootloader (GRUB) allowed me to get away with this; the new one (gptzfsboot) does not. I replaced my boot media with a USB 2.0 flash drive and the problem disappeared.
 