SOLVED: Can't import ZFS volume (NVMe device)


airflow

Contributor
Joined
May 29, 2014
Messages
111
Hi,

I have the problem that, since the recent stable update, one of my pools can't be properly imported any more. Hardware is an ASRock E3C226D2I, Intel Core i3-4130T (2x 2.90GHz), Kingston ValueRAM 16GB DIMM kit (ECC capable), an Intel P3600 as the drive in question, and some other hard drives which are not relevant here.

For the volume in question, the GUI's volume manager showed "couldn't determine space (0)" (or similar). I tried "import volume" in the GUI, but the drop-down list offered no volumes to import. "zpool import" on the CLI just output nothing. I then detached the volume in the GUI, which worked (without wiping the disks, of course). My idea was to try importing the volume again via the CLI, to find out what is going wrong here. But still, "zpool import" just outputs nothing.

I have to say I don't really know how to troubleshoot this correctly. I read the documentation on importing volumes, but that didn't really help. The disk is correctly initialized at boot, and dmesg shows it normally. The volume manager offers to create a new volume with this disk, but I don't want that; I'd rather import the existing pool on this disk.
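For reference, this is roughly the kind of check I'd expect to narrow things down (a sketch only; nvd0 is the device node of the Intel P3600, and behaviour may differ between FreeNAS versions):

Code:
zpool import              # scan the default device directory for importable pools
zpool import -d /dev      # point the scan at /dev explicitly
gpart show                # list the partition tables of all disks the kernel sees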

I have an idea how I triggered this. Before the reboot I hammered the drive with lots of write/read tests, and afterwards I deleted 200GB of test data. FreeNAS shows strange TRIM-related behaviour with this drive, and deleting loads of data is exactly what kicks TRIM off. I guess I rebooted too early, while trimming was still in progress... My precious data is on another, redundant store, but I'd still like to learn from this experience: which options do I have here? Why is the import failing?

Thanks for hints!
airflow
 

airflow

Contributor
Joined
May 29, 2014
Messages
111
Do I understand correctly that this command should show me the current partition table of the drive:

Code:
[root@fractal] /mnt/DATA/shares/persistent# gpart show nvd0
gpart: No such geom: nvd0.


Does this mean that there seems to be no partition table on this drive?

Another snippet:

Code:
[root@fractal] /mnt/DATA/shares/persistent# camcontrol devlist
<WDC WD30EFRX-68EUZN0 80.00A80>    at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD30EFRX-68EUZN0 80.00A80>    at scbus1 target 0 lun 0 (pass1,ada1)
<WDC WD30EFRX-68EUZN0 82.00A82>    at scbus2 target 0 lun 0 (pass2,ada2)
<WDC WD30EFRX-68EUZN0 80.00A80>    at scbus3 target 0 lun 0 (pass3,ada3)
<WDC WD30EFRX-68EUZN0 80.00A80>    at scbus4 target 0 lun 0 (pass4,ada4)
<WDC WD30EFRX-68EUZN0 82.00A82>    at scbus5 target 0 lun 0 (pass5,ada5)
< Patriot Memory PMAP>             at scbus7 target 0 lun 0 (pass6,da0)
< Patriot Memory PMAP>             at scbus8 target 0 lun 0 (pass7,da1)


The device is not listed here. I'm not sure if it should be listed here. It is initialized normally during boot (visible in dmesg), and the FreeNAS volume manager offers to build a new volume with the device.
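If I read the FreeBSD man pages right, that's actually expected: NVMe drives attach through the nvme/nvd driver rather than through CAM, so camcontrol never sees them. The NVMe equivalent should be something like this (my understanding of the docs, not verified on every version):

Code:
nvmecontrol devlist       # list NVMe controllers and their namespaces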
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, I think I know why the zpool won't mount... your slog is missing. You can probably mount the zpool from the CLI with the -m parameter, remove the slog, then reboot and the zpool should mount properly and automatically.

As for why your NVMe disk suddenly did what it did, I have no idea.
 

airflow

Contributor
Joined
May 29, 2014
Messages
111
cyberjock said:
Well, I think I know why the zpool won't mount... your slog is missing. You can probably mount the zpool from the CLI with the -m parameter, remove the slog, then reboot and the zpool should mount properly and automatically.

Hmm, I just tried that. The command "zpool import -m" does not give any output either. Is it not necessary to explicitly name the device?

Do you know if the command "gpart show nvd0" should give me some info about partitioning on the drive?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You do need to specify the zpool name and the mountpoint.

Something like...

Code:
zpool import -m -f tank -R /mnt
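
Once it imports, the slog removal step would look something like this (a sketch; the long number is a placeholder for the GUID that zpool status prints for the missing log device):

Code:
zpool status tank                    # note the GUID shown for the UNAVAIL log device
zpool remove tank 1234567890123456   # remove the missing slog by its GUID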
 

airflow

Contributor
Joined
May 29, 2014
Messages
111
I tried zpool import in different variants; the answer is always the same: "cannot import 'SSD': no such pool available". So it seems the system can't find any device containing a pool with that name. Since I don't have to name the device for the import command, I assume it checks every available device for a ZFS pool, right? Is there a way to explicitly check a specific device? My only idea is "gpart show nvd0", which I assume should tell me the partitioning of the device, but it returns nothing (gpart: No such geom: nvd0).
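From the zdb man page it looks like the labels can be read straight off a device, which would at least tell me whether any ZFS metadata survives there (an untested assumption on my part):

Code:
zdb -l /dev/nvd0          # print the four ZFS vdev labels, if any are present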
 

airflow

Contributor
Joined
May 29, 2014
Messages
111
For some reason I'm still assuming it has to do with partitioning.

If I look at one of the disks of a working pool, I see an output like this:
Code:
[root@fractal] ~# gpart show ada0
=>        34  5860533101  ada0  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)


which tells me that FreeNAS uses partitioning on its disks (I just read that you can also use ZFS directly on disks, without partitioning). With the disk in question, "gpart show" just throws "No such geom: nvd0". When I look directly at the disk (for example with "cat /dev/nvd0"), I see there's data on it. So my assumption is that the partitioning is no longer valid here, which might be why the import command fails. Would it be an option to recreate the partitioning (the same way FreeNAS would do it) and see if the import command can find the pool then?
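One more thing that might be worth trying first: gpart keeps a backup copy of the GPT at the end of the disk and can restore a damaged primary table from it (my reading of gpart(8); with no geom detected at all there may be nothing left to recover from, though):

Code:
gpart recover nvd0        # try to restore a damaged GPT from its backup copy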
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can't use ZFS directly on the disks without partitioning. Well, you can, but then you undercut the system's ability to identify drives, and it'll make a royal mess, and FreeNAS isn't set up to do that, so for all sane intents and purposes, you can't do that.

Twiddling around with the disk's partition table by hand is certainly possible but carries some risk. Possibly something along the lines of:

Code:
gpart create -s GPT nvd0
gpart add -b 128 -s 4194304 -t freebsd-swap nvd0
gpart add -b 4194432 -t freebsd-zfs nvd0

You'd probably also want to back up the existing contents of nvd0 prior to attempting such hackery, or maybe use the pending commit flag to experiment without actually modifying.
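The pending-commit route would be roughly this, staging everything with -f x and only writing if the result looks sane (a sketch of gpart(8)'s flag mechanism, not something I've run against this exact drive):

Code:
gpart create -f x -s GPT nvd0                          # stage a new GPT, nothing written yet
gpart add -f x -b 128 -s 4194304 -t freebsd-swap nvd0
gpart add -f x -b 4194432 -t freebsd-zfs nvd0
gpart show nvd0                                        # inspect the staged layout
gpart undo nvd0                                        # discard the staged changes, or...
gpart commit nvd0                                      # ...write them to disk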
 

airflow

Contributor
Joined
May 29, 2014
Messages
111
jgreco said:
Code:
gpart create -s GPT nvd0
gpart add -b 128 -s 4194304 -t freebsd-swap nvd0
gpart add -b 4194432 -t freebsd-zfs nvd0
I recreated the partition table with these commands and voilà, the pool imports without any problems! :) Everything's there. Thanks, jgreco, for your competent contribution. I have to say I'm pleased that I had the right sense about the root cause of the problem from the beginning; it just took somebody to actually read, understand, and respond to it... Thanks, you saved me some time. I'll set this to solved. Have a good night!
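For anyone following along, the sanity check afterwards would be roughly this (SSD being the pool name):

Code:
gpart show nvd0           # table now shows the swap and freebsd-zfs partitions
zpool status SSD          # pool ONLINE with no missing devices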
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I would suggest you immediately back up all the data, then wipe the device and start fresh.
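Something along these lines, replicating the whole pool to wherever you have space ("backup" here is a placeholder pool name):

Code:
zfs snapshot -r SSD@rescue                                  # recursive snapshot of the whole pool
zfs send -R SSD@rescue | zfs receive -F backup/SSD-rescue   # replicate it to another pool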
 