Computer Crashed and Virtualized TrueNAS OS failed, unable to import pool on new install

Joined: Jul 25, 2022 · Messages: 4
Hello,

I appreciate any help anyone can provide. I'm running TrueNAS Core 12 as a VM on Windows Server under Hyper-V. I recently installed new RAM that turned out to be unstable; the server rebooted overnight and the VM did not recover. I went back to an old backup of the VM, but it was also unable to import the pool. I have also tried importing with a fresh installation of TrueNAS 13. The non-working VM is still saved at this point, although it does not boot.

i9-9900k
ROG Maximus XI Hero
64GB non-ECC RAM (32GB available to the VM)
The TrueNAS VM is a virtual disk on a solid-state drive.
4x 16TB drives, assigned to the TrueNAS Core VM for pool use only, connected via SATA on the motherboard. The hard drives are offline in Windows Server and passed through to the VM.

At one point in the life of the VM, I had migrated the system over from previous physical hardware.

When I attempt the import, TrueNAS sees the pool, but the import fails and states that one or more devices is currently unavailable.

Thank you for any advice you can provide.

Output from GUI import wizard:

Code:
Error importing pool
('one or more devices is currently unavailable',)

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 344, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 392, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 338, in import_pool
    zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
  File "libzfs.pyx", line 1150, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1178, in libzfs.ZFS.__import_pool
libzfs.ZFSException: one or more devices is currently unavailable
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 355, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 391, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1487, in import_pool
    await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1305, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1270, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1276, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1192, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1166, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('one or more devices is currently unavailable',)


Shell:

Code:
root@truenas[~]# zpool import
   pool: mainpool
     id: 8854095862719145977
  state: FAULTED
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        mainpool                                        FAULTED  corrupted data
          raidz1-0                                      ONLINE
            gptid/c216311e-9e7d-11ec-a947-00155d001400  ONLINE
            gptid/c2370374-9e7d-11ec-a947-00155d001400  ONLINE
            gptid/c245a6d9-9e7d-11ec-a947-00155d001400  ONLINE
            gptid/c250a3c3-9e7d-11ec-a947-00155d001400  ONLINE

root@truenas[~]# zpool import -f mainpool
cannot import 'mainpool': one or more devices is currently unavailable

root@truenas[~]# zpool status -v
  pool: boot-pool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors
root@truenas[~]# gpart list
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 134217687
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 272629760 (260M)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,f03366c2-f989-11ec-bc11-00155d001404,0x28,0x82000)
   rawuuid: f03366c2-f989-11ec-bc11-00155d001404
   rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
   label: (null)
   length: 272629760
   offset: 20480
   type: efi
   index: 1
   end: 532519
   start: 40
2. Name: da0p2
   Mediasize: 51254394880 (48G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(2,GPT,f03b13e4-f989-11ec-bc11-00155d001404,0x2082028,0x5f78000)
   rawuuid: f03b13e4-f989-11ec-bc11-00155d001404
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 51254394880
   offset: 17452519424
   type: freebsd-zfs
   index: 2
   end: 134193191
   start: 34086952
3. Name: da0p3
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(3,GPT,f0381543-f989-11ec-bc11-00155d001404,0x82028,0x2000000)
   rawuuid: f0381543-f989-11ec-bc11-00155d001404
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 17179869184
   offset: 272650240
   type: freebsd-swap
   index: 3
   end: 34086951
   start: 532520
Consumers:
1. Name: da0
   Mediasize: 68719476736 (64G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e4

Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 31251759063
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,c1e79e39-9e7d-11ec-a947-00155d001400,0x80,0x400000)
   rawuuid: c1e79e39-9e7d-11ec-a947-00155d001400
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da1p2
   Mediasize: 15998753091584 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(2,GPT,c2370374-9e7d-11ec-a947-00155d001400,0x400080,0x7467fff58)
   rawuuid: c2370374-9e7d-11ec-a947-00155d001400
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 15998753091584
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 31251759063
   start: 4194432
Consumers:
1. Name: da1
   Mediasize: 16000900661248 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

Geom name: da2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 31251759063
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da2p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,c1d7e31f-9e7d-11ec-a947-00155d001400,0x80,0x400000)
   rawuuid: c1d7e31f-9e7d-11ec-a947-00155d001400
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da2p2
   Mediasize: 15998753091584 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(2,GPT,c216311e-9e7d-11ec-a947-00155d001400,0x400080,0x7467fff58)
   rawuuid: c216311e-9e7d-11ec-a947-00155d001400
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 15998753091584
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 31251759063
   start: 4194432
Consumers:
1. Name: da2
   Mediasize: 16000900661248 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

Geom name: da3
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 31251759063
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da3p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,c1ec6d4f-9e7d-11ec-a947-00155d001400,0x80,0x400000)
   rawuuid: c1ec6d4f-9e7d-11ec-a947-00155d001400
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da3p2
   Mediasize: 15998753091584 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(2,GPT,c245a6d9-9e7d-11ec-a947-00155d001400,0x400080,0x7467fff58)
   rawuuid: c245a6d9-9e7d-11ec-a947-00155d001400
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 15998753091584
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 31251759063
   start: 4194432
Consumers:
1. Name: da3
   Mediasize: 16000900661248 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

Geom name: da4
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 31251759063
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da4p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,c1e94434-9e7d-11ec-a947-00155d001400,0x80,0x400000)
   rawuuid: c1e94434-9e7d-11ec-a947-00155d001400
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da4p2
   Mediasize: 15998753091584 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(2,GPT,c250a3c3-9e7d-11ec-a947-00155d001400,0x400080,0x7467fff58)
   rawuuid: c250a3c3-9e7d-11ec-a947-00155d001400
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 15998753091584
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 31251759063
   start: 4194432
Consumers:
1. Name: da4
   Mediasize: 16000900661248 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

root@truenas[~]# gpart show
=>       40  134217648  da0  GPT  (64G)
         40     532480    1  efi  (260M)
     532520   33554432    3  freebsd-swap  (16G)
   34086952  100106240    2  freebsd-zfs  (48G)
  134193192      24496       - free -  (12M)

=>         40  31251759024  da1  GPT  (15T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  31247564632    2  freebsd-zfs  (15T)

=>         40  31251759024  da2  GPT  (15T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  31247564632    2  freebsd-zfs  (15T)

=>         40  31251759024  da3  GPT  (15T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  31247564632    2  freebsd-zfs  (15T)

=>         40  31251759024  da4  GPT  (15T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  31247564632    2  freebsd-zfs  (15T)

root@truenas[~]# camcontrol devlist
<Msft Virtual Disk 1.0>            at scbus0 target 0 lun 0 (pass0,da0)
<WDC WD161KRYZ-01AGBB 01.0>        at scbus0 target 0 lun 1 (pass1,da1)
<WDC WD161KRYZ-01AGBB 01.0>        at scbus0 target 0 lun 2 (pass2,da2)
<WDC WD161KRYZ-01AGBB 01.0>        at scbus0 target 0 lun 3 (pass3,da3)
<WDC WD161KRYZ-01AGBB 01.0>        at scbus0 target 0 lun 4 (pass4,da4)
root@truenas[~]# glabel status
                                      Name  Status  Components
gptid/f03366c2-f989-11ec-bc11-00155d001404     N/A  da0p1
gptid/c2370374-9e7d-11ec-a947-00155d001400     N/A  da1p2
gptid/c216311e-9e7d-11ec-a947-00155d001400     N/A  da2p2
gptid/c245a6d9-9e7d-11ec-a947-00155d001400     N/A  da3p2
gptid/c250a3c3-9e7d-11ec-a947-00155d001400     N/A  da4p2
gptid/c1e94434-9e7d-11ec-a947-00155d001400     N/A  da4p1
gptid/c1ec6d4f-9e7d-11ec-a947-00155d001400     N/A  da3p1
gptid/c1d7e31f-9e7d-11ec-a947-00155d001400     N/A  da2p1
gptid/c1e79e39-9e7d-11ec-a947-00155d001400     N/A  da1p1
 

jgreco (Resident Grinch) · Joined: May 29, 2011 · Messages: 18,680
Hyper-V is not known to work well with TrueNAS. Please review the virtualization guide at


I wouldn't store valuable data on such a system.

It isn't clear what it's complaining about, but it seems likely that your next step might be to try forcing an import with the -f flag.
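
In concrete terms, forcing the import from the TrueNAS shell would look something like the sketch below. This is only a sketch: 'mainpool' is the pool name from the zpool import output above, and -R /mnt just sets the usual TrueNAS altroot; nothing here is guaranteed to succeed.

Code:
zpool import                       # list importable pools and their reported state
zpool import -f mainpool           # force the import even though the pool claims it
                                   # was last used by another system (hostid mismatch)
zpool import -f -R /mnt mainpool   # same, but mounted under the TrueNAS altroot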
 
Joined: Jul 25, 2022 · Messages: 4
Hyper-V is not known to work well with TrueNAS. Please review the virtualization guide at


I wouldn't store valuable data on such a system.

It isn't clear what it's complaining about, but it seems likely that your next step might be to try forcing an import with the -f flag.

root@truenas[~]# zpool import -f mainpool
cannot import 'mainpool': one or more devices is currently unavailable

It seems like it's expecting different or more drives than the four it was set up with? I don't know if there's a way to see what it's expecting, or a way to modify that. Would you suggest attempting to run TrueNAS on bare metal now to see what it shows on import?

I failed to see that Hyper-V was not the proper hypervisor to be using with TrueNAS, and that's definitely on me. I will work to move to bare metal either when I am able to access the pool again or when I find out that it's not going to be accessible. I'd obviously prefer the former.
 

jgreco (Resident Grinch) · Joined: May 29, 2011 · Messages: 18,680
It seems like it's expecting different or more drives than the four it was set up with? I don't know if there's a way to see what it's expecting, or a way to modify that. Would you suggest attempting to run TrueNAS on bare metal now to see what it shows on import?

Worth a try, but I suspect it won't work. I have fuzzy recollections of other Hyper-V systems corrupting pools in a similar way, but a quick search isn't leading me to any other threads right now.

The worrying bit here is the "corrupted data" in the pool view. ZFS has no tools like fsck or chkdsk; think of how much state such a tool would need to keep track of. ZFS relies on being able to maintain pool integrity on its own, and once you lose that integrity, you may lose the pool.

I don't have the time right now to dig deeper, but I do encourage you to keep researching. It's possible that you might be able to do a "zpool clear mainpool" or something like that if this is just ZFS remembering that there was a major fault on the pool.
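
To spell out the distinction (my own sketch, not a guaranteed recovery path): "zpool clear" only operates on a pool that is already imported, while the import-time recovery option is a capital -F, which asks ZFS to discard the last few transactions and try again.

Code:
zpool clear mainpool        # clears error state on an already-imported pool
zpool import -F mainpool    # recovery mode: roll back the last few transactions and import
zpool import -Fn mainpool   # dry run: report whether the rewind would succeed, change nothing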


Either way, the point here is that the Hyper-V I/O layer doesn't seem to be sufficiently up to snuff at avoiding I/O errors and/or coping with timeouts or other issues. I do not have a Hyper-V system and am guessing at this based on the issues I've seen reported by others. Refunds for this advice are limited to the amount you paid for it. ;-)
 
Joined: Jul 25, 2022 · Messages: 4
I don't have the time right now to dig deeper, but I do encourage you to keep researching. It's possible that you might be able to do a "zpool clear mainpool" or something like that if this is just ZFS remembering that there was a major fault on the pool.
Unfortunately that doesn't work; I believe that command only works when the pool is already imported:

Code:
root@truenas[~]# zpool clear mainpool
cannot open 'mainpool': no such pool
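
One thing I have not tried yet, sketched below purely as an idea (not something suggested in the thread), is a read-only import, so ZFS does not have to write anything on the way in:

Code:
# Attempt a forced, read-only import under the TrueNAS altroot.
zpool import -f -o readonly=on -R /mnt mainpool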


Try a capital F: zpool import -F mainpool.

Code:
root@truenas[~]# zpool import -F mainpool
cannot import 'mainpool': pool was previously in use from another system.
Last accessed by  (hostid=c7ac94b) at Wed Dec 31 16:00:00 1969
The pool can be imported, use 'zpool import -f' to import the pool.
root@truenas[~]# zpool import -Ff mainpool
cannot import 'mainpool': one or more devices is currently unavailable
root@truenas[~]# zpool import -Ffn mainpool
root@truenas[~]#
root@truenas[~]#echo $?
1
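
The "Last accessed by  (hostid=c7ac94b)" line with the blank hostname and 1969 timestamp looks odd, so one extra check (my own sketch, not advice from the thread) is to compare the running system's hostid with whatever is recorded in the ZFS labels:

Code:
sysctl kern.hostid                                    # hostid of the running TrueNAS system
zdb -l /dev/gptid/c216311e-9e7d-11ec-a947-00155d001400 | grep -E 'hostid|hostname|txg'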


I just wish I could get more clarification on which devices it thinks are missing. This output makes it seem like the drives it needs are available (see the zdb sketch after the output below).


Code:
root@truenas[~]# zpool import
   pool: mainpool
     id: 8854095862719145977
  state: FAULTED
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:


        mainpool                                        FAULTED  corrupted data
          raidz1-0                                      ONLINE
            gptid/c216311e-9e7d-11ec-a947-00155d001400  ONLINE
            gptid/c2370374-9e7d-11ec-a947-00155d001400  ONLINE
            gptid/c245a6d9-9e7d-11ec-a947-00155d001400  ONLINE
            gptid/c250a3c3-9e7d-11ec-a947-00155d001400  ONLINE
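
For reference, one way to see exactly which member devices the pool expects is to dump a vdev label with zdb. This is only a sketch using the gptid and partition names from the outputs above; the vdev_tree section of the label lists every child vdev with its guid and last-known path, and a damaged member will typically fail to return readable labels.

Code:
# Dump the ZFS label from one pool member; the vdev_tree section lists the
# guid and path of every device the pool expects to find.
zdb -l /dev/gptid/c216311e-9e7d-11ec-a947-00155d001400

# Checking each data partition individually shows whether any one of them
# fails to return readable labels (i.e. is the "unavailable" device).
zdb -l /dev/da1p2
zdb -l /dev/da2p2
zdb -l /dev/da3p2
zdb -l /dev/da4p2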



If I shut down the OS and disconnect a drive, it properly indicates that drive is missing:

Code:
root@truenas[~]# zpool import
   pool: mainpool
     id: 8854095862719145977
  state: FAULTED
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:


        mainpool                                        FAULTED  corrupted data
          raidz1-0                                      DEGRADED
            gptid/c216311e-9e7d-11ec-a947-00155d001400  ONLINE
            gptid/c2370374-9e7d-11ec-a947-00155d001400  ONLINE
            gptid/c245a6d9-9e7d-11ec-a947-00155d001400  UNAVAIL  cannot open
            gptid/c250a3c3-9e7d-11ec-a947-00155d001400  ONLINE
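
Since the disks clearly show up when they are present, one more thing that may be worth a try (an assumption on my part, not something suggested above) is pointing the import explicitly at the gptid device directory, in case the default device scan is confused by stale paths:

Code:
zpool import -d /dev/gptid              # search only the gptid device nodes for pool members
zpool import -d /dev/gptid -f mainpool  # then force the import using that device directory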
 

Whattteva (Wizard) · Joined: Mar 5, 2013 · Messages: 1,824
Posts like these are what make me thankful I went bare metal. I've lost power at least 10 times due to little kids (nephews) pressing/pulling anything in their view, and I have never once failed to import the pool. I wish you luck in recovering your data.
 
Joined: Jul 25, 2022 · Messages: 4
Posts like these are what make me thankful I went bare metal. I've lost power at least 10 times due to little kids (nephews) pressing/pulling anything in their view, and I have never once failed to import the pool. I wish you luck in recovering your data.
Lol, I'm glad to help make you feel good about yourself. :D Yeah, I shouldn't have done it this way. Definitely my mistake, and one I'll be remedying, with or without my data.
 

Whattteva (Wizard) · Joined: Mar 5, 2013 · Messages: 1,824
Lol, I'm glad to help make you feel good about yourself. :D Yeah, I shouldn't have done it this way. Definitely my mistake, and one I'll be remedying, with or without my data.
Well, it helps that I also run enterprise hardware. Whether that has any effect on reliability or not, I have no idea... but it does give me peace of mind psychologically, I suppose.

Good luck. I hope you find more knowledgeable people who have more experience than n00bs like me. I do hope you will be able to recover it, because losing 8 TB of data sucks (my pool is a similar size).
 