Unable to decrypt encrypted pools after reboot (GPT partitions gone on ALL drives?)

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
This is a follow-up to my original post this past Saturday. I am starting this new thread because I have learned a lot of new information over the past few days and have made some progress but have hit a wall.

My hardware setup:

Dell r720xd​
128GB ECC memory​
SAS9200-8e-hp JBOD controller​
SAS9207-8i JBOD controller​

My pool setup:

tenTB.spinners (12 10TB drives)... 2x 6-drive raid-z2 striped together | encrypted + password​
eightTB.spinners (3 8TB drives)... 1x 3-drive raid-z | encrypted + password​
fourTB.spinners... a work in progress when things went wrong | NOT encrypted​

What happened:

I extended tenTB.spinners a few weeks ago and had plenty of space, so I thought I would convert the seven drives of fourTB.spinners to a 3x mirrored vdev layout to see whether performance improved. I should mention I was having a little difficulty: although all the drives reported the same size in the webGUI, running geom disk list showed one was actually 4000785948160 bytes while the others were 4000787030016 bytes. There were a lot of webGUI mouse clicks on this pool, first trying to start with the smaller drive and then adding one of the others (not allowed). It would allow 3x 2-drive mirrors striped together with the smaller drive as a spare. I didn't think a rebuild would work, so I believe this is when I shut down the machine, pulled a drive, and started things back up (my memory is honestly a little cloudy here, but I DID reboot the machine).​
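For reference, the gap between those two reported sizes is tiny, which is presumably why the webGUI rounded them to the same size while still refusing to mirror the smaller drive with a larger one:

```shell
# gap between the two sizes `geom disk list` reported, in bytes
echo $((4000787030016 - 4000785948160))   # → 1081856 (just over 1 MiB)
```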
After I was back up, I pretty quickly noticed that tenTB.spinners & eightTB.spinners were locked. This was expected, and I clicked the lock button as I have done many times before so I could enter my password. This time, unfortunately, it threw the following error:​
Code:
Error: concurrent.futures.process._RemoteTraceback:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 95, in main_worker
    res = loop.run_until_complete(coro)
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 51, in _run
    return await self._call(name, serviceobj, methodobj, params=args, job=job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 43, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 43, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 964, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 382, in import_pool
    zfs.import_pool(found, found.name, options, any_host=any_host)
  File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 380, in import_pool
    raise CallError(f'Pool {name_or_guid} not found.', errno.ENOENT)
middlewared.service_exception.CallError: [ENOENT] Pool 49485231544439643 not found.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1656, in unlock
    'cachefile': ZPOOL_CACHE_FILE,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1127, in call
    app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1074, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1094, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1029, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1003, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [ENOENT] Pool 49485231544439643 not found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 349, in run
    await self.future
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 386, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 960, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1668, in unlock
    raise CallError(msg)
middlewared.service_exception.CallError: [EFAULT] Pool could not be imported: 3 devices failed to decrypt.

I should mention that tenTB.spinners & eightTB.spinners were still showing up in the webGUI.​
I did not accidentally wipe the drives:​
[screenshot attached: 2020.02.08.at.12.14.33.ScreenShot.from.RYZEN-2700X.png]​
After trying to decrypt several times, I posted under another thread with a similar problem. I want to thank the community (especially @PhiloEpisteme), but after trying those suggestions I was still stuck, so I started doing some additional online research & testing.​

The geli attach command:

Whenever I tried this I would get a "Cannot read metadata" error:​

Code:
root@freenas[~/downloads]# geli attach -k pool_eightTB.spinners_encryption.key -j pool_eightTB.spinners_encryption.pw /dev/da0
geli: Cannot read metadata from /dev/da0: Invalid argument.

At this point I decided to build a fresh 3-drive raid-z pool using three drives from the old/unused fourTB.spinners pool. After a couple of attempts I WAS able to decrypt the drives individually:​
Code:
root@freenas[~/downloads]# ls -al /dev/da*
crw-r-----  1 root  operator  0x9e Feb 13 23:02 /dev/da0
crw-r-----  1 root  operator  0x9f Feb 13 23:02 /dev/da0p1
crw-r-----  1 root  operator  0xa0 Feb 13 23:02 /dev/da0p2
crw-r-----  1 root  operator  0xa1 Feb 13 23:02 /dev/da1
crw-r-----  1 root  operator  0xb0 Feb 13 23:02 /dev/da1p1
crw-r-----  1 root  operator  0xb1 Feb 13 23:02 /dev/da1p2
crw-r-----  1 root  operator  0xa2 Feb 13 23:02 /dev/da2
crw-r-----  1 root  operator  0xb2 Feb 13 23:02 /dev/da2p1
crw-r-----  1 root  operator  0xb3 Feb 13 23:02 /dev/da2p2
crw-r-----  1 root  operator  0xa3 Feb 13 23:02 /dev/da3
crw-r-----  1 root  operator  0xb4 Feb 13 23:02 /dev/da3p1
crw-r-----  1 root  operator  0xb5 Feb 13 23:02 /dev/da3p2
crw-r-----  1 root  operator  0xad Feb 13 23:02 /dev/da4
crw-r-----  1 root  operator  0xb6 Feb 13 23:02 /dev/da4p1
crw-r-----  1 root  operator  0xb7 Feb 13 23:02 /dev/da4p2

Code:
root@freenas[~/downloads]# geli attach -k pool_fourTBs_encryption.key -j pool_fourTBs_encryption.pw /dev/da0
geli: Cannot read metadata from /dev/da0: Invalid argument.
root@freenas[~/downloads]# geli attach -k pool_fourTBs_encryption.key -j pool_fourTBs_encryption.pw /dev/da0p1
geli: Cannot read metadata from /dev/da0p1: Invalid argument.
root@freenas[~/downloads]# geli attach -k pool_fourTBs_encryption.key -j pool_fourTBs_encryption.pw /dev/da0p2
root@freenas[~/downloads]# geli attach -k pool_fourTBs_encryption.key -j pool_fourTBs_encryption.pw /dev/da1p2
root@freenas[~/downloads]# geli attach -k pool_fourTBs_encryption.key -j pool_fourTBs_encryption.pw /dev/da2p2

* pool_fourTBs_encryption.pw is a single-line text file containing the password​
* after decrypting the drives I was able to import the pool in the webGUI via Pool -> Add -> Import -> "No" to decrypt (select unencrypted) -> select pool in dropdown​
* note that geli will not decrypt /dev/da0 or /dev/da0p1, but it WILL decrypt /dev/da0p2​
*** The key is the "p2" at the end ***
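For anyone following along: as I understand it, GELI stores its metadata in the last sector of the provider it was initialized on, which would explain why attaching the whole disk (/dev/da0) fails but the freebsd-zfs partition (/dev/da0p2) works. Also, since geli attach -j reads the passphrase from a file, I create that file with printf rather than echo so there is no trailing newline to worry about (I'm not sure whether geli would strip one). The passphrase below is obviously a placeholder:

```shell
# write the passphrase with no trailing newline (printf, not echo)
printf '%s' 'MySecretPassphrase' > pool_fourTBs_encryption.pw
# sanity check: byte count equals passphrase length, so no stray newline
wc -c < pool_fourTBs_encryption.pw
```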

Now I am pretty sure my problem can be summarized with one command.
In this case /dev/da0, /dev/da1 & /dev/da2 are the three 8TB drives that make up eightTB.spinners:
Code:
root@freenas[~]# ls -al /dev/da*
crw-r-----  1 root  operator  0x9e Feb 13 15:29 /dev/da0
crw-r-----  1 root  operator  0x9f Feb 13 15:29 /dev/da1
crw-r-----  1 root  operator  0xa0 Feb 13 15:29 /dev/da2
crw-r-----  1 root  operator  0xa1 Feb 13 15:29 /dev/da3
crw-r-----  1 root  operator  0xa3 Feb 13 15:29 /dev/da3p1
crw-r-----  1 root  operator  0xa4 Feb 13 15:29 /dev/da3p2
crw-r-----  1 root  operator  0xa2 Feb 13 15:29 /dev/da4
crw-r-----  1 root  operator  0xae Feb 13 15:29 /dev/da4p1
crw-r-----  1 root  operator  0xaf Feb 13 15:29 /dev/da4p2

Where are my *p1 & *p2 partitions??
The 3 drives that make up eightTB.spinners and the 12 drives that make up tenTB.spinners are missing these partitions. What the heck!!​
I searched around and learned a little about GPT partitions, and there is a command to recover a lost partition table: gpart recover
Unfortunately, either I don't know how to use it correctly or it will not work for me:​
Code:
root@freenas[~]# gpart recover /dev/da0
gpart: arg0 'da0': Invalid argument

So this is where I am right now. There are some risky ideas on the FreeBSD forums involving the dd command, but I thought I would reach out and see if anybody has any other ideas.
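One low-risk check before any dd surgery (this is a sketch of the idea, not something I have run on the real disks): GPT keeps a backup header in the very last sector of the disk, starting with the 8-byte ASCII signature "EFI PART". If that backup survived, gpart recover should have something to rebuild from. Demonstrated here against a scratch file; on the real system the device would be /dev/da0 and the skip offset its sector count minus one (dmesg reports 15628053168 sectors):

```shell
# build a 100-sector fake disk and plant a backup-GPT signature in its last sector
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=100 2>/dev/null
printf 'EFI PART' | dd of="$img" bs=512 seek=99 conv=notrunc 2>/dev/null
# read the last sector back; an intact backup header starts with "EFI PART"
dd if="$img" bs=512 skip=99 count=1 2>/dev/null | head -c 8   # → EFI PART
```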

I will post some additional commands I ran on the subject system in a follow-up post.
 

theyost
Code:
root@freenas[~]# zpool status -v
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da3p2   ONLINE       0     0     0
            da4p2   ONLINE       0     0     0

errors: No known data errors

Code:
root@freenas[~]# zpool import
root@freenas[~]# zpool import -a

Code:
root@freenas[~]# camcontrol devlist
<ATA WDC WD80EFZX-68U 0A83>        at scbus0 target 14 lun 0 (pass0,da0)
<ATA WDC WD80EFZX-68U 0A83>        at scbus0 target 15 lun 0 (pass1,da1)
<ATA WDC WD80EFZX-68U 0A83>        at scbus0 target 16 lun 0 (pass2,da2)
<SanDisk Cruzer Spark 1.00>        at scbus2 target 0 lun 0 (pass3,da3)
<SanDisk Cruzer Spark 1.00>        at scbus3 target 0 lun 0 (pass4,da4)

Code:
root@freenas[~]# glabel status
                                      Name  Status  Components
gptid/a65ac8bc-4eb6-11ea-95d9-246e966daee0     N/A  da3p1
gptid/a71d065c-4eb6-11ea-95d9-246e966daee0     N/A  da4p1

Code:
root@freenas[~]# gpart list
Geom name: da3
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 60088279
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da3p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   efimedia: HD(1,GPT,a65ac8bc-4eb6-11ea-95d9-246e966daee0,0x28,0x400)
   rawuuid: a65ac8bc-4eb6-11ea-95d9-246e966daee0
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: da3p2
   Mediasize: 30752636928 (29G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 544768
   Mode: r1w1e1
   efimedia: HD(2,GPT,a6a4f0d8-4eb6-11ea-95d9-246e966daee0,0x428,0x3948000)
   rawuuid: a6a4f0d8-4eb6-11ea-95d9-246e966daee0
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 30752636928
   offset: 544768
   type: freebsd-zfs
   index: 2
   end: 60064807
   start: 1064
Consumers:
1. Name: da3
   Mediasize: 30765219840 (29G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: da4
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 60088279
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da4p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   efimedia: HD(1,GPT,a71d065c-4eb6-11ea-95d9-246e966daee0,0x28,0x400)
   rawuuid: a71d065c-4eb6-11ea-95d9-246e966daee0
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: da4p2
   Mediasize: 30752636928 (29G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 544768
   Mode: r1w1e1
   efimedia: HD(2,GPT,a7661cf2-4eb6-11ea-95d9-246e966daee0,0x428,0x3948000)
   rawuuid: a7661cf2-4eb6-11ea-95d9-246e966daee0
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 30752636928
   offset: 544768
   type: freebsd-zfs
   index: 2
   end: 60064807
   start: 1064
Consumers:
1. Name: da4
   Mediasize: 30765219840 (29G)
   Sectorsize: 512
   Mode: r1w1e2

Code:
root@freenas[~]# gpart show
=>      40  60088240  da3  GPT  (29G)
        40      1024    1  freebsd-boot  (512K)
      1064  60063744    2  freebsd-zfs  (29G)
  60064808     23472       - free -  (11M)

=>      40  60088240  da4  GPT  (29G)
        40      1024    1  freebsd-boot  (512K)
      1064  60063744    2  freebsd-zfs  (29G)
  60064808     23472       - free -  (11M)

Code:
root@freenas[~]# ls -al /dev/da*
crw-r-----  1 root  operator  0x9e Feb 13 15:29 /dev/da0
crw-r-----  1 root  operator  0x9f Feb 13 15:29 /dev/da1
crw-r-----  1 root  operator  0xa0 Feb 13 15:29 /dev/da2
crw-r-----  1 root  operator  0xa1 Feb 13 15:29 /dev/da3
crw-r-----  1 root  operator  0xa3 Feb 13 15:29 /dev/da3p1
crw-r-----  1 root  operator  0xa4 Feb 13 15:29 /dev/da3p2
crw-r-----  1 root  operator  0xa2 Feb 13 15:29 /dev/da4
crw-r-----  1 root  operator  0xae Feb 13 15:29 /dev/da4p1
crw-r-----  1 root  operator  0xaf Feb 13 15:29 /dev/da4p2

Code:
root@freenas[~]# gpart show /dev/da0
gpart: No such geom: /dev/da0.
root@freenas[~]# gpart show /dev/da1
gpart: No such geom: /dev/da1.
root@freenas[~]# gpart show /dev/da2
gpart: No such geom: /dev/da2.
root@freenas[~]# gpart show /dev/da3
=>      40  60088240  da3  GPT  (29G)
        40      1024    1  freebsd-boot  (512K)
      1064  60063744    2  freebsd-zfs  (29G)
  60064808     23472       - free -  (11M)

root@freenas[~]# gpart show /dev/da4
=>      40  60088240  da4  GPT  (29G)
        40      1024    1  freebsd-boot  (512K)
      1064  60063744    2  freebsd-zfs  (29G)
  60064808     23472       - free -  (11M)

Code:
root@freenas[~]# egrep 'da[0-9]|cd[0-9]' /var/run/dmesg.boot
FreeBSD 11.3-RELEASE-p5 #0 r325575+8ed1cd24b60(HEAD): Mon Jan 27 18:07:23 UTC 2020
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 3.2.12-k> port 0xfcc0-0xfcdf mem 0xdcd00000-0xdcdfffff,0xdcff8000-0xdcffbfff irq 36 at device 0.0 on pci1
da0 at mps0 bus 0 scbus0 target 14 lun 0
da1 at mps0 bus 0 scbus0 target 15 lun 0
da2 at mps0 bus 0 scbus0 target 16 lun 0
da0: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device
da0: Serial Number R6GNK36Y
da0: 600.000MB/s transfers
da0: Command Queueing enabled
da0: 7630885MB (15628053168 512 byte sectors)
da2: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device
da2: Serial Number VKK71Z6Y
da3 at umass-sim0 bus 0 scbus2 target 0 lun 0
da2: 600.000MB/s transfers
da2: Command Queueing enabled
da2: 7630885MB (15628053168 512 byte sectors)
da1: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device
da1: Serial Number VKKNMZWY
da1: 600.000MB/s transfers
da3: <SanDisk Cruzer Spark 1.00> Removable Direct Access SPC-4 SCSI device
da3: Serial Number 4C530000321220116314
da3: 40.000MB/s transfers
da3: 29340MB (60088320 512 byte sectors)
da3: quirks=0x2<NO_6_BYTE>
da1: Command Queueing enabled
da1: 7630885MB (15628053168 512 byte sectors)
da4 at umass-sim1 bus 1 scbus3 target 0 lun 0
da4: <SanDisk Cruzer Spark 1.00> Removable Direct Access SPC-4 SCSI device
da4: Serial Number 4C530000281220116314
da4: 40.000MB/s transfers
da4: 29340MB (60088320 512 byte sectors)
da4: quirks=0x2<NO_6_BYTE>

Code:
root@freenas[~]# zdb -l /dev/da1
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
failed to unpack label 2
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3
root@freenas[~]# zdb -l /dev/da2
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
failed to unpack label 2
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3
root@freenas[~]# zdb -l /dev/da3
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
failed to unpack label 2
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3
root@freenas[~]# zdb -l /dev/da4
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
failed to unpack label 2
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3
 

theyost
Code:

root@freenas[~]# fdisk /dev/da0
******* Working on device /dev/da0 *******
parameters extracted from in-core disklabel are:
cylinders=972801 heads=255 sectors/track=63 (16065 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=972801 heads=255 sectors/track=63 (16065 blks/cyl)

fdisk: invalid fdisk partition table found
Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 165 (0xa5),(FreeBSD/NetBSD/386BSD)
    start 63, size 4294961622 (2097149 Meg), flag 80 (active)
        beg: cyl 0/ head 1/ sector 1;
        end: cyl 84/ head 254/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>
root@freenas[~]# fdisk /dev/da1
******* Working on device /dev/da1 *******
parameters extracted from in-core disklabel are:
cylinders=972801 heads=255 sectors/track=63 (16065 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=972801 heads=255 sectors/track=63 (16065 blks/cyl)

fdisk: invalid fdisk partition table found
Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 165 (0xa5),(FreeBSD/NetBSD/386BSD)
    start 63, size 4294961622 (2097149 Meg), flag 80 (active)
        beg: cyl 0/ head 1/ sector 1;
        end: cyl 84/ head 254/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>
root@freenas[~]# fdisk /dev/da2
******* Working on device /dev/da2 *******
parameters extracted from in-core disklabel are:
cylinders=972801 heads=255 sectors/track=63 (16065 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=972801 heads=255 sectors/track=63 (16065 blks/cyl)

fdisk: invalid fdisk partition table found
Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 165 (0xa5),(FreeBSD/NetBSD/386BSD)
    start 63, size 4294961622 (2097149 Meg), flag 80 (active)
        beg: cyl 0/ head 1/ sector 1;
        end: cyl 84/ head 254/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>
root@freenas[~]# fdisk /dev/da3
******* Working on device /dev/da3 *******
parameters extracted from in-core disklabel are:
cylinders=3740 heads=255 sectors/track=63 (16065 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=3740 heads=255 sectors/track=63 (16065 blks/cyl)

Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 238 (0xee),(EFI GPT)
    start 1, size 60088319 (29339 Meg), flag 80 (active)
        beg: cyl 0/ head 0/ sector 2;
        end: cyl 1023/ head 255/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>
root@freenas[~]# fdisk /dev/da4
******* Working on device /dev/da4 *******
parameters extracted from in-core disklabel are:
cylinders=3740 heads=255 sectors/track=63 (16065 blks/cyl)

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=3740 heads=255 sectors/track=63 (16065 blks/cyl)

Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 238 (0xee),(EFI GPT)
    start 1, size 60088319 (29339 Meg), flag 80 (active)
        beg: cyl 0/ head 0/ sector 2;
        end: cyl 1023/ head 255/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>

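One observation on the fdisk output above (hedged, since I am only pattern-matching): the healthy boot sticks da3/da4 show the normal protective-GPT entry (sysid 238, EFI GPT), while da0-da2 instead show a sysid 165 FreeBSD entry sized right at the 32-bit MBR sector ceiling. The arithmetic reproduces fdisk's own "2097149 Meg" figure:

```shell
# 4294961622 sectors * 512 bytes, expressed in MiB -- essentially the 2 TiB
# limit of a 32-bit MBR entry, matching fdisk's "2097149 Meg"
echo $(( 4294961622 * 512 / 1024 / 1024 ))   # → 2097149
```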

Code:

root@freenas[~]# geom disk list
Geom name: da0
Providers:
1. Name: da0
   Mediasize: 8001563222016 (7.3T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: ATA WDC WD80EFZX-68U
   lunid: 5000cca263c957a0
   ident: R6GNK36Y
   rotationrate: 5400
   fwsectors: 63
   fwheads: 255

Geom name: da1
Providers:
1. Name: da1
   Mediasize: 8001563222016 (7.3T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: ATA WDC WD80EFZX-68U
   lunid: 5000cca254f3a8ff
   ident: VKKNMZWY
   rotationrate: 5400
   fwsectors: 63
   fwheads: 255

Geom name: da2
Providers:
1. Name: da2
   Mediasize: 8001563222016 (7.3T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: ATA WDC WD80EFZX-68U
   lunid: 5000cca254ed7c85
   ident: VKK71Z6Y
   rotationrate: 5400
   fwsectors: 63
   fwheads: 255

Geom name: da3
Providers:
1. Name: da3
   Mediasize: 30765219840 (29G)
   Sectorsize: 512
   Mode: r1w1e2
   descr: SanDisk Cruzer Spark
   lunname: SanDisk Cruzer Spark    4C530000321220116314
   lunid: SanDisk Cruzer Spark    4C530000321220116314
   ident: 4C530000321220116314
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

Geom name: da4
Providers:
1. Name: da4
   Mediasize: 30765219840 (29G)
   Sectorsize: 512
   Mode: r1w1e2
   descr: SanDisk Cruzer Spark
   lunname: SanDisk Cruzer Spark    4C530000281220116314
   lunid: SanDisk Cruzer Spark    4C530000281220116314
   ident: 4C530000281220116314
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255
 

theyost
Code:

root@freenas[~]# ls -al /dev/gptid
total 1
dr-xr-xr-x   2 root  wheel      512 Feb 13 15:29 .
dr-xr-xr-x  11 root  wheel      512 Feb 13 15:29 ..
crw-r-----   1 root  operator  0xb0 Feb 13 15:29 a65ac8bc-4eb6-11ea-95d9-246e966daee0
crw-r-----   1 root  operator  0xb2 Feb 13 15:29 a71d065c-4eb6-11ea-95d9-246e966daee0


Code:

root@freenas[~]# dmesg | grep -iE "ata|ahci|geom"
mps0: SAS Address for SATA device = 3555275fc98eb760
mps0: SAS Address for SATA device = d155485fc98eb777
mps0: SAS Address for SATA device = 4f2e275fc579b377
mps0: SAS Address from SATA device = 3555275fc98eb760
mps0: SAS Address from SATA device = d155485fc98eb777
mps0: SAS Address from SATA device = 4f2e275fc579b377
da0: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device
da2: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device
da1: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device


Code:

root@freenas[~]# dmesg
Copyright (c) 1992-2019 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 11.3-RELEASE-p5 #0 r325575+8ed1cd24b60(HEAD): Mon Jan 27 18:07:23 UTC 2020
    root@tnbuild02.tn.ixsystems.com:/freenas-releng/freenas/_BE/objs/freenas-releng/freenas/_BE/os/sys/FreeNAS.amd64 amd64
FreeBSD clang version 8.0.0 (tags/RELEASE_800/final 356365) (based on LLVM 8.0.0)
VT(vga): text 80x25
CPU: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (2600.06-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x306e4  Family=0x6  Model=0x3e  Stepping=4
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x7fbee3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>
  AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  Structured Extended Features=0x281<FSGSBASE,SMEP,ERMS>
  Structured Extended Features3=0x9c000000<IBPB,STIBP,L1DFL,SSBD>
  XSAVE Features=0x1<XSAVEOPT>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr
  TSC: P-state invariant, performance statistics
real memory  = 137438953472 (131072 MB)
avail memory = 133561470976 (127374 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: <DELL   PE_SC3  >
FreeBSD/SMP: Multiprocessor System Detected: 32 CPUs
FreeBSD/SMP: 2 package(s) x 8 core(s) x 2 hardware threads
WARNING: VIMAGE (virtualized network stack) is a highly experimental feature.
ioapic1: Changing APIC ID to 1
ioapic2: Changing APIC ID to 2
ioapic0 <Version 2.0> irqs 0-23 on motherboard
ioapic1 <Version 2.0> irqs 32-55 on motherboard
ioapic2 <Version 2.0> irqs 64-87 on motherboard
SMP: AP CPU #31 Launched!
SMP: AP CPU #16 Launched!
SMP: AP CPU #6 Launched!
SMP: AP CPU #13 Launched!
SMP: AP CPU #20 Launched!
SMP: AP CPU #26 Launched!
SMP: AP CPU #15 Launched!
SMP: AP CPU #28 Launched!
SMP: AP CPU #17 Launched!
SMP: AP CPU #30 Launched!
SMP: AP CPU #1 Launched!
SMP: AP CPU #27 Launched!
SMP: AP CPU #14 Launched!
SMP: AP CPU #29 Launched!
SMP: AP CPU #18 Launched!
SMP: AP CPU #11 Launched!
SMP: AP CPU #5 Launched!
SMP: AP CPU #21 Launched!
SMP: AP CPU #12 Launched!
SMP: AP CPU #24 Launched!
SMP: AP CPU #10 Launched!
SMP: AP CPU #9 Launched!
SMP: AP CPU #7 Launched!
SMP: AP CPU #22 Launched!
SMP: AP CPU #19 Launched!
SMP: AP CPU #3 Launched!
SMP: AP CPU #25 Launched!
SMP: AP CPU #2 Launched!
SMP: AP CPU #4 Launched!
SMP: AP CPU #8 Launched!
SMP: AP CPU #23 Launched!
Timecounter "TSC-low" frequency 1300028251 Hz quality 1000
random: entropy device external interface
random: registering fast source Intel Secure Key RNG
random: fast provider: "Intel Secure Key RNG"
kbd1 at kbdmux0
mlx5en: Mellanox Ethernet driver 3.5.1 (April 2019)
nexus0
vtvga0: <VT VGA driver> on motherboard
cryptosoft0: <software crypto> on motherboard
aesni0: <AES-CBC,AES-XTS,AES-GCM,AES-ICM> on motherboard
padlock0: No ACE support.
acpi0: <DELL PE_SC3> on motherboard
acpi0: Power Button (fixed)
ipmi0: <IPMI System Interface> port 0xca8,0xcac irq 10 on acpi0
ipmi0: KCS mode found at io 0xca8 on acpi
cpu0: <ACPI CPU> on acpi0
cpu1: <ACPI CPU> on acpi0
cpu2: <ACPI CPU> on acpi0
cpu3: <ACPI CPU> on acpi0
cpu4: <ACPI CPU> on acpi0
cpu5: <ACPI CPU> on acpi0
cpu6: <ACPI CPU> on acpi0
cpu7: <ACPI CPU> on acpi0
cpu8: <ACPI CPU> on acpi0
cpu9: <ACPI CPU> on acpi0
cpu10: <ACPI CPU> on acpi0
cpu11: <ACPI CPU> on acpi0
cpu12: <ACPI CPU> on acpi0
cpu13: <ACPI CPU> on acpi0
cpu14: <ACPI CPU> on acpi0
cpu15: <ACPI CPU> on acpi0
cpu16: <ACPI CPU> on acpi0
cpu17: <ACPI CPU> on acpi0
cpu18: <ACPI CPU> on acpi0
cpu19: <ACPI CPU> on acpi0
cpu20: <ACPI CPU> on acpi0
cpu21: <ACPI CPU> on acpi0
cpu22: <ACPI CPU> on acpi0
cpu23: <ACPI CPU> on acpi0
cpu24: <ACPI CPU> on acpi0
cpu25: <ACPI CPU> on acpi0
cpu26: <ACPI CPU> on acpi0
cpu27: <ACPI CPU> on acpi0
cpu28: <ACPI CPU> on acpi0
cpu29: <ACPI CPU> on acpi0
cpu30: <ACPI CPU> on acpi0
cpu31: <ACPI CPU> on acpi0
atrtc0: <AT realtime clock> port 0x70-0x7f irq 8 on acpi0
atrtc0: registered as a time-of-day clock, resolution 1.000000s
Event timer "RTC" frequency 32768 Hz quality 0
attimer0: <AT timer> port 0x40-0x5f irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 950
Event timer "HPET" frequency 14318180 Hz quality 350
Event timer "HPET1" frequency 14318180 Hz quality 340
Event timer "HPET2" frequency 14318180 Hz quality 340
Event timer "HPET3" frequency 14318180 Hz quality 340
Event timer "HPET4" frequency 14318180 Hz quality 340
Event timer "HPET5" frequency 14318180 Hz quality 340
Event timer "HPET6" frequency 14318180 Hz quality 340
Event timer "HPET7" frequency 14318180 Hz quality 340
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x808-0x80b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
pcib1: <ACPI PCI-PCI bridge> irq 53 at device 1.0 on pci0
pci1: <ACPI PCI bus> on pcib1
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 3.2.12-k> port 0xfcc0-0xfcdf mem 0xdcd00000-0xdcdfffff,0xdcff8000-0xdcffbfff irq 36 at device 0.0 on pci1
ix0: Using MSI-X interrupts with 9 vectors
ix0: Ethernet address: 24:6e:96:6d:ae:e0
ix0: PCI Express Bus: Speed 5.0GT/s Width x8
ix1: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 3.2.12-k> port 0xfce0-0xfcff mem 0xdce00000-0xdcefffff,0xdcffc000-0xdcffffff irq 34 at device 0.1 on pci1
ix1: Using MSI-X interrupts with 9 vectors
ix1: Ethernet address: 24:6e:96:6d:ae:e2
ix1: PCI Express Bus: Speed 5.0GT/s Width x8
pcib2: <ACPI PCI-PCI bridge> irq 53 at device 2.0 on pci0
pci2: <ACPI PCI bus> on pcib2
pcib3: <ACPI PCI-PCI bridge> irq 53 at device 2.2 on pci0
pci3: <ACPI PCI bus> on pcib3
pcib4: <ACPI PCI-PCI bridge> irq 53 at device 3.0 on pci0
pci4: <ACPI PCI bus> on pcib4
pcib5: <PCI-PCI bridge> irq 16 at device 17.0 on pci0
pci5: <PCI bus> on pcib5
pci0: <simple comms> at device 22.0 (no driver attached)
pci0: <simple comms> at device 22.1 (no driver attached)
ehci0: <Intel Patsburg USB 2.0 controller> mem 0xdf8fd000-0xdf8fd3ff irq 23 at device 26.0 on pci0
usbus0: EHCI version 1.0
usbus0 on ehci0
usbus0: 480Mbps High Speed USB v2.0
pcib6: <ACPI PCI-PCI bridge> at device 28.0 on pci0
pci6: <ACPI PCI bus> on pcib6
pcib7: <ACPI PCI-PCI bridge> irq 16 at device 28.4 on pci0
pci7: <ACPI PCI bus> on pcib7
igb0: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> mem 0xdde80000-0xddefffff,0xddff8000-0xddffbfff irq 19 at device 0.0 on pci7
igb0: Using MSIX interrupts with 9 vectors
igb0: Ethernet address: 24:6e:96:6d:ae:e4
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: Bound queue 2 to cpu 2
igb0: Bound queue 3 to cpu 3
igb0: Bound queue 4 to cpu 4
igb0: Bound queue 5 to cpu 5
igb0: Bound queue 6 to cpu 6
igb0: Bound queue 7 to cpu 7
igb1: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> mem 0xddf00000-0xddf7ffff,0xddffc000-0xddffffff irq 18 at device 0.1 on pci7
igb1: Using MSIX interrupts with 9 vectors
igb1: Ethernet address: 24:6e:96:6d:ae:e5
igb1: Bound queue 0 to cpu 8
igb1: Bound queue 1 to cpu 9
igb1: Bound queue 2 to cpu 10
igb1: Bound queue 3 to cpu 11
igb1: Bound queue 4 to cpu 12
igb1: Bound queue 5 to cpu 13
igb1: Bound queue 6 to cpu 14
igb1: Bound queue 7 to cpu 15
pcib8: <ACPI PCI-PCI bridge> irq 19 at device 28.7 on pci0
pci8: <ACPI PCI bus> on pcib8
pcib9: <PCI-PCI bridge> at device 0.0 on pci8
pci9: <PCI bus> on pcib9
pcib10: <PCI-PCI bridge> at device 0.0 on pci9
pci10: <PCI bus> on pcib10
pcib11: <PCI-PCI bridge> at device 0.0 on pci10
pci11: <PCI bus> on pcib11
vgapci0: <VGA-compatible display> mem 0xd8000000-0xd8ffffff,0xdeffc000-0xdeffffff,0xde000000-0xde7fffff irq 19 at device 0.0 on pci11
vgapci0: Boot video device
pcib12: <PCI-PCI bridge> at device 1.0 on pci9
pci12: <PCI bus> on pcib12
ehci1: <Intel Patsburg USB 2.0 controller> mem 0xdf8fe000-0xdf8fe3ff irq 22 at device 29.0 on pci0
usbus1: EHCI version 1.0
usbus1 on ehci1
usbus1: 480Mbps High Speed USB v2.0
pcib13: <PCI-PCI bridge> at device 30.0 on pci0
pci13: <PCI bus> on pcib13
isab0: <PCI-ISA bridge> at device 31.0 on pci0
isa0: <ISA bus> on isab0
pcib14: <ACPI Host-PCI bridge> on acpi0
pci14: <ACPI PCI bus> on pcib14
pcib15: <ACPI PCI-PCI bridge> irq 85 at device 1.0 on pci14
pci15: <ACPI PCI bus> on pcib15
pcib16: <ACPI PCI-PCI bridge> irq 85 at device 2.0 on pci14
pci16: <ACPI PCI bus> on pcib16
pcib17: <ACPI PCI-PCI bridge> irq 85 at device 3.0 on pci14
pci17: <ACPI PCI bus> on pcib17
mps0: <Avago Technologies (LSI) SAS2308> port 0xdc00-0xdcff mem 0xd4ff0000-0xd4ffffff,0xd4f80000-0xd4fbffff irq 80 at device 0.0 on pci17
mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 5a85c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,MSIXIndex,HostDisc>
pcib18: <ACPI PCI-PCI bridge> irq 85 at device 3.2 on pci14
pci18: <ACPI PCI bus> on pcib18
pcib19: <ACPI Host-PCI bridge> on acpi0
pci19: <ACPI PCI bus> on pcib19
pci19: <dasp, performance counters> at device 14.1 (no driver attached)
pci19: <dasp, performance counters> at device 19.1 (no driver attached)
pci19: <dasp, performance counters> at device 19.5 (no driver attached)
pcib20: <ACPI Host-PCI bridge> on acpi0
pci20: <ACPI PCI bus> on pcib20
pci20: <dasp, performance counters> at device 14.1 (no driver attached)
pci20: <dasp, performance counters> at device 19.1 (no driver attached)
pci20: <dasp, performance counters> at device 19.5 (no driver attached)
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0
ichwd0: <Intel Patsburg watchdog timer> on isa0
orm0: <ISA Option ROMs> at iomem 0xc0000-0xc7fff,0xec000-0xeffff on isa0
coretemp0: <CPU On-Die Thermal Sensors> on cpu0
est0: <Enhanced SpeedStep Frequency Control> on cpu0
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est0 attach returned 6
coretemp1: <CPU On-Die Thermal Sensors> on cpu1
est1: <Enhanced SpeedStep Frequency Control> on cpu1
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est1 attach returned 6
coretemp2: <CPU On-Die Thermal Sensors> on cpu2
est2: <Enhanced SpeedStep Frequency Control> on cpu2
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est2 attach returned 6
coretemp3: <CPU On-Die Thermal Sensors> on cpu3
est3: <Enhanced SpeedStep Frequency Control> on cpu3
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est3 attach returned 6
coretemp4: <CPU On-Die Thermal Sensors> on cpu4
est4: <Enhanced SpeedStep Frequency Control> on cpu4
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est4 attach returned 6
coretemp5: <CPU On-Die Thermal Sensors> on cpu5
est5: <Enhanced SpeedStep Frequency Control> on cpu5
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est5 attach returned 6
coretemp6: <CPU On-Die Thermal Sensors> on cpu6
est6: <Enhanced SpeedStep Frequency Control> on cpu6
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est6 attach returned 6
coretemp7: <CPU On-Die Thermal Sensors> on cpu7
est7: <Enhanced SpeedStep Frequency Control> on cpu7
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 214e00001e00
device_attach: est7 attach returned 6
coretemp8: <CPU On-Die Thermal Sensors> on cpu8
est8: <Enhanced SpeedStep Frequency Control> on cpu8
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est8 attach returned 6
coretemp9: <CPU On-Die Thermal Sensors> on cpu9
est9: <Enhanced SpeedStep Frequency Control> on cpu9
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est9 attach returned 6
coretemp10: <CPU On-Die Thermal Sensors> on cpu10
est10: <Enhanced SpeedStep Frequency Control> on cpu10
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est10 attach returned 6
coretemp11: <CPU On-Die Thermal Sensors> on cpu11
est11: <Enhanced SpeedStep Frequency Control> on cpu11
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est11 attach returned 6
coretemp12: <CPU On-Die Thermal Sensors> on cpu12
est12: <Enhanced SpeedStep Frequency Control> on cpu12
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est12 attach returned 6
coretemp13: <CPU On-Die Thermal Sensors> on cpu13
est13: <Enhanced SpeedStep Frequency Control> on cpu13
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est13 attach returned 6
coretemp14: <CPU On-Die Thermal Sensors> on cpu14
est14: <Enhanced SpeedStep Frequency Control> on cpu14
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est14 attach returned 6
coretemp15: <CPU On-Die Thermal Sensors> on cpu15
est15: <Enhanced SpeedStep Frequency Control> on cpu15
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est15 attach returned 6
coretemp16: <CPU On-Die Thermal Sensors> on cpu16
est16: <Enhanced SpeedStep Frequency Control> on cpu16
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est16 attach returned 6
coretemp17: <CPU On-Die Thermal Sensors> on cpu17
est17: <Enhanced SpeedStep Frequency Control> on cpu17
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est17 attach returned 6
coretemp18: <CPU On-Die Thermal Sensors> on cpu18
est18: <Enhanced SpeedStep Frequency Control> on cpu18
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est18 attach returned 6
coretemp19: <CPU On-Die Thermal Sensors> on cpu19
est19: <Enhanced SpeedStep Frequency Control> on cpu19
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est19 attach returned 6
coretemp20: <CPU On-Die Thermal Sensors> on cpu20
est20: <Enhanced SpeedStep Frequency Control> on cpu20
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est20 attach returned 6
coretemp21: <CPU On-Die Thermal Sensors> on cpu21
est21: <Enhanced SpeedStep Frequency Control> on cpu21
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est21 attach returned 6
coretemp22: <CPU On-Die Thermal Sensors> on cpu22
est22: <Enhanced SpeedStep Frequency Control> on cpu22
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est22 attach returned 6
coretemp23: <CPU On-Die Thermal Sensors> on cpu23
est23: <Enhanced SpeedStep Frequency Control> on cpu23
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est23 attach returned 6
coretemp24: <CPU On-Die Thermal Sensors> on cpu24
est24: <Enhanced SpeedStep Frequency Control> on cpu24
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est24 attach returned 6
coretemp25: <CPU On-Die Thermal Sensors> on cpu25
est25: <Enhanced SpeedStep Frequency Control> on cpu25
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est25 attach returned 6
coretemp26: <CPU On-Die Thermal Sensors> on cpu26
est26: <Enhanced SpeedStep Frequency Control> on cpu26
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est26 attach returned 6
coretemp27: <CPU On-Die Thermal Sensors> on cpu27
est27: <Enhanced SpeedStep Frequency Control> on cpu27
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est27 attach returned 6
coretemp28: <CPU On-Die Thermal Sensors> on cpu28
est28: <Enhanced SpeedStep Frequency Control> on cpu28
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est28 attach returned 6
coretemp29: <CPU On-Die Thermal Sensors> on cpu29
est29: <Enhanced SpeedStep Frequency Control> on cpu29
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est29 attach returned 6
coretemp30: <CPU On-Die Thermal Sensors> on cpu30
est30: <Enhanced SpeedStep Frequency Control> on cpu30
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est30 attach returned 6
coretemp31: <CPU On-Die Thermal Sensors> on cpu31
est31: <Enhanced SpeedStep Frequency Control> on cpu31
est: CPU supports Enhanced Speedstep, but is not recognized.
est: cpu_vendor GenuineIntel, msr 217700001e00
device_attach: est31 attach returned 6
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
freenas_sysctl: adding account.
freenas_sysctl: adding directoryservice.
freenas_sysctl: adding middlewared.
freenas_sysctl: adding network.
freenas_sysctl: adding services.
ipfw2 (+ipv6) initialized, divert enabled, nat enabled, default to accept, logging disabled
ipmi0: IPMI device rev. 1, firmware rev. 1.66, version 2.0
ipmi0: Number of channels 6
ipmi0: Attached watchdog
ugen1.1: <Intel EHCI root HUB> at usbus1
ugen0.1: <Intel EHCI root HUB> at usbus0
uhub0: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus1
uhub1: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus0
mps0: SAS Address for SATA device = 3555275fc98eb760
mps0: SAS Address for SATA device = d155485fc98eb777
mps0: SAS Address for SATA device = 4f2e275fc579b377
mps0: SAS Address from SATA device = 3555275fc98eb760
mps0: SAS Address from SATA device = d155485fc98eb777
mps0: SAS Address from SATA device = 4f2e275fc579b377
uhub0: 2 ports with 2 removable, self powered
uhub1: 2 ports with 2 removable, self powered
ugen1.2: <vendor 0x8087 product 0x0024> at usbus1
uhub2 on uhub0
uhub2: <vendor 0x8087 product 0x0024, class 9/0, rev 2.00/0.00, addr 2> on usbus1
ugen0.2: <vendor 0x8087 product 0x0024> at usbus0
uhub3 on uhub1
uhub3: <vendor 0x8087 product 0x0024, class 9/0, rev 2.00/0.00, addr 2> on usbus0
uhub3: 6 ports with 6 removable, self powered
uhub2: 8 ports with 8 removable, self powered
ugen1.3: <Cypress PS2 to USB Adapter> at usbus1
ukbd0 on uhub2
ukbd0: <HID Keyboard> on usbus1
kbd2 at ukbd0
ugen0.3: <SanDisk Cruzer Spark> at usbus0
umass0 on uhub3
umass0: <SanDisk Cruzer Spark, class 0/0, rev 2.00/1.00, addr 3> on usbus0
umass0:  SCSI over Bulk-Only; quirks = 0xc100
umass0:2:0: Attached to scbus2
ugen0.4: <vendor 0x0424 product 0x2512> at usbus0
uhub4 on uhub3
uhub4: <vendor 0x0424 product 0x2512, class 9/0, rev 2.00/b.b3, addr 4> on usbus0
uhub4: MTT enabled
uhub4: 1 port with 1 removable, self powered
ugen0.5: <no manufacturer Gadget USB HUB> at usbus0
uhub5 on uhub3
uhub5: <no manufacturer Gadget USB HUB, class 9/0, rev 2.00/0.00, addr 5> on usbus0
uhub5: 6 ports with 6 removable, self powered
ugen0.6: <Avocent KeyboardMouse Function> at usbus0
ukbd1 on uhub5
ukbd1: <Keyboard> on usbus0
kbd3 at ukbd1
ugen0.7: <SanDisk Cruzer Spark> at usbus0
umass1 on uhub4
umass1: <SanDisk Cruzer Spark, class 0/0, rev 2.00/1.00, addr 7> on usbus0
umass1:  SCSI over Bulk-Only; quirks = 0xc100
umass1:3:1: Attached to scbus3
da0 at mps0 bus 0 scbus0 target 14 lun 0
da1 at mps0 bus 0 scbus0 target 15 lun 0
da2 at mps0 bus 0 scbus0 target 16 lun 0
da0: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device
da0: Serial Number R6GNK36Y
da0: 600.000MB/s transfers
da0: Command Queueing enabled
da0: 7630885MB (15628053168 512 byte sectors)
da2: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device
da2: Serial Number VKK71Z6Y
da3 at umass-sim0 bus 0 scbus2 target 0 lun 0
da2: 600.000MB/s transfers
da2: Command Queueing enabled
da2: 7630885MB (15628053168 512 byte sectors)
da1: <ATA WDC WD80EFZX-68U 0A83> Fixed Direct Access SPC-4 SCSI device
da1: Serial Number VKKNMZWY
da1: 600.000MB/s transfers
da3: <SanDisk Cruzer Spark 1.00> Removable Direct Access SPC-4 SCSI device
da3: Serial Number 4C530000321220116314
da3: 40.000MB/s transfers
da3: 29340MB (60088320 512 byte sectors)
da3: quirks=0x2<NO_6_BYTE>

da1: Command Queueing enabled
da1: 7630885MB (15628053168 512 byte sectors)
da4 at umass-sim1 bus 1 scbus3 target 0 lun 0
da4: <SanDisk Cruzer Spark 1.00> Removable Direct Access SPC-4 SCSI device
da4: Serial Number 4C530000281220116314
da4: 40.000MB/s transfers
da4: 29340MB (60088320 512 byte sectors)
da4: quirks=0x2<NO_6_BYTE>
random: unblocking device.
Trying to mount root from zfs:freenas-boot/ROOT/default []...
lo0: link state changed to UP
hwpmc: SOFT/16/64/0x67<INT,USR,SYS,REA,WRI> TSC/1/64/0x20<REA> IAP/4/48/0x3ff<INT,USR,SYS,EDG,THR,REA,WRI,INV,QUA,PRC> IAF/3/48/0x67<INT,USR,SYS,REA,WRI>
ix0: link state changed to UP
ix1: link state changed to UP
ums0 on uhub5
ums1 on uhub2
ums0: <Mouse> on usbus0
ums0: 3 buttons and [Z] coordinates ID=0
ums2 on uhub5
ums2: <Mouse REL> on usbus0
ums2: 3 buttons and [XYZ] coordinates ID=0
ums1: <HID Mouse> on usbus1
ums1: 3 buttons and [XYZ] coordinates ID=0
CPU: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (2600.06-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x306e4  Family=0x6  Model=0x3e  Stepping=4
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x7fbee3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>
  AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  Structured Extended Features=0x281<FSGSBASE,SMEP,ERMS>
  Structured Extended Features3=0x9c000400<MD_CLEAR,IBPB,STIBP,L1DFL,SSBD>
  XSAVE Features=0x1<XSAVEOPT>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID,VID,PostIntr
  TSC: P-state invariant, performance statistics


Code:

root@freenas[~]# gpart recover /dev/da0
gpart: arg0 'da0': Invalid argument
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
The geli attach command:

Whenever I tried this I would get a "Cannot read metadata" error:
Code:
root@freenas[~/downloads]# geli attach -k pool_eightTB.spinners_encryption.key -j pool_eightTB.spinners_encryption.pw /dev/da0
geli: Cannot read metadata from /dev/da0: Invalid argument.

* The pool_fourTBs_encryption.pw is a single-line text file containing the password.
* After decrypting the drives I was able to import the pool in the webGUI via Pool -> Add -> Import -> "No" to decrypt (select unencrypted) -> select pool in dropdown.
* Note that geli will not decrypt /dev/da0 or /dev/da0p1, but it WILL decrypt /dev/da0p2.

*** The key is the "p2" at the end ***
Though this worked, this is not ideal. FreeNAS uses the devices found in /dev/gptid when it creates pools and stores information about them. Typically what I do when I manually unlock devices is I identify the pool first by sqlite3 /data/freenas-v1.db 'select * from storage_volume;' and then the disks by comparing the second column in the prior command with the ids in this command sqlite3 /data/freenas-v1.db 'select * from storage_encrypteddisk;'. I now have the devices to unlock, look in /dev/gptid and drop the .eli. note: I wrote this without server access so if the commands above do not work there may be a simple typo.

Of course, keep in mind that because you unlocked the devices and then imported it your FreeNAS system does not know the devices are encrypted. Thus it will not treat the pool that way. When you reboot, it will not unlock it for you. I consider manually unlocking then importing either a debugging step or something just to get access to the data to then back up the data.
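The procedure above can be sketched as a small dry-run script. This is only a sketch under assumptions: the `encrypted_provider` column name, the volume id `6`, and the key/passphrase paths are guesses to verify against your own database output, and the script only *prints* the geli commands, so nothing is attached until the `echo` is removed.

```shell
#!/bin/sh
# Dry-run sketch of the manual unlock described above.
# ASSUMPTIONS: column name encrypted_provider, volume id 6, and the
# key/passphrase paths below -- verify all of them on your system first.
DB=/data/freenas-v1.db
KEY=/data/geli/0c611a6c-00b0-4aff-a50e-7185382754ad.key
PW=pool_tenTB.spinners_encryption.pw   # single-line passphrase file

sqlite3 "$DB" \
  'select encrypted_provider from storage_encrypteddisk where encrypted_volume_id = 6;' |
while read -r prov; do
    # prov looks like gptid/9d08ec7d-...; geli wants the raw provider,
    # i.e. the /dev path without the trailing .eli
    echo geli attach -k "$KEY" -j "$PW" "/dev/$prov"
done
```

Once the printed commands match what you expect from the database, drop the `echo` and run them for real; the pool should then be importable as unencrypted, as described above.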
 

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
Though this worked, this is not ideal. FreeNAS uses the devices found in /dev/gptid when it creates pools and stores information about them. Typically what I do when I manually unlock devices is I identify the pool first by sqlite3 /data/freenas-v1.db 'select * from storage_volume;' and then the disks by comparing the second column in the prior command with the ids in this command sqlite3 /data/freenas-v1.db 'select * from storage_encrypteddisk;'. I now have the devices to unlock, look in /dev/gptid and drop the .eli. note: I wrote this without server access so if the commands above do not work there may be a simple typo.

Of course, keep in mind that because you unlocked the devices and then imported it your FreeNAS system does not know the devices are encrypted. Thus it will not treat the pool that way. When you reboot, it will not unlock it for you. I consider manually unlocking then importing either a debugging step or something just to get access to the data to then back up the data.

Yeah @PhiloEpisteme... I tried before, and just to make sure I tried again.
... This time I took down some careful notes (below)
... (for others reading this: this is on my old/original FreeNAS box)

Code:
root@freenas[~]# ls -al /dev/gptid
total 1
dr-xr-xr-x   2 root  wheel      512 Feb 14 14:30 .
dr-xr-xr-x  11 root  wheel      512 Feb 14 14:30 ..
crw-r-----   1 root  operator  0xc5 Feb 14 14:30 0d47ded6-fe94-11e9-9e4a-ecf4bbe54910
crw-r-----   1 root  operator  0xc3 Feb 14 14:30 d9ced662-ebc6-11e9-8136-ecf4bbe54910

Code:
root@freenas[~]# sqlite3 /data/freenas-v1.db 'select * from storage_volume;'
6|tenTB.spinners|15172541474628519820|2|0c611a6c-00b0-4aff-a50e-7185382754ad

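For what it's worth, the last column of that storage_volume row (0c611a6c-...) appears to be the encryption-key UUID, and FreeNAS looks for the key at /data/geli/<uuid>.key. A minimal sketch of the mapping (the `vol_encryptkey` column name is my assumption about the schema):

```shell
# Sketch: derive the key file FreeNAS will look for from the
# storage_volume row above. The vol_encryptkey column name is an
# assumption about the schema -- check with '.schema storage_volume'.
uuid=$(sqlite3 /data/freenas-v1.db \
    "select vol_encryptkey from storage_volume where vol_name = 'tenTB.spinners';")
keyfile="/data/geli/$uuid.key"
echo "$keyfile"
```

If that printed path matches an existing file in /data/geli, that file is the one the middleware will use for this pool.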
Code:
root@freenas[~]# ls -al /data/geli
total 5
drwxrwxrwx  2 root  www   4 Feb 12 00:30 .
drwxr-xr-x  8 www   www  14 Feb 14 14:32 ..
-rw-r--r--  1 root  www  64 Jan 29 01:04 0c611a6c-00b0-4aff-a50e-7185382754ad.key << I see; so NOT this one
-rw-rw-rw-  1 root  www  64 Jan 22 21:34 f172b9d1-f862-4a6c-84fc-686822681c44.key << I do not see; rename this one

Code:
root@freenas[~]# mv /data/geli/f172b9d1-f862-4a6c-84fc-686822681c44.key /data/geli/f172b9d1-f862-4a6c-84fc-686822681c44.key.bak

Then I try to import via the webGUI, but it still fails
... remember, the pool is encrypted + password
Code:
FAILED
[EFAULT] Pool could not be imported: 12 devices failed to decrypt.

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 95, in main_worker
    res = loop.run_until_complete(coro)
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 51, in _run
    return await self._call(name, serviceobj, methodobj, params=args, job=job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 43, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 43, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 964, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 382, in import_pool
    zfs.import_pool(found, found.name, options, any_host=any_host)
  File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 380, in import_pool
    raise CallError(f'Pool {name_or_guid} not found.', errno.ENOENT)
middlewared.service_exception.CallError: [ENOENT] Pool 15172541474628519820 not found.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1656, in unlock
    'cachefile': ZPOOL_CACHE_FILE,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1127, in call
    app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1074, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1094, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1029, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1003, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [ENOENT] Pool 15172541474628519820 not found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 349, in run
    await self.future
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 386, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 960, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1668, in unlock
    raise CallError(msg)
middlewared.service_exception.CallError: [EFAULT] Pool could not be imported: 12 devices failed to decrypt.

You mentioned another sqlite3 command
Code:
root@freenas[~]# sqlite3 /data/freenas-v1.db 'select * from storage_encrypteddisk;'
8|6|{serial_lunid}ZA2AR1SZ_5000c500b5774bb1|gptid/9d08ec7d-29da-11ea-85f1-ecf4bbe54910
9|6|{serial_lunid}ZA2AR1T8_5000c500b5774c3a|gptid/9e8c39ab-29da-11ea-85f1-ecf4bbe54910
10|6|{serial_lunid}ZA2AR1VE_5000c500b5774836|gptid/a0154bd8-29da-11ea-85f1-ecf4bbe54910
11|6|{serial_lunid}ZA2AR1YX_5000c500b5777805|gptid/a1a0ad56-29da-11ea-85f1-ecf4bbe54910
12|6|{serial_lunid}ZA2BH7H5_5000c500b69731fe|gptid/a32de446-29da-11ea-85f1-ecf4bbe54910
13|6|{serial_lunid}ZA2BHMTP_5000c500b698a83f|gptid/a4c04662-29da-11ea-85f1-ecf4bbe54910
14|6|{serial_lunid}JEGX7DTN            _5000cca267ccd637|gptid/715b564f-3594-11ea-bdf6-ecf4bbe54910
15|6|{serial_lunid}JEGWU6VM            _5000cca267cca4b2|gptid/73796f51-3594-11ea-bdf6-ecf4bbe54910
16|6|{serial_lunid}1DGSXP2Z            _5000cca26ccae032|gptid/7597e9ac-3594-11ea-bdf6-ecf4bbe54910
17|6|{serial_lunid}JEGWSB8N            _5000cca267cc9db9|gptid/77e4dfe0-3594-11ea-bdf6-ecf4bbe54910
18|6|{serial_lunid}2YGL1RGD            _5000cca273c83527|gptid/7a03f6ef-3594-11ea-bdf6-ecf4bbe54910
19|6|{serial_lunid}JEGGEBUN            _5000cca267c68f6d|gptid/7c1e804e-3594-11ea-bdf6-ecf4bbe54910

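A quick way to see the scope of the damage is to compare the providers the database expects with what actually exists under /dev/gptid. This sketch just loops over gptid values copied from the output above (two shown; extend with the rest); on a healthy system all twelve would print "present":

```shell
#!/bin/sh
# Sketch: check which providers the middleware expects actually exist
# under /dev/gptid. GPTID values are copied from the sqlite3 output
# above (first two shown; extend the list with the remaining ten).
GPTID_DIR=${GPTID_DIR:-/dev/gptid}
for want in 9d08ec7d-29da-11ea-85f1-ecf4bbe54910 \
            715b564f-3594-11ea-bdf6-ecf4bbe54910; do
    if [ -e "$GPTID_DIR/$want" ]; then
        echo "present: $want"
    else
        echo "MISSING: $want"
    fi
done
```

Given the `ls -al /dev/gptid` output earlier (only two entries, both boot-related), this would report all twelve data providers missing, which matches the missing p1/p2 device nodes.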
& just for good measure
Code:
root@freenas[~]# ls -al /dev/da*
crw-r-----  1 root  operator  0xa8 Feb 14 14:30 /dev/da0
crw-r-----  1 root  operator  0xa9 Feb 14 14:30 /dev/da1
crw-r-----  1 root  operator  0xb2 Feb 14 14:30 /dev/da10
crw-r-----  1 root  operator  0xb3 Feb 14 14:30 /dev/da11
crw-r-----  1 root  operator  0xbd Feb 14 14:30 /dev/da12   << BOOT
crw-r-----  1 root  operator  0xbf Feb 14 14:30 /dev/da12p1 << BOOT
crw-r-----  1 root  operator  0xc0 Feb 14 14:30 /dev/da12p2 << BOOT
crw-r-----  1 root  operator  0xbe Feb 14 14:30 /dev/da13   << BOOT
crw-r-----  1 root  operator  0xc1 Feb 14 14:30 /dev/da13p1 << BOOT
crw-r-----  1 root  operator  0xc2 Feb 14 14:30 /dev/da13p2 << BOOT
crw-r-----  1 root  operator  0xaa Feb 14 14:30 /dev/da2
crw-r-----  1 root  operator  0xab Feb 14 14:30 /dev/da3
crw-r-----  1 root  operator  0xac Feb 14 14:30 /dev/da4
crw-r-----  1 root  operator  0xad Feb 14 14:30 /dev/da5
crw-r-----  1 root  operator  0xae Feb 14 14:30 /dev/da6
crw-r-----  1 root  operator  0xaf Feb 14 14:30 /dev/da7
crw-r-----  1 root  operator  0xb0 Feb 14 14:30 /dev/da8
crw-r-----  1 root  operator  0xb1 Feb 14 14:30 /dev/da9


One thing I don't quite understand: based on your instructions I should rename the *.key that I do NOT see in the sqlite3 command. I would think I should be renaming the one that IS in the sqlite3 command to force some sort of reset on the tenTB.spinners pool. Please let me know if you see any mistakes in my terminal commands.

Update: I saw your post on the old/original thread @PhiloEpisteme. Specifically: "It is looking for a key /data/geli/0c611a6c-00b0-4aff-a50e-7185382754ad.key but no such key exists." The problem is it IS there...
Code:
root@freenas[/data/geli]# ls -al
total 5
drwxrwxrwx  2 root  www   4 Feb 14 14:47 .
drwxr-xr-x  8 www   www  14 Feb 14 14:32 ..
-rw-r--r--  1 root  www  64 Jan 29 01:04 0c611a6c-00b0-4aff-a50e-7185382754ad.key
-rw-rw-rw-  1 root  www  64 Jan 22 21:34 f172b9d1-f862-4a6c-84fc-686822681c44.key
 
Last edited:

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
A little more information: I took the simple 3-drive raid-z (encrypted with password) fourTBs pool that was created in the new FreeNAS box and added the drives to the old FreeNAS box.

Here is what I see:
Code:
root@freenas[/dev/gptid]# ls -al /dev/da*
crw-r-----  1 root  operator  0xa8 Feb 14 14:30 /dev/da0
crw-r-----  1 root  operator  0xa9 Feb 14 14:30 /dev/da1
crw-r-----  1 root  operator  0xb2 Feb 14 14:30 /dev/da10
crw-r-----  1 root  operator  0xb3 Feb 14 14:30 /dev/da11
crw-r-----  1 root  operator  0xbd Feb 14 14:30 /dev/da12
crw-r-----  1 root  operator  0xbf Feb 14 14:30 /dev/da12p1
crw-r-----  1 root  operator  0xc0 Feb 14 14:30 /dev/da12p2
crw-r-----  1 root  operator  0xbe Feb 14 14:30 /dev/da13
crw-r-----  1 root  operator  0xc1 Feb 14 14:30 /dev/da13p1
crw-r-----  1 root  operator  0xc2 Feb 14 14:30 /dev/da13p2
crw-r-----  1 root  operator  0xd7 Feb 14 17:05 /dev/da14   << fourTBs
crw-r-----  1 root  operator  0xd9 Feb 14 17:05 /dev/da14p1 << fourTBs
crw-r-----  1 root  operator  0xdb Feb 14 17:05 /dev/da14p2 << fourTBs
crw-r-----  1 root  operator  0xdd Feb 14 17:05 /dev/da15   << fourTBs
crw-r-----  1 root  operator  0xe1 Feb 14 17:05 /dev/da15p1 << fourTBs
crw-r-----  1 root  operator  0xe3 Feb 14 17:05 /dev/da15p2 << fourTBs
crw-r-----  1 root  operator  0xf2 Feb 14 17:05 /dev/da16   << fourTBs
crw-r-----  1 root  operator  0xf4 Feb 14 17:05 /dev/da16p1 << fourTBs
crw-r-----  1 root  operator  0xf7 Feb 14 17:05 /dev/da16p2 << fourTBs
crw-r-----  1 root  operator  0xaa Feb 14 14:30 /dev/da2
crw-r-----  1 root  operator  0xab Feb 14 14:30 /dev/da3
crw-r-----  1 root  operator  0xac Feb 14 14:30 /dev/da4
crw-r-----  1 root  operator  0xad Feb 14 14:30 /dev/da5
crw-r-----  1 root  operator  0xae Feb 14 14:30 /dev/da6
crw-r-----  1 root  operator  0xaf Feb 14 14:30 /dev/da7
crw-r-----  1 root  operator  0xb0 Feb 14 14:30 /dev/da8
crw-r-----  1 root  operator  0xb1 Feb 14 14:30 /dev/da9

Those three fourTBs drives do have the *.p1 & *.p2 partitions.

And sure enough if I try to import there is no problem... I can select the disks:

(screenshot: 2020.02.14.at.17.10.01.ScreenShot.from.RYZEN-2700X.png)

The problem is my data is on /dev/da0 through /dev/da11... and for those I don't see any *.p1 or *.p2 partitions, making me think the GPT partitions are gone :(

Is there any command you are aware of in FreeNAS that can restore those partitions without destroying the data?
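Not as a recommendation, but as a sketch of the only approach I know of: if the partition tables really are gone, recreating them with the *exact* original layout does not touch the data area, and geli metadata lives at the end of p2, so an identical table can make the providers reappear. The layout below (2 GiB swap p1 at offset 128, freebsd-zfs p2 for the rest) is the usual FreeNAS default but an *assumption* for this system; compare against `gpart backup` output from an intact same-size disk first. The commands are only echoed here on purpose:

```shell
#!/bin/sh
# DANGER: sketch only. Offsets/sizes below are the ASSUMED FreeNAS
# defaults; verify against 'gpart backup daNN' from an intact same-size
# disk before removing the echo in front of each command.
for d in da0 da1; do
    echo gpart create -s gpt "$d"
    echo gpart add -i 1 -b 128 -t freebsd-swap -s 2g "$d"
    echo gpart add -i 2 -t freebsd-zfs "$d"
done
```

If the recreated p2 starts at the original offset, `geli attach` against the new p2 should find its metadata again; if the offsets are wrong, attach will keep failing and no data has been overwritten by the table alone.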

-Dave
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,450
Hi theyost,

I just caught up partially on your posts, but I should mention, as @PhiloEpisteme suggested: do not rush. Take it slow.
What I can tell you about the *.p1 & *.p2 partitions is that one partition is reserved for the disk cache while the other holds your ZFS data.
With that understood, let's talk about what you have, and think about what you think you have lost.

First of all, with encrypted drives it is normal that you cannot access the details about the content and structure of a drive or pool while the encryption isn't available.
Unless you successfully decrypt the pool, nothing of the pool or the individual disks will be shown to you.
So not seeing any p1 or p2 partition is a normal thing, unless you have successfully loaded the encryption key and unlocked the pool.

So you SHOULDN'T be worried about the current state of affairs. I am not saying you will be able to do much from this point on, but as far as you have proceeded, this is not the problem at hand.

And as @PhiloEpisteme suggested, NEVER try to format or fix any of the disks by any means other than the FreeNAS import procedure for encrypted pools.
Working on individual disks, or the pool altogether, outside of FreeNAS will most likely destroy whatever you are trying to achieve.

As long as the pool is encrypted and has not been messed with, there is a chance (not the proper terminology in that respect) that you will be able to recover your pool. However, I have some concerns about your previous manipulations related to pool migration (too late in the day for me to get all the details): converting a pool's vdev configuration, or adding a disk to an existing encrypted pool, could be problematic if the rekey wasn't done before the reboot.
I am not sure about this scenario, as it is a cumbersome process to validate, but replacing a disk in an encrypted pool could be a problem upon reboot.

The FreeNAS import process could be improved to take a more robust approach to importing pools, whether they are encrypted or not.
I have struggled many times trying to import an encrypted pool by entering a passphrase, and the import failed simply because the pool was not using any passphrase, and vice versa.
Also, trying to import a pool which is not encrypted while providing a geli key will result in a failed import.

My point is that the pool import process is not smart in any way and will not give you the necessary information to move forward.
 

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
I am not sure if this is good or bad, or whether it helps, but I found this thread which seems to describe a very similar problem:

Can't import disk or volume .... fsck_ufs /dev/da0 Cannot find file system superblock

I ran a few more commands:
Code:
root@freenas[~]# fsck_ufs /dev/da0
** /dev/da0
Cannot find file system superblock

Code:
root@freenas[~]# fsdb /dev/da0
** /dev/da0
Cannot find file system superblock
fsdb: cannot set up file system `/dev/da0'

Code:
root@freenas[~]# debugfs
debugfs 1.45.5 (07-Jan-2020)
debugfs:  open /dev/da0
debugfs: Bad magic number in super-block while trying to open /dev/da0
debugfs:  quit

Code:
root@freenas[~]# file -s /dev/da0
/dev/da0: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors
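That `ID=0xee` entry is just the protective MBR; whether the actual GPT header survives can be checked by looking for the 8-byte `EFI PART` signature at LBA 1 (byte offset 512). A minimal sketch, run against a scratch file standing in for `/dev/da0` so nothing real is touched (the signature is planted artificially here for illustration):

```shell
#!/bin/sh
# Scratch "disk" image standing in for /dev/da0 (assumption: 512-byte
# sectors, GPT header at LBA 1 as on a real GPT drive).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=4 2>/dev/null
# Plant the 8-byte GPT signature "EFI PART" at byte offset 512 (LBA 1).
printf 'EFI PART' | dd of="$img" bs=1 seek=512 conv=notrunc 2>/dev/null

# The actual check: read LBA 1 and compare its first 8 bytes.
sig=$(dd if="$img" bs=512 skip=1 count=1 2>/dev/null | head -c 8)
if [ "$sig" = "EFI PART" ]; then
    echo "GPT header present"
else
    echo "GPT header missing or zeroed"
fi
rm -f "$img"
```

On a real system you would point the `dd if=` read at the device itself; a drive whose first sectors are all zeros (as turned out to be the case later in this thread) prints the "missing or zeroed" branch.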
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,450
I don't think it applies to you, as that thread concerns UFS, which isn't supported in FreeNAS 10 and beyond.
UFS is not the same as ZFS.
 

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
Hi theyost,

I just caught up partially on your post, but I should mention, as @PhiloEpisteme suggested, not to rush. Take it slow.
What I can tell you about your *.p1 & *.p2 partitions is that one partition is reserved for swap while the other is reserved for your ZFS data.
With that understood, let's talk about what you have and think about what you think you have lost.

First of all, having encrypted drives means that not being able to access the details about the content and structure of a drive or pool is normal while the encryption key isn't loaded.
Unless you are successful at decrypting the pool, nothing of the pool or its individual disks will be shown to you.
So not seeing any p1 or p2 partition is normal unless you have successfully loaded the encryption key and unlocked the pool.

So you SHOULDN'T be worried about the current state of affairs. I am not saying you will be able to do much from this point on, but as far as you have proceeded, this is not the problem at hand.

And as @PhiloEpisteme suggested, NEVER try to format or fix any of the disks by any means other than the FreeNAS import procedure for encrypted pools.
Working on individual disks, or the pool as a whole, outside of FreeNAS will most likely destroy whatever you are trying to achieve.

As long as the pool is encrypted and hasn't been messed with, there is a chance you will be able to recover it. However, I have some concerns about your previous manipulation related to pool migration (too late in the day for me to get all the details): converting a pool's vdev configuration, or adding a disk to an existing encrypted pool, could be problematic if a rekey hasn't been done before the reboot.
I am not sure about this scenario as it is a cumbersome process to validate, but replacing a disk in an encrypted pool could also be a problem upon reboot.

The FreeNAS import process could be improved to support a more robust approach to importing pools, whether they are encrypted or not.
I have struggled many times trying to import an encrypted pool by entering a passphrase, only for the import to fail simply because the pool was not using a passphrase, and vice versa.
Likewise, trying to import a pool which is not encrypted while providing a GELI key will result in a failed import.

My point is that the pool import process is not smart in any way and will not give you the information you need to move forward.
Thanks for your feedback @Apollo. I extended the tenTB.spinners pool a couple weeks ago (& saved the new keys), so I don't think that is the issue. This is also confirmed by the fact that my simple eightTB.spinners pool was also affected. I am on vacation this week but did purchase an identical 8TB WD drive and plan on eventually doing a dd copy. It appears it is possible to decrypt the drives individually, so I can test on this clone without touching the original disks.

Right now I need to do some add'l research/testing... but if anybody is reading this thread and has any ideas on what might be going on and steps to import my eightTB.spinners pool to the new/fresh FreeNAS install please let me know.
 

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
I re-read the rules and it looks like bumps are permitted after a couple of days... & I am still stuck.

Full details in that original post but here is a quick summary:
  • After reboot I was unable to unlock any of my pools that were encrypted with a password (two of them)
  • Upon further examination I noticed I did not see the *.p1 & *.p2 partitions on the drives that made up those pools, like I do on my boot pool (a zfs mirror)
I have tried several suggestions but am still at a roadblock. Any suggestions would be appreciated.

-Dave
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,450
If all fails I would try the following:

1) shutdown system and place the original boot disk in a safe location.
2) remove all your drives but keep them together if they are expected to be from the same pool.
3) Add a new boot disk and do a fresh install of FreeNAS. The version should ideally match what you had before things got messed up.
Make sure FreeNAS boots and you have access to the GUI. No need to have the other disks plugged in yet.
If you have GUI access, then shut the system down and proceed to the next step.

4) Install only the disks for one of your encrypted pools. Start FreeNAS and access the GUI.
5) Import the encrypted pool and try with and without the passphrase and see how it goes. If still unsuccessful, try importing the pool without using encryption. If you have the recovery key, try it as a last resort.
If this doesn't work, turn FreeNAS off, remove the disks belonging to the pool, and start over from step 4) with the other pool.

If none of the above works, then either you have changed the encryption key on the disks and it no longer matches your previously backed-up GELI keys, or your passphrase is incorrect.

Beyond that, I don't know what can be done.
 
Last edited:
Joined
Oct 18, 2018
Messages
969
I think @Apollo's advice is excellent.

5) Import the encrypted pool and try with and without the passphrase and see how it goes. If still unsuccessful, try importing the pool without using encryption. If you have the recovery key, try it as a last resort.
If this doesn't work, turn FreeNAS off, remove the disks belonging to the pool, and start over from step 4) with the other pool.
I will add that you should be sure NOT to supply the passphrase when you attempt to use the recovery key. The recovery key never has a passphrase set.
 

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
I have made progress but hit another wall and am hoping someone out there can help.

First the good news: my eightTB.spinners pool is unlocked. Here are the steps I used to unlock it:

Step 1) I purchased three used (eBay) 8TB drives with the same model number as the old/locked drives that made up eightTB.spinners

Step 2) Before doing anything else I used these three new drives to create an empty pool with the same configuration as eightTB.spinners:
  • three-drive raidz
  • encrypted
  • password protected

Step 3) I then copied the GPT partition table from each drive in this new pool to a *.backup file using the dd command:

Code:
dd if=/dev/da0 of=/media/da0.new.mbr.dd1536.backup bs=512 count=3
dd if=/dev/da1 of=/media/da1.new.mbr.dd1536.backup bs=512 count=3
dd if=/dev/da2 of=/media/da2.new.mbr.dd1536.backup bs=512 count=3

Note: According to this website, the size of the GPT data at the start of the disk should be (128*N)+1024 bytes, where N is the number of partition entries.
In FreeNAS, data drives are created with two partitions, so (128*2)+1024 = 1280 bytes. My dd commands grab 512*3 = 1536 bytes, a little more than required.

Opening these files showed several lines of odd characters mixed in with quite a few <NULL> characters.
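If you want a quick sanity check that a backup file like this is the right size and actually holds header data rather than being all zeros, something along these lines works (a hedged sketch; the scratch file stands in for one of the *.backup files and is populated artificially here):

```shell
#!/bin/sh
# Scratch file standing in for a *.backup file made with
# `dd ... bs=512 count=3` (so it should be exactly 1536 bytes).
backup=$(mktemp)
dd if=/dev/zero of="$backup" bs=512 count=3 2>/dev/null
# Plant the GPT signature at offset 512, as a real backup would have.
printf 'EFI PART' | dd of="$backup" bs=1 seek=512 conv=notrunc 2>/dev/null

# Size check: 3 sectors of 512 bytes.
size=$(wc -c < "$backup" | tr -d ' ')
# Content check: tr -d '\0' strips NUL bytes; anything left over means
# real (non-zero) header data survived.
nonnul=$(tr -d '\0' < "$backup" | wc -c | tr -d ' ')
echo "size=$size non-NUL bytes=$nonnul"
rm -f "$backup"
```

A backup taken from a wiped drive would report zero non-NUL bytes, which is exactly the symptom described in Step 5 below.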

Step 4) Though I later learned it was not necessary, I used dd to make a raw copy of the three eightTB.spinners drives. This way I could mess with the copied drives without risking the original data.

Code:
dd if=/dev/da0 of=/dev/da4 bs=8192K status=progress
dd if=/dev/da1 of=/dev/da5 bs=8192K status=progress
dd if=/dev/da2 of=/dev/da6 bs=8192K status=progress

Note: This command will take hours to run, so I recommend learning tmux so you can copy all three drives at the same time.
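Once a clone finishes, it's worth proving it really is bit-for-bit identical before touching anything. A minimal sketch, using small scratch files in place of `/dev/da0` and `/dev/da4` (on real multi-TB drives you would run the same `cmp`, or parallel checksums in tmux panes, against the devices themselves, which also takes hours):

```shell
#!/bin/sh
# Scratch files standing in for the source drive (/dev/da0) and its
# clone (/dev/da4); on real hardware these would be the devices.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1k count=64 2>/dev/null
dd if="$src" of="$dst" bs=8k 2>/dev/null   # the clone step

# cmp -s exits 0 only if the two are byte-for-byte identical.
if cmp -s "$src" "$dst"; then
    verified=yes
else
    verified=no
fi
echo "clone verified: $verified"
rm -f "$src" "$dst"
```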

Step 5) After the backups completed, I took a peek inside the GPT area of the eightTB.spinners drives.

Code:
dd if=/dev/da0 of=/media/da0.old.mbr.dd1536.backup bs=512 count=3
dd if=/dev/da1 of=/media/da1.old.mbr.dd1536.backup bs=512 count=3
dd if=/dev/da2 of=/media/da2.old.mbr.dd1536.backup bs=512 count=3

Opening any of these files showed only <NULL> characters. This is different from the new pool created in step 3.

>> THE GPT TABLE/HEADER FOR MY DRIVES WAS GONE <<

Step 6) I copied the GPT table from the files created in step 3 to the eightTB.spinners drives:

Code:
sysctl kern.geom.debugflags=16
dd if=/media/da0.new.mbr.dd1536.backup of=/dev/da0 bs=512 count=3
dd if=/media/da1.new.mbr.dd1536.backup of=/dev/da1 bs=512 count=3
dd if=/media/da2.new.mbr.dd1536.backup of=/dev/da2 bs=512 count=3
sysctl kern.geom.debugflags=0


Step 7) After this was completed FreeNAS had no problem finding & importing the eightTB.spinners pool.

SUCCESS!!

My next step was to try the same thing with my tenTB.spinners
... a more complicated configuration with twelve 10TB drives
... 2x6 raid-z2 striped together
... encrypted & with password

I tried to use the same GPT headers that worked with the eightTB.spinners pool, but that did not work.
I tried to create headers in a VirtualBox instance of FreeNAS (12 drives), but that did not work.
When I say "did not work" I mean that when I select Pool -> Add -> Import -> Existing, it does not show any drives to import.
I am pretty sure the problem has to do with the following:

Code:
fdisk 4TB drive : cylinders=486401 heads=255 sectors/track=63 (16065 blks/cyl)
fdisk 8TB drive : cylinders=972801 heads=255 sectors/track=63 (16065 blks/cyl)
fdisk 10TB drive : cylinders=1215865 heads=255 sectors/track=63 (16065 blks/cyl)
fdisk 16GB VirtualBox : cylinders=33288 heads=16 sectors/track=63 (1008 blks/cyl)


FreeNAS can't find the appropriate partitions because GPT headers generated on 8TB drives will not work on 10TB drives.

QUESTION: Does anybody know how I could create the appropriate GPT partition table for the twelve 10TB hard drives that make up the tenTB.spinners pool without actually having to purchase 12 hard drives?
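For what it's worth, the partition boundaries FreeNAS uses appear to follow a fixed recipe (a 2 GiB freebsd-swap p1 starting at LBA 128, then a freebsd-zfs p2 covering the rest of the disk), so one possible approach is to compute the boundaries directly from the drive's byte size and write a matching table with gpart, or to generate headers against a sparse md(4) memory disk of the right size instead of buying real drives. A hedged sketch of the arithmetic only, with the layout inferred from `gpart list` output of a working FreeNAS drive; the example byte count is an assumed 10TB figure, and the exact p2 end on a real drive may differ by a few sectors due to alignment:

```shell
#!/bin/sh
# Work out where FreeNAS places its two partitions, given the drive
# size in bytes (as reported by `geom disk list`).
disk_bytes=10000831348736      # assumed size of a 10TB drive, for illustration
sector=512                     # assumed 512-byte sectors

total=$((disk_bytes / sector))                 # total LBAs on the drive
p1_start=128                                   # p1 (freebsd-swap) starts at LBA 128
p1_size=$((2 * 1024 * 1024 * 1024 / sector))   # 2 GiB of swap = 4194304 sectors
p2_start=$((p1_start + p1_size))               # p2 (freebsd-zfs) starts right after
p2_end=$((total - 34))                         # last usable LBA; 33 sectors + 1 are
                                               # reserved for the secondary GPT

echo "p1: start=$p1_start size=$p1_size"
echo "p2: start=$p2_start end<=$p2_end"
```

The p2 start of 4194432 matches the `gpart list` output posted later in this thread, which is some evidence the recipe is right, but I'd still verify against a known-good drive of the same size before writing anything.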

Thanks for your help :)

-Dave
 

icyy

Cadet
Joined
Aug 27, 2020
Messages
4
Unless you are successful at decrypting the pool, nothing of the pool or individual disk will be shown to you.

I seriously doubt that, unless Linux and other OSes misdetect it as GPT. I'm working on a similar issue but don't want to open another topic about it, since this is the closest discussion to it and the OP accomplished some success.

For me it's a simpler story: just one single 4 TB drive encrypted with GELI (no passphrase, just the key, and the key is exported). I had a cable issue and the host log was full of dma events; after a reboot the drive showed as LOCKED. Since I could do nothing with it (clicking countless times on UNLOCK, which failed to unlock it) and I had the key backed up, I decided to detach it and try to re-add it. From that point it didn't even show up in the drive list, so I started digging and tried to do it manually through the command line after SCPing the key over.

Code:
geli attach -k drive.key /dev/da1
geli: Cannot read metadata from /dev/da1: Invalid argument.
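That error usually means geli could not find its metadata magic (the ASCII string `GEOM::ELI`) at the start of the provider's last sector, where geli keeps its metadata. A hedged way to check for it, sketched against a scratch file standing in for `/dev/da1` (the metadata is planted artificially here; on FreeBSD you would read the real device's last sector):

```shell
#!/bin/sh
# Scratch "provider" standing in for /dev/da1: 8 sectors of 512 bytes.
prov=$(mktemp)
dd if=/dev/zero of="$prov" bs=512 count=8 2>/dev/null
# Simulate intact geli metadata: the magic string sits at the start of
# the provider's last sector (sector 7 here).
printf 'GEOM::ELI' | dd of="$prov" bs=1 seek=$((7 * 512)) conv=notrunc 2>/dev/null

# Read the last 512 bytes and check the first 9 for the magic.
magic=$(tail -c 512 "$prov" | head -c 9)
if [ "$magic" = "GEOM::ELI" ]; then
    echo "geli metadata present"
else
    echo "geli metadata missing"
fi
rm -f "$prov"
```

If the magic is missing on the real device, `geli restore` with a saved metadata backup (or `geli attach` with the backed-up key after restoring it) is the documented recovery path per geli(8).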


At this point I thought the metadata got corrupted and looked into the manual of geli which states:

geli writes the metadata backup by default to the /var/backups/_prov_.eli
file. If the metadata is lost in any way (e.g., by accidental overwrite),
it can be restored.

Yeah, nothing there in FreeNAS, which is not a big surprise. The only .eli files I could find on the system look like this:

crw-r----- 1 root operator 0x70 Oct 18 10:42 /dev/gptid/<diskid>.eli

This, I doubt, is any kind of backup, since it is a character device. So no backup... great. Next I put the drive into a docking bay and hooked it up to another VMware box with the latest FreeNAS 11. Here is the gpart list output for two drives (the working one and the broken one, for comparison):

Working Drive

Code:
Geom name: vtbd1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: vtbd1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd1p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: vtbd1
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Mode: r2w2e5


Corrupted Drive

Code:
Geom name: da1
modified: false
state: CORRUPT
fwheads: 255
fwsectors: 63
last: 7814035021
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da1p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da1
   Mediasize: 4000785948160 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0



One useful thing I found out is:

Code:
sysctl -w kern.geom.part.check_integrity=0

Without this, FreeNAS would not even detect p1 and p2 of the broken drive, so I consider that already a great success. However, as the output shows, the drive is corrupted; I still cannot use the geli key manually, and the drive of course still does not show up in the frontend.

In your solution, as I understood it, you bought drives with the exact same model number, encrypted them with the same setup as the old ones, and then took the MBR/GPT header from the new drives and loaded it onto the old LOCKED drives?

Is there anything else I can try? Why is the metadata not saved automatically by FreeNAS somewhere? I'm sure restoring that would help me.
Also, isn't the point of GPT that it stores a backup of the partition table at the end of the device in case something like this happens? I tried a GPT restore, and of course it didn't help.
 

icyy

Cadet
Joined
Aug 27, 2020
Messages
4
I have made some progress on this subject, so for those who care:

I did some tests on a working disk:
Code:
geli backup /dev/vtbd1p2 vtbd1p2.meta

This proves that these FreeNAS drives are using a regular GPT with two partitions.

p1 is swap and has nothing to do with geli
p2 is the geli-encrypted data

So as long as you cannot get the GPT partitions working on a drive, you are completely lost.
 

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
Hi icyy,

Okay... it has been a while since I recovered my files, but hopefully I will be able to help.
Also... I am pretty good with computers but am not a full-time TrueNAS programmer.
Also... be REALLY careful with commands like sysctl -w kern.geom.part.check_integrity=0. There are protections in place that this sysctl removes.
Also... you have only one drive, and you mentioned "clicking millions of times on the UNLOCK", which has me worried. Unless you are talking about the normal clicking you hear whenever a hard drive reads/writes data, there shouldn't be any clicking; it is my understanding that unlocking a drive in TrueNAS, or with the geli command, is done in software.

Assuming your drive is okay, I would highly recommend you go someplace like eBay and get a hard drive identical to the one you are trying to recover. Then:

Note: Below is a customized/summarized list for Icyy's problem. Others should go to this thread and search for the steps in "I have already recovered the eightTB.spinners pool with the following steps"​
  1. Use the eBay drive to create a new GPT-partitioned pool within TrueNAS.
    1. Probably use the same password/encryption setup
    2. ... (but honestly I think replicating the password/encryption part is probably not necessary)
    3. ... (I think it is very likely you DO need identical drive internals (hard drive) and an identical pool structure (TrueNAS))
  2. Then use dd to copy the GPT header from this eBay drive (the first few sectors of that new drive/pool) to somewhere on your boot disk, a detachable USB drive, etc.
  3. Now the new eBay pool/drive could be discarded, BUT...
  4. Instead of discarding it, play it safe: I HIGHLY recommend you copy your existing/failed drive onto the eBay drive using dd (this process will take many hours, maybe even a few days)
  5. Now put the old/broken drive in a safe corner of your office
  6. Use dd to copy the GPT header (first few sectors) from the boot disk, detachable USB, etc. onto the eBay drive (which is now a bit-for-bit copy of the failed drive)
  7. Try to use TrueNAS to import the drive/pool
  8. If successful, I recommend you transfer the files off this 'hacked' drive to one built from scratch using normal methods
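The dd parts of the steps above can be sketched end to end. This is a miniature dry run with scratch files standing in for the donor eBay drive and the failed drive (the 'EFI PART' signature and 'payload' marker are planted artificially so the graft can be verified; on real hardware each file would be a /dev/daX device and the counts would match your drives):

```shell
#!/bin/sh
# Scratch stand-ins: 'donor' is the freshly partitioned eBay drive,
# 'failed' is the drive whose GPT header was zeroed.
donor=$(mktemp); failed=$(mktemp); clone=$(mktemp); hdr=$(mktemp)

# Donor drive: pretend TrueNAS wrote a GPT header in its first sectors.
dd if=/dev/zero of="$donor" bs=512 count=8 2>/dev/null
printf 'EFI PART' | dd of="$donor" bs=1 seek=512 conv=notrunc 2>/dev/null

# Failed drive: header zeroed, but encrypted payload intact further in.
dd if=/dev/zero of="$failed" bs=512 count=8 2>/dev/null
printf 'payload' | dd of="$failed" bs=1 seek=2048 conv=notrunc 2>/dev/null

# Step 2: save the donor's first sectors (the GPT header) to a file.
dd if="$donor" of="$hdr" bs=512 count=3 2>/dev/null
# Step 4: bit-for-bit clone of the failed drive, so the original is safe.
dd if="$failed" of="$clone" bs=512 2>/dev/null
# Step 6: graft the saved header onto the clone (notrunc keeps the rest).
dd if="$hdr" of="$clone" bs=512 count=3 conv=notrunc 2>/dev/null

# Verify: the clone now has the donor's GPT header AND the failed
# drive's payload.
sig=$(dd if="$clone" bs=512 skip=1 count=1 2>/dev/null | head -c 8)
payload=$(dd if="$clone" bs=1 skip=2048 count=7 2>/dev/null)
echo "header=$sig payload=$payload"
rm -f "$donor" "$failed" "$clone" "$hdr"
```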
Hopefully this helps and you are able to recover your files.
Please let me know if you have questions or need add'l help.

-Dave
 

theyost

Dabbler
Joined
Feb 24, 2019
Messages
30
So as long as you cannot get the GPT partitions work on a drive you are completely lost.

This is true... but if you have an extra identical drive, you should be able to recreate your GPT partition table. See the steps above and the steps in this thread under "I have already recovered the eightTB.spinners pool with the following steps".

-Dave
 
Top