Pool Config Disconnected, but Data Remains. Any possible recovery?

mirkots

Dabbler
Joined
Jun 26, 2013
Messages
18
Apologies for asking, but I couldn't find this particular situation, only similar ones with key differences. I was working on fixing my FreeNAS server: something had become corrupted on it, disabling jails, plugins, and other functionality, possibly due to a failing HDD. I did manage to save the data from that drive, but it took several days of slow transfers, and my plan was to get things functional again and then begin working on the backup server. In my tired state, while setting up a new copy of FreeNAS, I disconnected the pool on a drive I was still using when I only meant to disconnect the drive itself.

The data is not deleted, but the pool config is gone. The pool was just a single drive, and only the config is missing, which is what differs from anything I found while searching. I tried zpool import -D, but perhaps I used it wrong, as it found nothing. My usual recovery software is Windows-centric and doesn't handle network drives; I've considered buying software for this, or plugging the drive into a Linux box to recover it. I also tried going back to the corrupted FreeNAS config from before the pool was disconnected to see if I could recover it there, but no luck. I still have that as an option to try, though.
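For reference, this is roughly what I ran (going from memory, so the exact flags may be off). As I understand it, zpool import -D with no pool name should list any destroyed pools that are still importable, and naming the pool should attempt the import:

Code:
# list destroyed-but-importable pools
zpool import -D
# try importing the destroyed pool by name, forcing if needed
zpool import -D -f Games

Neither of those found anything.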

I've not done much yet for fear of making it worse, but I have a feeling I'm just out of luck.

Mobo: B450 Aorus M
CPU: Ryzen 5 1600X (6-core)
RAM: 16 GB given to FreeNAS (32 GB in the host)
HDD: 4 TB drive, either a WD Red or an HGST Deskstar (I'm fairly sure it was the Red)
Network: desktop Intel 82574L chipset NIC (the motherboard's built-in NIC is Realtek, which I've seen have issues with FreeNAS in the past)


tl;dr: Tired, accidentally deleted the pool config (but not the data) on a single-drive pool
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Can you please share the output from zpool status -v and zpool import?
 

mirkots

Dabbler
Joined
Jun 26, 2013
Messages
18
Can you please share the output from zpool status -v and zpool import?


Code:
  pool: Disk1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 1 days 04:49:46 with 0 errors on Mon Mar  2 01:49:49 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        Disk1                                         ONLINE       0     0     0
          gptid/ad00c506-ad39-11e2-bd87-001fc628cdc8  ONLINE       0     0     0

errors: No known data errors

  pool: Disk2
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 00:00:21 with 0 errors on Sat Feb  9 21:00:22 2019
config:

        NAME                                          STATE     READ WRITE CKSUM
        Disk2                                         ONLINE       0     0     0
          gptid/10a8ba10-1f9f-11e6-ad48-d05099a5425d  ONLINE       0     0     0

errors: No known data errors

  pool: Disk3
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
  scan: scrub repaired 0 in 0 days 03:03:37 with 0 errors on Sun Oct 14 00:03:39 2018
config:

        NAME                                          STATE     READ WRITE CKSUM
        Disk3                                         ONLINE       0     0     0
          gptid/e0ef80ad-b520-11e3-8dc9-001fc628cdc8  ONLINE       0     0     0

errors: No known data errors

  pool: Disk4
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 1 days 05:55:55 with 0 errors on Mon Mar  2 02:55:56 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        Disk4                                         ONLINE       0     0     0
          gptid/f966ac0c-4ca9-11e6-b304-d05099a5425d  ONLINE       0     0     0

errors: No known data errors

  pool: Disk8T1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 3 days 09:48:27 with 0 errors on Wed Mar  4 06:48:29 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        Disk8T1                                       ONLINE       0     0     0
          gptid/c6856d53-3ef9-11e9-a7bd-d05099a5425d  ONLINE       0     0     0

errors: No known data errors

  pool: Disk8T2
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 3 days 11:22:44 with 0 errors on Wed Mar  4 08:22:46 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        Disk8T2                                       ONLINE       0     0     0
          gptid/1c83d740-ab88-11e9-a665-d05099a5425d  ONLINE       0     0     0

errors: No known data errors

  pool: Movies
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 1 days 07:24:30 with 0 errors on Mon Mar  2 04:24:33 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        Movies                                        ONLINE       0     0     0
          gptid/334b98ed-1339-11e6-9c8c-d05099a5425d  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors


Code:
root@freenas[~]# zpool import
root@freenas[~]# zpool import -D Games
cannot import 'Games': no such pool available


The pool's name is old, from when I did Steam backups, but I think that drive held vacation photos (my ex would have the only other copy) and some system/NAND backups. I could comb through terabytes of history and other drives, but I would never know whether I had found everything. If this had happened just a little later, there would have been a full mirror. Lesson learned: don't computer half asleep.

I know I didn't delete the data, though. I saw the "delete data" option, and seeing it unchecked is part of why I didn't worry about disconnecting. Besides, the operation completed far too quickly to have wiped a few terabytes of data.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
It seems like you have a pool for each disk at the moment...

What comes out of:
gpart show ?
 

mirkots

Dabbler
Joined
Jun 26, 2013
Messages
18
It seems like you have a pool for each disk at the moment...

What comes out of:
gpart show ?
Yes, there is a reason it was set up like that. The backup I was working on was going to be a totally separate device for safety. When the server was first built I barely made over minimum wage, so buying a single HDD was a big deal and buying them in sets was an impossibility; one pool per drive was my way of growing it over time. Now that I can afford a better approach I was in the middle of setting one up, but in the meantime the server evolved into a serious thing with multiple users faster than I could keep up. That's why I mentioned it being a single drive. I know it's not ideal, but this project started when a single terabyte was a hefty price and my main system's boot drive was a Caviar PATA drive.

Code:
root@freenas[~]# gpart show
=>         40  15628053088  vtbd0  GPT  (7.3T)
           40           88         - free -  (44K)
          128      4194304      1  freebsd-swap  (2.0G)
      4194432  15623858688      2  freebsd-zfs  (7.3T)
  15628053120            8         - free -  (4.0K)

=>        34  7814037101  vtbd1  GPT  (3.6T)
          34          94         - free -  (47K)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  7809842696      2  freebsd-zfs  (3.6T)
  7814037128           7         - free -  (3.5K)

=>        34  7814037101  vtbd2  GPT  (3.6T)
          34          94         - free -  (47K)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  7809842696      2  freebsd-zfs  (3.6T)
  7814037128           7         - free -  (3.5K)

=>        34  3907029101  vtbd3  GPT  (1.8T)
          34          94         - free -  (47K)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  3902834696      2  freebsd-zfs  (1.8T)
  3907029128           7         - free -  (3.5K)

=>        34  3907029101  vtbd4  GPT  (1.8T)
          34          94         - free -  (47K)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  3902834696      2  freebsd-zfs  (1.8T)
  3907029128           7         - free -  (3.5K)

=>        34  3907029101  vtbd5  GPT  (1.8T)
          34          94         - free -  (47K)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  3902834696      2  freebsd-zfs  (1.8T)
  3907029128           7         - free -  (3.5K)

=>        34  5860533101  vtbd6  GPT  (2.7T)
          34          94         - free -  (47K)
         128     4194304      1  freebsd-swap  (2.0G)
     4194432  5856338696      2  freebsd-zfs  (2.7T)
  5860533128           7         - free -  (3.5K)

=>         40  15628053088  vtbd7  GPT  (7.3T)
           40           88         - free -  (44K)
          128      4194304      1  freebsd-swap  (2.0G)
      4194432  15623858688      2  freebsd-zfs  (7.3T)
  15628053120            8         - free -  (4.0K)

=>         40  15628053088  vtbd8  GPT  (7.3T)
           40           88         - free -  (44K)
          128      4194304      1  freebsd-swap  (2.0G)
      4194432  15623858688      2  freebsd-zfs  (7.3T)
  15628053120            8         - free -  (4.0K)

=>    17  381646  cd0  MBR  (745M)
      17  381646       - free -  (745M)

=>    17  381646  iso9660/FREENAS  MBR  (745M)
      17  381646                   - free -  (745M)

=>      40  41942960  da0  GPT  (20G)
        40    532480    1  efi  (260M)
    532520  41385984    2  freebsd-zfs  (20G)
  41918504     24496       - free -  (12M)


I do appreciate the help though. I know the possibilities are probably poor.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
So we need to work out which disk identifier is behind the Games pool. This command will produce quite a long output...
gpart list
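If you want to trim that down first, something along these lines should pull out just the geom names and partition UUIDs, and zdb can read the ZFS label on a candidate partition to show which pool it belongs to (vtbd2p2 below is only an example device, substitute whichever one you want to check):

Code:
# just the disk names and partition UUIDs
gpart list | grep -E 'Geom name|rawuuid'
# read the ZFS label on a partition to see its pool name
zdb -l /dev/vtbd2p2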
 

mirkots

Dabbler
Joined
Jun 26, 2013
Messages
18
So we need to work out which disk identifier is behind the Games pool. This command will produce quite a long output...
gpart list

Sorry for the delay.

Code:
root@freenas[~]# gpart list
Geom name: vtbd0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 15628053127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: vtbd0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,c672a9c1-3ef9-11e9-a7bd-d05099a5425d,0x80,0x400000)
   rawuuid: c672a9c1-3ef9-11e9-a7bd-d05099a5425d
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd0p2
   Mediasize: 7999415648256 (7.3T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,c6856d53-3ef9-11e9-a7bd-d05099a5425d,0x400080,0x3a3412a00)
   rawuuid: c6856d53-3ef9-11e9-a7bd-d05099a5425d
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 7999415648256
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 15628053119
   start: 4194432
Consumers:
1. Name: vtbd0
   Mediasize: 8001563222016 (7.3T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: vtbd1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: vtbd1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,f94939f3-4ca9-11e6-b304-d05099a5425d,0x80,0x400000)
   rawuuid: f94939f3-4ca9-11e6-b304-d05099a5425d
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd1p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,f966ac0c-4ca9-11e6-b304-d05099a5425d,0x400080,0x1d180be08)
   rawuuid: f966ac0c-4ca9-11e6-b304-d05099a5425d
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: vtbd1
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: vtbd2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: vtbd2p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,1bff03ab-13dd-11e5-901a-001fc628cdc8,0x80,0x400000)
   rawuuid: 1bff03ab-13dd-11e5-901a-001fc628cdc8
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd2p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,1c335256-13dd-11e5-901a-001fc628cdc8,0x400080,0x1d180be08)
   rawuuid: 1c335256-13dd-11e5-901a-001fc628cdc8
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: vtbd2
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: vtbd3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: vtbd3p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,acea6863-ad39-11e2-bd87-001fc628cdc8,0x80,0x400000)
   rawuuid: acea6863-ad39-11e2-bd87-001fc628cdc8
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd3p2
   Mediasize: 1998251364352 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,ad00c506-ad39-11e2-bd87-001fc628cdc8,0x400080,0xe8a08808)
   rawuuid: ad00c506-ad39-11e2-bd87-001fc628cdc8
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998251364352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3907029127
   start: 4194432
Consumers:
1. Name: vtbd3
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: vtbd4
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: vtbd4p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,109419df-1f9f-11e6-ad48-d05099a5425d,0x80,0x400000)
   rawuuid: 109419df-1f9f-11e6-ad48-d05099a5425d
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd4p2
   Mediasize: 1998251364352 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,10a8ba10-1f9f-11e6-ad48-d05099a5425d,0x400080,0xe8a08808)
   rawuuid: 10a8ba10-1f9f-11e6-ad48-d05099a5425d
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998251364352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3907029127
   start: 4194432
Consumers:
1. Name: vtbd4
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: vtbd5
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: vtbd5p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,e0db2296-b520-11e3-8dc9-001fc628cdc8,0x80,0x400000)
   rawuuid: e0db2296-b520-11e3-8dc9-001fc628cdc8
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd5p2
   Mediasize: 1998251364352 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,e0ef80ad-b520-11e3-8dc9-001fc628cdc8,0x400080,0xe8a08808)
   rawuuid: e0ef80ad-b520-11e3-8dc9-001fc628cdc8
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998251364352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3907029127
   start: 4194432
Consumers:
1. Name: vtbd5
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: vtbd6
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: vtbd6p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,332e546c-1339-11e6-9c8c-d05099a5425d,0x80,0x400000)
   rawuuid: 332e546c-1339-11e6-9c8c-d05099a5425d
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd6p2
   Mediasize: 2998445412352 (2.7T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,334b98ed-1339-11e6-9c8c-d05099a5425d,0x400080,0x15d10a308)
   rawuuid: 334b98ed-1339-11e6-9c8c-d05099a5425d
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2998445412352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 4194432
Consumers:
1. Name: vtbd6
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: vtbd7
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 15628053127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: vtbd7p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   efimedia: HD(1,GPT,1c702023-ab88-11e9-a665-d05099a5425d,0x80,0x400000)
   rawuuid: 1c702023-ab88-11e9-a665-d05099a5425d
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd7p2
   Mediasize: 7999415648256 (7.3T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,1c83d740-ab88-11e9-a665-d05099a5425d,0x400080,0x3a3412a00)
   rawuuid: 1c83d740-ab88-11e9-a665-d05099a5425d
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 7999415648256
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 15628053119
   start: 4194432
Consumers:
1. Name: vtbd7
   Mediasize: 8001563222016 (7.3T)
   Sectorsize: 512
   Mode: r2w2e5

Geom name: vtbd8
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 15628053127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: vtbd8p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r0w0e0
   efimedia: HD(1,GPT,c672a9c1-3ef9-11e9-a7bd-d05099a5425d,0x80,0x400000)
   rawuuid: c672a9c1-3ef9-11e9-a7bd-d05099a5425d
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: vtbd8p2
   Mediasize: 7999415648256 (7.3T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   efimedia: HD(2,GPT,c6856d53-3ef9-11e9-a7bd-d05099a5425d,0x400080,0x3a3412a00)
   rawuuid: c6856d53-3ef9-11e9-a7bd-d05099a5425d
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 7999415648256
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 15628053119
   start: 4194432
Consumers:
1. Name: vtbd8
   Mediasize: 8001563222016 (7.3T)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 41942999
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 272629760 (260M)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,d1975627-6b20-11ea-827a-d16366f6bfef,0x28,0x82000)
   rawuuid: d1975627-6b20-11ea-827a-d16366f6bfef
   rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
   label: (null)
   length: 272629760
   offset: 20480
   type: efi
   index: 1
   end: 532519
   start: 40
2. Name: da0p2
   Mediasize: 21189623808 (20G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(2,GPT,d19d8b7b-6b20-11ea-827a-d16366f6bfef,0x82028,0x2778000)
   rawuuid: d19d8b7b-6b20-11ea-827a-d16366f6bfef
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 21189623808
   offset: 272650240
   type: freebsd-zfs
   index: 2
   end: 41918503
   start: 532520
Consumers:
1. Name: da0
   Mediasize: 21474836480 (20G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
So here's the mapping:
vtbd6p2 = Movies
vtbd8p2 = Disk8T1
vtbd7p2 = Disk8T2
vtbd1p2 = Disk4
vtbd5p2 = Disk3
vtbd4p2 = Disk2
vtbd3p2 = Disk1
vtbd2p2 = Games (inferred from not being one of the others)
The rawuuid for vtbd2p2 is 1c335256-13dd-11e5-901a-001fc628cdc8, so we can try to import the pool with it at the CLI:

zpool import -d gptid/1c335256-13dd-11e5-901a-001fc628cdc8

You may need to try it this way if that doesn't fly

zpool import -d /dev/vtbd2p2

It will also be interesting to see smartctl -a /dev/vtbd2

It may also be vtbd0p2, with a rawuuid of c6856d53-3ef9-11e9-a7bd-d05099a5425d, so substitute those values if you think that's a better answer.
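From memory, -d normally expects a directory to search rather than a single device node, so pointing it at the gptid directory may be worth a try as well (check the man page, I'm not certain of this on your FreeBSD version):

Code:
# search /dev/gptid for importable pools
zpool import -d /dev/gptid
# if Games shows up in that listing, import it by name
zpool import -d /dev/gptid Games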
 

mirkots

Dabbler
Joined
Jun 26, 2013
Messages
18
So here's the mapping:
vtbd6p2 = Movies
vtbd8p2 = Disk8T1
vtbd7p2 = Disk8T2
vtbd1p2 = Disk4
vtbd5p2 = Disk3
vtbd4p2 = Disk2
vtbd3p2 = Disk1
vtbd2p2 = Games (inferred from not being one of the others)
The rawuuid for vtbd2p2 is 1c335256-13dd-11e5-901a-001fc628cdc8, so we can try to import the pool with it at the CLI:

zpool import -d gptid/1c335256-13dd-11e5-901a-001fc628cdc8

You may need to try it this way if that doesn't fly

zpool import -d /dev/vtbd2p2

It will also be interesting to see smartctl -a /dev/vtbd2

It may also be vtbd0p2, with a rawuuid of c6856d53-3ef9-11e9-a7bd-d05099a5425d, so substitute those values if you think that's a better answer.

I think I'm doing something wrong; it's giving a message about needing an absolute path. Apologies, I'm not the swiftest at the moment; I just flipped schedules at work. It's running in a VM at the moment with the drives passed through, but I don't think that would interfere, as everything else is working. I could swap back to running it outside the VM off a USB stick with my old config file if needed, though that config is a bit busted, with plugins and other things not working.

Code:
root@freenas[~]# zpool import -d gptid/1c335256-13dd-11e5-901a-001fc628cdc8
cannot open 'gptid/1c335256-13dd-11e5-901a-001fc628cdc8': must be an absolute path
root@freenas[~]# zpool import -d /dev/vtbd2p2
cannot open '/dev/vtbd2p2/': Not a directory
root@freenas[~]# smartctl -a /dev/vtbd2

smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.3-RELEASE-p6 amd64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

/dev/vtbd2: Unable to detect device type
Please specify device type with the -d option.

Use smartctl -h to get a usage summary

root@freenas[~]# zpool import -d gptid/c6856d53-3ef9-11e9-a7bd-d05099a5425d
cannot open 'gptid/c6856d53-3ef9-11e9-a7bd-d05099a5425d': must be an absolute path
root@freenas[~]#
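I assume the smartctl failure is just the disks being VirtIO devices in the guest (hence the vtbd names), so SMART commands don't pass through. If the SMART data matters, I can probably run it on the Proxmox host against the physical disk instead; sdc below is just a placeholder for whatever the host calls that disk:

Code:
# on the Proxmox host, against the physical disk backing vtbd2
smartctl -a /dev/sdc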
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Maybe try this way... I'm not convinced, but the site where I saw the -d option uses it...
zpool import -d /dev/dsk/vtbd2p2

After we get through this, we need to talk about the wisdom of passing through individual disks in a virtualized environment... you should read this and re-consider what you're doing. https://www.ixsystems.com/community...ide-to-not-completely-losing-your-data.12714/

mirkots

Dabbler
Joined
Jun 26, 2013
Messages
18
Maybe try this way... I'm not convinced, but the site where I saw the -d option uses it...
zpool import -d /dev/dsk/vtbd2p2

After we get through this, we need to talk about the wisdom of passing through individual disks in a virtualized environment... you should read this and re-consider what you're doing. https://www.ixsystems.com/community...ide-to-not-completely-losing-your-data.12714/
No luck with that yet either. Still trying, though. I really appreciate the help; I'm less familiar with BSD than Linux, so I'm a bit out of my element.

It's not running under VirtualBox or anything like that. It's under Proxmox, and if I power it off and plug in a USB stick with FreeNAS and the same config, it runs as if nothing changed. I moved to this setup because, while the current issue is of my own making, I've had enough problems with FreeNAS recently that I got tired of running across the place to restart it; rebooting from Proxmox is a lot easier, and I get the best features of both. I may have resolved the FreeNAS crashing issues with a NIC it liked better, but figuring that out wore on me, since for months it gave no output or indication that the NIC was the issue.

I can swap in the USB stick and try the commands that way just in case. Now that it's the weekend I should have more time to do that.
 

mirkots

Dabbler
Joined
Jun 26, 2013
Messages
18
Odd update to the situation: fresh FreeNAS install, re-imported the pools, and the missing pool was there as an option. It's a huge relief. Sadly, I'm in a permissions nightmare now, and the option I used to fix that with is gone in the new interface. I know it's normally on the Windows end, but it happens on Linux as well, so I think something isn't consistent in the sharing. It also happens with all of the drives, so nothing wants to share.

Seeing the data still there when setting up the shares at least gives me hope. Despite the permission issues blocking actual access to the data, it's amazingly uplifting just to be able to see the folders.
 

mirkots

Dabbler
Joined
Jun 26, 2013
Messages
18
Resolved it.

In case someone stumbles across this in years to come with a similar issue, I'll leave the fix here.
Fresh install of FreeNAS, then imported the pools (the old pool reappeared, as the data was never gone). To fix the sharing problem, I went into Accounts -> Groups and added my user's group (the new install had created a new one) to wheel.
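For reference, the CLI equivalent on FreeBSD should be something like the following (myuser is a placeholder for the actual account name), though on FreeNAS changes made outside the GUI may not survive a reboot, so the Accounts -> Groups route is the safer one:

Code:
# add the user to the wheel group
pw groupmod wheel -m myuser
# confirm the membership took
id myuser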

Why it needed that this time and never before, I've no idea. My only guess on why wheel worked is that I remembered it used to be the default group when the server was originally built, whereas the new version creates its own group per user. But I'm not questioning a win too much, and I will be building a mirror server, because I'm never doing this again.
 