Added HDD, tried to switch from stripe (1 HDD) to mirror (2 HDDs), and lost my pool (exported)

egib

Cadet
Joined
Feb 20, 2024
Messages
5
I had a single 3TB disk as a single pool, 'Pool1', in my TrueNAS SCALE deployment. I shut down and added a single 12TB drive. I booted up and tried expanding the pool, but after a reboot the pool seems messed up: it's not visible, two disks show as unassigned, and the 3TB drive I had working now shows "Pool1 (Exported)". How screwed am I here? Did I lose everything, or is there a way to recover?

[screenshots attached: the pool missing from the Storage dashboard, two unassigned disks, and the 3TB drive showing "Pool1 (Exported)"]


Code:
admin@TrueNAS1[~]$ sudo zpool status
  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:09 with 0 errors on Fri Feb 16 03:45:10 2024
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc3    ONLINE       0     0     0
            sdd3    ONLINE       0     0     0
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Look at zpool import

Your pool may be there...

You probably selected the wrong option if you selected expand, but that shouldn't have exported your pool.
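
For example, something like this (the -d scan path is an assumption on my part; /dev/disk/by-partuuid is where SCALE typically keeps its partition links):

Code:
# List exported/importable pools found on the default device paths:
sudo zpool import

# If nothing shows up, point the scan at the partition-uuid links:
sudo zpool import -d /dev/disk/by-partuuid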
 

egib

Cadet
Joined
Feb 20, 2024
Messages
5
Code:
admin@TrueNAS1[~]$ sudo zpool import
   pool: Pool1
     id: 6482869455148595120
  state: UNAVAIL
status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        Pool1                                   UNAVAIL  insufficient replicas
          7c538179-a861-4b84-a1de-bd2e0504d986  UNAVAIL  invalid label
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
When a single-disk pool has data corruption, you're going to find few ways out of that.

You may have some luck with other versions of ZFS (like running a copy of CORE or just a recent Ubuntu) to see if the pool can be imported there.

If not, and the data's really important to you, you can look into a recovery tool like Klennet (it costs nothing to find out what it can recover, but it's expensive to actually recover files).

Or you could try things like zpool import -Fn Pool1 and see what it tells you it would do... maybe you can get the pool back with only the loss of the last few transactions' worth of data.
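
A sketch of that, dry run first (the -F, -n, and readonly options are standard zpool flags, but no guarantee any of this works against a damaged label):

Code:
# Dry run: -F rewinds to an earlier transaction group if needed,
# -n only reports what the rewind would do, without importing.
sudo zpool import -Fn Pool1

# If the dry run looks sane, import read-only so nothing further
# gets written to the disk while you copy data off:
sudo zpool import -F -o readonly=on Pool1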
 

PhilD13

Patron
Joined
Sep 18, 2020
Messages
203
I'm a bit confused as to, one, what was done to extend, and two, what other process option there would be to select?

I thought the process to go from a single HDD in a stripe to a mirror was to insert the new drive to use, wait for it to show up in the Disks list, go to the pool's status page, then select the three-dot menu on the pool's single disk and select Extend from that menu. Select the new disk from the popup's dropdown and select Extend.

Other than some versions of TrueNAS possibly being a bit different due to improvements in the GUI layout, the process shouldn't really differ.
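
For reference, what the GUI's Extend does under the hood is roughly a zpool attach; a sketch with placeholder device paths:

Code:
# Attach a second device to the existing single-disk vdev,
# converting the stripe into a mirror (paths are placeholders):
sudo zpool attach Pool1 \
    /dev/disk/by-partuuid/<existing-partuuid> \
    /dev/disk/by-partuuid/<new-partuuid>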
 

egib

Cadet
Joined
Feb 20, 2024
Messages
5
PhilD13 said:
I'm a bit confused as to, one, what was done to extend, and two, what other process option there would be to select?

I thought the process to go from a single HDD in a stripe to a mirror was to insert the new drive to use, wait for it to show up in the Disks list, go to the pool's status page, then select the three-dot menu on the pool's single disk and select Extend from that menu. Select the new disk from the popup's dropdown and select Extend.

Other than some versions of TrueNAS possibly being a bit different due to improvements in the GUI layout, the process shouldn't really differ.

I think I might have clicked Expand instead of Extend...
 

egib

Cadet
Joined
Feb 20, 2024
Messages
5
sretalla said:
When a single-disk pool has data corruption, you're going to find few ways out of that.

You may have some luck with other versions of ZFS (like running a copy of CORE or just a recent Ubuntu) to see if the pool can be imported there.

If not, and the data's really important to you, you can look into a recovery tool like Klennet (it costs nothing to find out what it can recover, but it's expensive to actually recover files).

Or you could try things like zpool import -Fn Pool1 and see what it tells you it would do... maybe you can get the pool back with only the loss of the last few transactions' worth of data.

I tried the import command and nothing displays for output; it just takes me back to the normal screen. I disconnected the new drive and will try to see what I can do with the single drive and maybe bring it back. Otherwise I'll just rebuild it this weekend :(
 

egib

Cadet
Joined
Feb 20, 2024
Messages
5
(quoting a request from another forum member)
Hey @egib

Can you run these two commands separately and paste the output in code blocks?

sudo zdb -l /dev/sdc

sudo sfdisk -d /dev/sdc

Code:
admin@TrueNAS1[~]$ sudo zdb -l /dev/sdc
[sudo] password for admin:
failed to unpack label 0
failed to unpack label 1
------------------------------------
LABEL 2 (Bad label cksum)
------------------------------------
    version: 5000
    name: 'Pool1'
    state: 0
    txg: 276482
    pool_guid: 6482869455148595120
    errata: 0
    hostid: 352295267
    hostname: 'TrueNAS1'
    top_guid: 16856214861961264722
    guid: 16856214861961264722
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 16856214861961264722
        path: '/dev/disk/by-partuuid/7c538179-a861-4b84-a1de-bd2e0504d986'
        whole_disk: 0
        metaslab_array: 65
        metaslab_shift: 34
        ashift: 12
        asize: 3000585682944
        is_log: 0
        DTL: 10166
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 2
failed to unpack label 3


Code:
admin@TrueNAS1[~]$ sudo sfdisk -d /dev/sdc
label: gpt
label-id: 9BC88052-C815-4907-916B-19128E04E25D
device: /dev/sdc
unit: sectors
first-lba: 34
last-lba: 5860533134
sector-size: 512

/dev/sdc1 : start=        2048, size=  5860531087, type=6A898CC3-1DD2-11B2-99A6-080020736631, uuid=7C538179-A861-4B84-A1DE-BD2E0504D986



Also, here's some more info...
Code:
admin@TrueNAS1[~]$ sudo fdisk -l
[sudo] password for admin:
Disk /dev/sdb: 223.57 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F33E1379-2B8D-4BA0-9B89-159959EDA2FF

Device        Start       End   Sectors   Size Type
/dev/sdb1      4096      6143      2048     1M BIOS boot
/dev/sdb2      6144   1054719   1048576   512M EFI System
/dev/sdb3  34609152 468862094 434252943 207.1G Solaris /usr & Apple ZFS
/dev/sdb4   1054720  34609151  33554432    16G Linux swap

Partition table entries are not in disk order.


Disk /dev/sdc: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9BC88052-C815-4907-916B-19128E04E25D

Device     Start        End    Sectors  Size Type
/dev/sdc1   2048 5860533134 5860531087  2.7T Solaris /usr & Apple ZFS
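
Worth an aside here: zdb -l was run against the whole disk, but the vdev path in the label points at the partition. ZFS keeps four label copies, two at the start and two at the end of the vdev, so reading the partition directly might turn up intact copies (a sketch, not something tried in this thread):

Code:
# Read the ZFS labels from the partition rather than the whole disk:
sudo zdb -l /dev/sdc1

# Or via the by-partuuid path recorded in the label:
sudo zdb -l /dev/disk/by-partuuid/7c538179-a861-4b84-a1de-bd2e0504d986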
 