SOLVED - Wasn't originally concerned that the gptid disappeared when adding a replacement drive, but now that drive cannot be offlined.

Rob Granger

Dabbler
Joined
May 12, 2015
Messages
23
I wasn't going to worry, but then I started to check and found out that I can't offline the drive via the GUI; the other drives can be offlined.
One of the IBM SSDs started getting SMART errors on startup. It was replaced under warranty, so I offlined the old drive, put in the new drive, zeroed it, and did a replace. Do I manually fail the drive, take it to another system, wipe it, and add it again? All the remove and replace steps were done through the GUI, so my understanding is that this should have worked.
Thanks
Rob


TrueNAS-13.0-U5.3
HP Z4 Workstation
281gb Ram
6 1.92 IBM Data Center SSD
Virtualized on Proxmox 7.4.17

...
And I hoped my system info was still in my signature, but it's not. I truly apologize; I just got a call and have to run. I will complete the post with more details when I return.



Code:
root@freenas[~]# zpool status new-tank
  pool: new-tank
 state: ONLINE
  scan: resilvered 132K in 00:00:00 with 0 errors on Tue Jan 30 09:20:12 2024
config:

    NAME                                            STATE     READ WRITE CKSUM
    new-tank                                        ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/f206a5a7-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/f234d657-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/f239fc89-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        da3p2                                       ONLINE       0     0     0
        gptid/f2120d43-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/5ba49f11-9c4c-11ee-baa6-338af3b95ab7  ONLINE       0     0     0

errors: No known data errors


# Note: the drive does not show up in glabel status:

root@freenas[~]# glabel status
                                      Name  Status  Components
gptid/93dd5ddc-8455-11ee-936e-bd2b4da75ad7     N/A  da0p1
                           iso9660/TRUENAS     N/A  cd0
gptid/f239fc89-824d-11ee-a3a0-d05099c0ad1e     N/A  da4p2
gptid/5ba49f11-9c4c-11ee-baa6-338af3b95ab7     N/A  da1p2
gptid/f2120d43-824d-11ee-a3a0-d05099c0ad1e     N/A  da2p2
gptid/f206a5a7-824d-11ee-a3a0-d05099c0ad1e     N/A  da6p2
gptid/f234d657-824d-11ee-a3a0-d05099c0ad1e     N/A  da5p2


# The log from the GUI offline command for the disk whose gptid I want to fix:
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.507018+00:00 freenas.home zfsd 184 - - Creating new CaseFile:
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.507034+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838,)
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.507045+00:00 freenas.home zfsd 184 - -     Vdev State = OFFLINE
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.508449+00:00 freenas.home zfsd 184 - - GEOM: Notify  cdev=gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e subsystem=DEV timestamp=1706628802 type=CREATE
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.513965+00:00 freenas.home zfsd 184 - - Interrogating VDEV label for /dev/gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.566657+00:00 freenas.home zfsd 184 - - Onlined vdev(new-tank/10923598432274648838:/dev/gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e).  State now ONLINE.
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.566675+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838) closed - State ONLINE
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.568547+00:00 freenas.home zfsd 184 - - Creating new CaseFile:
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.568563+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838,)
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.568574+00:00 freenas.home zfsd 184 - -     Vdev State = ONLINE
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.570007+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838) closed - State ONLINE
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.571326+00:00 freenas.home zfsd 184 - - Creating new CaseFile:
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.571337+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838,)
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.571344+00:00 freenas.home zfsd 184 - -     Vdev State = ONLINE
Jan 30 09:33:22 freenas 1 2024-01-30T15:33:22.571351+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838) closed - State ONLINE

 
Last edited:

Rob Granger

Dabbler
Joined
May 12, 2015
Messages
23
Or do I just need to find a way to restore the label?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
gpart list da3, look for the rawuuid field of partition #2.

You should be able to do a zpool replace -f new-tank da3p2 gptid/<uuid from previous step>. Possibly need to zpool offline new-tank da3p2 first.
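
Roughly like this, as an untested sketch (with <uuid> standing in for the rawuuid of partition #2 reported by gpart list):

Code:
# list da3's partitions and note the rawuuid under "2. Name: da3p2"
gpart list da3

# may be needed first
zpool offline new-tank da3p2

# re-attach the member by its gptid (substitute the rawuuid found above)
zpool replace -f new-tank da3p2 gptid/<uuid>

# verify the vdev now shows up under its gptid
zpool status new-tank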
 

Rob Granger

Dabbler
Joined
May 12, 2015
Messages
23
Thanks, Patrick. As with many things, my bad juju seems to override what should work :)

Code:
root@freenas[~]# zpool offline new-tank da3p2
root@freenas[~]# zpool status new-tank
  pool: new-tank
 state: ONLINE
  scan: resilvered 276K in 00:00:00 with 0 errors on Tue Jan 30 14:42:32 2024
config:

    NAME                                            STATE     READ WRITE CKSUM
    new-tank                                        ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/f206a5a7-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/f234d657-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/f239fc89-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        da3p2                                       ONLINE       0     0     0
        gptid/f2120d43-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/5ba49f11-9c4c-11ee-baa6-338af3b95ab7  ONLINE       0     0     0


Crazy thing won't offline. It does the same as through the GUI: it goes offline (or doesn't) and comes right back online.

Code:
zpool offline new-tank da3p2

Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.699140+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838,)
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.699152+00:00 freenas.home zfsd 184 - -     Vdev State = OFFLINE
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.700743+00:00 freenas.home zfsd 184 - - GEOM: Notify  cdev=gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e subsystem=DEV timestamp=1706647470 type=CREATE
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.705957+00:00 freenas.home zfsd 184 - - Interrogating VDEV label for /dev/gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.757684+00:00 freenas.home zfsd 184 - - Onlined vdev(new-tank/10923598432274648838:/dev/gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e).  State now ONLINE.
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.757704+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838) closed - State ONLINE
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.759652+00:00 freenas.home zfsd 184 - - Creating new CaseFile:
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.759668+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838,)
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.759679+00:00 freenas.home zfsd 184 - -     Vdev State = ONLINE
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.761379+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838) closed - State ONLINE
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.763073+00:00 freenas.home zfsd 184 - - Creating new CaseFile:
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.763087+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838,)
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.763098+00:00 freenas.home zfsd 184 - -     Vdev State = ONLINE
Jan 30 14:44:30 freenas 1 2024-01-30T20:44:30.763107+00:00 freenas.home zfsd 184 - - CaseFile(10581997627745921799,10923598432274648838) closed - State ONLINE


And just the rename:

Code:
zpool replace -f new-tank da3p2  gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e
cannot open 'gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e': no such device in /dev
must be a full path or shorthand device name

# and with the full /dev path:
zpool replace -f new-tank /dev/da3p2  gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e
cannot open 'gptid/f2311aac-824d-11ee-a3a0-d05099c0ad1e': no such device in /dev
must be a full path or shorthand device name

root@freenas[~]# ll /dev/da*
crw-r-----  1 root  operator  - 0x61 Jan 30 09:18 /dev/da0
crw-r-----  1 root  operator  - 0x62 Jan 30 09:18 /dev/da0p1
crw-r-----  1 root  operator  - 0x63 Jan 30 09:18 /dev/da0p2
crw-r-----  1 root  operator  - 0x68 Jan 30 09:18 /dev/da1
crw-r-----  1 root  operator  - 0x70 Jan 30 09:18 /dev/da1p1
crw-r-----  1 root  operator  - 0x71 Jan 30 09:18 /dev/da1p2
crw-r-----  1 root  operator  - 0x6a Jan 30 09:18 /dev/da2
crw-r-----  1 root  operator  - 0x72 Jan 30 09:18 /dev/da2p1
crw-r-----  1 root  operator  - 0x73 Jan 30 09:18 /dev/da2p2
crw-r-----  1 root  operator  - 0x6b Jan 30 09:18 /dev/da3
crw-r-----  1 root  operator  - 0x74 Jan 30 09:18 /dev/da3p1
crw-r-----  1 root  operator  - 0x75 Jan 30 09:18 /dev/da3p2 <<<<<<
crw-r-----  1 root  operator  - 0x67 Jan 30 09:18 /dev/da4
crw-r-----  1 root  operator  - 0x6e Jan 30 09:18 /dev/da4p1
crw-r-----  1 root  operator  - 0x6f Jan 30 09:18 /dev/da4p2
crw-r-----  1 root  operator  - 0x6d Jan 30 09:18 /dev/da5
crw-r-----  1 root  operator  - 0x78 Jan 30 09:18 /dev/da5p1
crw-r-----  1 root  operator  - 0x79 Jan 30 09:18 /dev/da5p2
crw-r-----  1 root  operator  - 0x6c Jan 30 09:18 /dev/da6
crw-r-----  1 root  operator  - 0x76 Jan 30 09:18 /dev/da6p1
crw-r-----  1 root  operator  - 0x77 Jan 30 09:18 /dev/da6p2

 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I have never seen an offlined device come back online on its own. What you could probably do is replace da3p2 with a sparse file, then check the partitioning of the disk and replace the sparse file with gptid/<UUID>. I assume you have double-checked that this is the correct UUID.

If you want to take that risk - your pool will still have redundancy, but only a single physical disk's worth - do this:

Create a fake disk file on your pool, e.g. of 4T size: truncate -s 4T /mnt/new-tank/fakedisk
Replace your da3p2 with the fake disk: zpool replace new-tank da3p2 /mnt/new-tank/fakedisk

Check with swapinfo and gmirror status whether partition #1 of your da3 is part of the swap. Disable swapping on that particular mirror, e.g. swapoff swap1.eli. Then remove the virtual encryption device from da3p1: geli detach da3p1.

Then the disk is completely free of any active use and you could e.g. force a new partition table on it: gpart backup da2 | gpart restore -F da3. Then check the UUID with gpart list again and finally replace the fake disk with gptid/<UUID>.

Then reboot to kick the swap back into a consistent state.
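
Pieced together, the whole sequence would look roughly like this - an untested sketch, where the swap mirror name, the source disk da2 for the partition table, and <UUID> are placeholders you should verify on your own system first:

Code:
# 1. Create a sparse placeholder file and swap it in for the mislabeled member
truncate -s 4T /mnt/new-tank/fakedisk
zpool replace new-tank da3p2 /mnt/new-tank/fakedisk

# 2. Check whether da3p1 is part of swap, then release it
swapinfo
gmirror status
swapoff /dev/mirror/swap1.eli    # example name - use the mirror that contains da3p1
geli detach da3p1

# 3. Copy a known-good partition table onto da3 and read the new rawuuid of partition 2
gpart backup da2 | gpart restore -F da3
gpart list da3

# 4. Replace the placeholder file with the freshly labelled partition
zpool replace new-tank /mnt/new-tank/fakedisk gptid/<UUID>
zpool status new-tank

# 5. Reboot afterwards so swap comes back in a consistent state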

HTH,
Patrick
 

Rob Granger

Dabbler
Joined
May 12, 2015
Messages
23
I will try that. In the meantime, I had a spare SSD! Guess what: I bought it from a retailer on eBay as a used data center SSD... and it is password protected. There is no getting around that, even just to format the disk. So I formatted the disk I had pulled and am going to try shoving that back in.
When I built the pool, I used dual-parity ZFS (RAIDZ2), so I have a little redundancy built in.
 

Rob Granger

Dabbler
Joined
May 12, 2015
Messages
23
Well, that may have fixed it. It looks like the drive that I pulled, formatted on Windows, then had FreeNAS wipe and resilver, is coming back with a gptid.
Thanks for the help, Patrick. Now I have to see if I can return the locked IBM SSD... right after I find a wall to hit my head on.
Rob


Code:
zpool status new-tank
  pool: new-tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Jan 30 17:52:34 2024
    1.63T scanned at 26.1G/s, 85.2G issued at 1.33G/s, 4.89T total
    13.3G resilvered, 1.70% done, 01:01:38 to go
config:


    NAME                                              STATE     READ WRITE CKSUM
    new-tank                                          DEGRADED     0     0     0
      raidz2-0                                        DEGRADED     0     0     0
        gptid/f206a5a7-824d-11ee-a3a0-d05099c0ad1e    ONLINE       0     0     0
        gptid/f234d657-824d-11ee-a3a0-d05099c0ad1e    ONLINE       0     0     0
        gptid/f239fc89-824d-11ee-a3a0-d05099c0ad1e    ONLINE       0     0     0
        replacing-3                                   DEGRADED     0     0     0
          da3p2                                       OFFLINE      0     0     0
          gptid/a33e9488-bfca-11ee-ac2a-118a77377394  ONLINE       0     0     0  (resilvering)
        gptid/f2120d43-824d-11ee-a3a0-d05099c0ad1e    ONLINE       0     0     0
        gptid/5ba49f11-9c4c-11ee-baa6-338af3b95ab7    ONLINE       0     0     0
 

Rob Granger

Dabbler
Joined
May 12, 2015
Messages
23
Code:
No known data errors
root@freenas[~]# zpool status new-tank
  pool: new-tank
 state: ONLINE
  scan: resilvered 824G in 00:45:38 with 0 errors on Tue Jan 30 18:38:12 2024
config:

    NAME                                            STATE     READ WRITE CKSUM
    new-tank                                        ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/f206a5a7-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/f234d657-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/f239fc89-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/a33e9488-bfca-11ee-ac2a-118a77377394  ONLINE       0     0     0
        gptid/f2120d43-824d-11ee-a3a0-d05099c0ad1e  ONLINE       0     0     0
        gptid/5ba49f11-9c4c-11ee-baa6-338af3b95ab7  ONLINE       0     0     0
 