Single-drive pool offline after using the replace drive option

man-u-l

Cadet
Joined
Nov 8, 2023
Messages
8
I had a single 4 TB drive that was just starting to give me read errors, so I got a new 5 TB drive for a pool that uses only one drive (this is for Time Machine Mac backups, so I don't need more than one drive).

I used the replace drive option from the dashboard, and it took a while for TrueNAS to resilver the pool. After the process was done I removed the old drive, but the pool is now offline and I cannot import it. I tried a few things I read here, but no luck. At this point I don't care about the data; all I would like to do is add the new 5 TB disk to that existing pool and continue making my Time Machine backups. I know I can just create a new pool, but the existing one already has the permissions and storage limits for the two Macs I back up.
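
For reference, here is roughly how the replacement can be sanity-checked from the shell before the old drive is pulled. This is only a sketch; the pool name comes from later in the thread and the device name is an assumption:

Code:
# after the resilver, confirm the pool lists only the NEW disk's gptid as ONLINE
zpool status -v "TimeMachine 4Tb"

# map that gptid back to a physical device node (ada0, ada1, ...)
glabel status

# double-check the model/serial of the physical disk before pulling it (device name assumed)
smartctl -i /dev/ada1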

Thanks in advance!
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
You likely removed the wrong drive.
Output of zpool status and camcontrol devlist please, using [CODE][/CODE] tags.
 

man-u-l

Cadet
Joined
Nov 8, 2023
Messages
8
Thanks, I'm 100% sure the correct drive was removed, as it was 4 TB and my other pool is configured with 6 TB drives, and I have no problems with that other pool.

Code:
root@TrueNas[~]# zpool status
  pool: NAS
 state: ONLINE
  scan: scrub repaired 0B in 03:44:24 with 0 errors on Sun Oct 22 03:44:24 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS                                             ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/ac8f9b21-ae1f-11ed-ba90-ac1f6b61c610  ONLINE       0     0     0
            gptid/ac979464-ae1f-11ed-ba90-ac1f6b61c610  ONLINE       0     0     0
            gptid/ac9816cd-ae1f-11ed-ba90-ac1f6b61c610  ONLINE       0     0     0
            gptid/ac92672d-ae1f-11ed-ba90-ac1f6b61c610  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:04 with 0 errors on Tue Nov  7 03:45:04 2023
config:

Code:
root@TrueNas[~]# camcontrol devlist
<WDC WD6003FFBX-68MU3N0 83.00A83>  at scbus1 target 0 lun 0 (ada0,pass0)
<ST5000LM000-2U8170 0001>          at scbus2 target 0 lun 0 (ada1,pass1)
<WDC WD6003FFBX-68MU3N0 83.00A83>  at scbus3 target 0 lun 0 (ada2,pass2)
<WDC WD6003FFBX-68MU3N0 83.00A83>  at scbus4 target 0 lun 0 (ada3,pass3)
<WDC WD6003FFBX-68MU3N0 83.00A83>  at scbus5 target 0 lun 0 (ada4,pass4)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus6 target 0 lun 0 (ses0,pass5)
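
To map these devices to the gptid labels that zpool status prints, a minimal check (standard FreeBSD/TrueNAS CORE commands):

Code:
# show which gptid label lives on which partition/device
glabel status

# show the partition table of the new 5 TB drive (the ST5000LM000, ada1 above)
gpart show ada1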
 

man-u-l

Cadet
Joined
Nov 8, 2023
Messages
8
The zpool status output above got cut off at the end; here is the full run:

Code:
FreeBSD 13.1-RELEASE-p7 n245428-4dfb91682c1 TRUENAS

root@TrueNas[~]# zpool status
  pool: NAS
 state: ONLINE
  scan: scrub repaired 0B in 03:44:24 with 0 errors on Sun Oct 22 03:44:24 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS                                             ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/ac8f9b21-ae1f-11ed-ba90-ac1f6b61c610  ONLINE       0     0     0
            gptid/ac979464-ae1f-11ed-ba90-ac1f6b61c610  ONLINE       0     0     0
            gptid/ac9816cd-ae1f-11ed-ba90-ac1f6b61c610  ONLINE       0     0     0
            gptid/ac92672d-ae1f-11ed-ba90-ac1f6b61c610  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:04 with 0 errors on Tue Nov  7 03:45:04 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          nvd0p2    ONLINE       0     0     0

errors: No known data errors
root@TrueNas[~]#
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Are you sure you have an offline pool? Everything looks good to me.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The output you've posted is hard to make sense of, but notably absent is anything that looks like a single-disk pool. So what's the output of zpool import?
 

man-u-l

Cadet
Joined
Nov 8, 2023
Messages
8
The output you've posted is hard to make sense of, but notably absent is anything that looks like a single-disk pool. So what's the output of zpool import?
Code:
root@TrueNas[~]# zpool import
   pool: TimeMachine 4Tb
     id: 10777546277736715291
  state: UNAVAIL
status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        TimeMachine 4Tb  UNAVAIL  insufficient replicas
          ada1           UNAVAIL  invalid label
root@TrueNas[~]#
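
Note that the pool member shows up as the bare device ada1 with an invalid label. A way to check whether any ZFS label actually exists on the new disk is sketched below; the p2 partition number is an assumption (TrueNAS normally puts swap on p1 and the data partition on p2):

Code:
# show the partition layout of the new 5 TB disk
gpart show ada1

# try to read the ZFS labels from the data partition (partition number assumed)
zdb -l /dev/ada1p2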
 

man-u-l

Cadet
Joined
Nov 8, 2023
Messages
8
Are you sure you have an offline pool? Everything looks good to me.
I know this is weird; here are a couple of screenshots:
 

Attachments

  • Captura-de-pantalla 1.jpg (78.9 KB)
  • Captura-de-pantalla.jpg (23.5 KB)

man-u-l

Cadet
Joined
Nov 8, 2023
Messages
8
Like I said at the start, I don't care much about the data; I would like to connect the new 5 TB drive to that existing pool so I don't lose the settings for that share and the storage settings.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
You can't do that unless you still have the old drive untouched and you reattach it to the system.
I would suggest destroying that pool after copying the shares and ACL configs.
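
If it helps, a rough way to save the existing SMB share definitions for reference before recreating the pool, using the middleware CLI that ships with TrueNAS (only a sketch; noting the settings down from the WebUI works just as well):

Code:
# dump the current SMB share definitions (names, paths, auxiliary parameters) to a file
midclt call sharing.smb.query | python3 -m json.tool > /root/smb-shares-backup.json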
 