Tried to add log, accidentally added vdev

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
My setup - 11.2 U1, in an ESXi VM, with a Dell SAS 6 Gbps HBA in passthrough, and a NetApp DS4246 tray of disks via multipath SAS. All drives are 3 TB HGST 512-byte drives, with 2 GB for swap and the rest used for the vdevs you see below. I do have a full backup, but I REALLY don't want to have to redo everything and restore.

My issue - Well, I think I really screwed up this evening. I wanted to compare performance with/without an SSD log in my pool, and accidentally added the SSDs as a new mirrored data vdev instead of adding them as a log device.

My pool before my screw-up:

Code:
root@omega[~]# zpool status nebula
  pool: nebula
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        nebula                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/2fdc125f-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/30816c63-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/31403c46-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/31f1f182-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/32a4cfa3-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3356fcab-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/341311e6-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/34c9952c-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/3580b3ad-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/364188ba-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/370908e5-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/37cf00a5-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-6                                      ONLINE       0     0     0
            gptid/388e6fef-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3945fee7-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-7                                      ONLINE       0     0     0
            gptid/3a08fb45-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3ad7a643-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
           
errors: No known data errors
root@omega[~]#


Then, I did this:

Code:
root@omega[~]# gpart create -s gpt da33
da33 created
root@omega[~]# gpart create -s gpt da34
da34 created
root@omega[~]# gpart add -t freebsd-zfs -s 16G da33
da33p1 added
root@omega[~]# gpart add -t freebsd-zfs -s 16G da34
da34p1 added
root@omega[~]# zpool add nebula mirror da33p1 da34p1


You'll notice I forgot the "log" portion of that last command. Now my pool looks like this (note the new "mirror-8"):

Code:
root@omega[~]# zpool status nebula
  pool: nebula
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        nebula                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/2fdc125f-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/30816c63-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/31403c46-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/31f1f182-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/32a4cfa3-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3356fcab-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/341311e6-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/34c9952c-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/3580b3ad-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/364188ba-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/370908e5-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/37cf00a5-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-6                                      ONLINE       0     0     0
            gptid/388e6fef-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3945fee7-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-7                                      ONLINE       0     0     0
            gptid/3a08fb45-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3ad7a643-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-8                                      ONLINE       0     0     0
            da33p1                                      ONLINE       0     0     0
            da34p1                                      ONLINE       0     0     0

errors: No known data errors
root@omega[~]#

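For clarity, here is the difference between what I ran and what I meant to run; the two commands differ only by the `log` keyword:

```shell
# What I ran (creates a new top-level data vdev -- a permanent addition to the pool):
zpool add nebula mirror da33p1 da34p1

# What I meant to run (adds a mirrored SLOG, which can be removed at any time
# with "zpool remove"):
zpool add nebula log mirror da33p1 da34p1
```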

And finally, my questions for the experts -

1. Is there any way to remove "mirror-8" from my pool? I believe the answer is a resounding NO, but it can't hurt to ask; maybe something has changed.

2. Those two disks in "mirror-8" are my power-loss-safe SSDs. I'd like to use them for their intended purpose. Can I replace them with two new HDDs? I imagine I'd go through a procedure similar to replacing a failed drive in a mirrored pair.

3. If I CAN replace da33p1 and da34p1 with new HDDs, can I also increase the size of the mirrored pair? da33p1 and da34p1 are just 16 GB partitions on the SSDs; I'd much rather have two 3 TB drives in their place.

4. Pending the drive replacement, should I be concerned that they're referenced by their "da" names rather than by gptid? There's a very good chance those da names will change, and I worry this could affect my pool.

I'd sure appreciate any tips or a walkthrough on how to go about replacing those two SSDs with two 3 TB HDDs, if anyone feels inclined to provide that level of detail.

Thanks!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I believe the answer is a resounding NO
There is supposed to be a way to remove a vdev (for this very reason), but it is still on the road-map for the near future. Right now, I think you are stuck with this.
Those two disks in "mirror-8" are my power-loss-safe SSDs. I'd like to use them for their intended purpose. Can I replace them with two new HDDs? I imagine I'd go through a procedure similar to replacing a failed drive in a mirrored pair.
Yes, you can replace them with regular hard drives in the same way you would swap out a failed drive.
If I CAN replace da33p1 and da34p1 with new HDDs, can I also increase the size of the mirrored pair? The two da33p1 and da34p1 are just 16GB of SSD. I'd much rather have two 3TB drives in their place.
Yes, this is still the same process, just replace one drive at a time and the pool will autoexpand when both drives have been replaced.
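A rough sketch of what that looks like from the command line (the device names and gptids here are placeholders for your actual new drives; the GUI does the equivalent of this for you):

```shell
# Partition a new 3 TB disk the same way FreeNAS does (2 GB swap, rest for ZFS).
# "daXX" is a placeholder for the new drive's device name.
gpart create -s gpt daXX
gpart add -t freebsd-swap -s 2g daXX
gpart add -t freebsd-zfs daXX

# Replace one side of mirror-8, let it resilver, then do the other side.
zpool replace nebula da33p1 gptid/<new-disk-gptid>
zpool status nebula          # wait until the resilver completes
zpool replace nebula da34p1 gptid/<other-new-disk-gptid>

# With autoexpand on, the vdev grows to 3 TB once both drives are replaced.
zpool set autoexpand=on nebula
```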
Pending the drive replacement, should I be concerned with referencing them by their gptid, rather than "da" names? There's a very good chance those da names will change, and I worry this could affect my pool.
You should be making these changes in the GUI, and it will handle that. There is no reason to do it from the command line, and even if you do, you should use gptid instead of da#.

https://forums.freenas.org/index.php?resources/replacing-a-failed-failing-disk.75/
 


danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is there any way to remove "mirror-8" from my pool?
In 11.2, with a pool consisting only of single disks or mirrors, yes, there is. Look up the zpool remove command for details.
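Something like this, assuming the pool qualifies (all top-level vdevs are single disks or mirrors, with matching ashift):

```shell
# Remove the accidentally added data vdev; ZFS evacuates its data
# to the remaining vdevs in the background.
zpool remove nebula mirror-8

# Watch the removal/evacuation progress.
zpool status nebula
```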
 