Removing striped cache failed, 1 drive left in pool

Status
Not open for further replies.

thirdgen89gta

Dabbler
Joined
May 5, 2014
Messages
32
Code:
freenas# zpool status
  pool: freenas
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        freenas                                         ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/8b9452b9-d0ce-11e3-b4e4-448a5b619bf1  ONLINE       0     0     0
            gptid/8c7d1e6f-d0ce-11e3-b4e4-448a5b619bf1  ONLINE       0     0     0
            gptid/8d6c3ce0-d0ce-11e3-b4e4-448a5b619bf1  ONLINE       0     0     0
            gptid/8e52df74-d0ce-11e3-b4e4-448a5b619bf1  ONLINE       0     0     0
            gptid/8ecb1421-d0ce-11e3-b4e4-448a5b619bf1  ONLINE       0     0     0
          gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1    ONLINE       0     0     0

errors: No known data errors
freenas#


I was going to replace two older SSDs with some newer ones, but I didn't have enough SATA ports available, so I tried to remove the cache.

It reported that the removal succeeded, but then I noticed that one drive was left orphaned in the zpool.

Is there any way to remove this drive without backing up the entire pool and recreating it?

The device I wish to remove is gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1

If I try to remove it, it says:
cannot remove gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1: only inactive hot spares, cache, top-level, or log devices can be removed
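For comparison, here's a rough sketch of how an L2ARC device is normally added and removed (reusing the same gptid purely for illustration). A real cache device shows up under its own cache heading in zpool status, not at the same level as raidz1-0 like mine does:

Code:
# add an SSD as L2ARC -- the "cache" keyword is what makes it a cache vdev
zpool add freenas cache gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1

# it would then appear in zpool status under its own heading:
#     cache
#       gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1  ONLINE  0  0  0

# and a device listed there can be removed again at any time:
zpool remove freenas gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1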
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
You are in a precarious state now - you can't remove that drive. You have 5 disks in a RAIDZ1 vdev, striped with a single drive. That single drive is now a top-level data vdev, so the pool is striping data onto it, and if it fails you'll lose everything. I'd add another drive and attach it to that SSD as a mirror, so you have some redundancy.

To do so, you'll have to use the command line. Search the forum for the commands to use.
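Roughly, it's a zpool attach against that lone data vdev. This is a sketch only - /dev/ada6 is a placeholder for whatever new disk you add, and FreeNAS normally partitions disks and references them by gptid, so double-check the exact commands before running anything:

Code:
# attach a new disk to the single-disk vdev to turn it into a mirror
# (/dev/ada6 is a placeholder device name, not from this system)
zpool attach freenas gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1 /dev/ada6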
 

thirdgen89gta

Dabbler
Joined
May 5, 2014
Messages
32
I'm just going to back this up and blow the entire thing away; not even going to bother fixing it if it simply can't be undone. The orphaned SSD itself only has about 4.71MB on it, and I doubt it's going to fail at all. There has to be more than 4.71MB of data on it, though, even if it's parity data. How can you stripe dissimilar-sized arrays anyway? I would think that would default to the smallest drive.

Why would the failed removal of a striped SSD cache cause FreeNAS to use the remaining orphaned SSD as anything?

The original layout was 5x 4TB drives in RAIDZ1, plus 2x 128GB SSDs striped (RAID0) as L2ARC.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Why would the failed removal of a striped SSD cache cause FreeNAS to use the remaining orphaned SSD as anything?

It doesn't. Someone has to have actively done that. Or you're the first person to ever have that bug. ;)
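For what it's worth, the usual way a pool ends up looking like yours from the command line is a zpool add without the cache keyword. A sketch only, reusing your gptid for illustration:

Code:
# without the "cache" keyword the disk becomes a top-level data vdev
zpool add freenas gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1
# zpool warns about a mismatched replication level (raidz pool, plain disk)
# and refuses unless you force it:
zpool add -f freenas gptid/d0825aee-d4a0-11e3-be9c-448a5b619bf1
# the forced add produces the kind of layout shown in your zpool status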

You can stripe dissimilar-sized arrays because capacity is handled per vdev. Within a vdev, every disk acts like the smallest member: if you had a vdev of 5TB drives with one 100GB drive, they'd all act like 100GB drives. But across vdevs, the sizes can be different.

A friend has six 2TB drives in one vdev and six 4TB drives in another vdev. Total capacity is 8TB from vdev1 and 16TB from vdev2. :)
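Those numbers work out if both vdevs are RAIDZ2 (four data disks each). A sketch with placeholder device names, assuming da0-da5 are the 2TB disks and da6-da11 the 4TB ones:

Code:
# hypothetical pool with two RAIDZ2 vdevs of different disk sizes
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11
# each vdev is limited by its own smallest disk:
#   vdev1: 4 data disks x 2TB = 8TB usable
#   vdev2: 4 data disks x 4TB = 16TB usable
# ZFS stripes writes across both vdevs, for roughly 24TB total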
 