Need to rebuild a pool without recreating every detail

Status
Not open for further replies.

bcorbino

Cadet
Joined
Sep 27, 2014
Messages
3
I've looked, and the answer to the problem I've created for myself is that I have to move all my data, blow away the pool, and start over. Having accepted that, is there a way to do so that keeps all my volumes, etc. intact?

Here's what I've done to myself: Started with 3x 3TB WD Red in RAIDZ1. It's a single-user box, so I figured the performance would be OK. Then I made my mistake: I tried to add a fourth drive, and the GUI didn't tell me it wasn't behaving like the old HP/Compaq Smart Array and expanding the existing array. Instead, it added the disk as a single-drive vdev striped with the RAIDZ1 vdev.
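As best I can tell, what the GUI did behind the scenes is roughly equivalent to the command below (the device name is hypothetical, not from my actual system):

```shell
# Adds a single-disk vdev, which ZFS stripes alongside the existing
# raidz1 vdev. On the command line, -f is needed to override the
# redundancy-mismatch warning -- the GUI gave no such warning.
zpool add -f volume00 /dev/ada3
```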

The config looks like this:
Code:
  pool: volume00
state: ONLINE
  scan: scrub repaired 0 in 1h11m with 0 errors on Sun Sep 14 01:11:45 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        volume00                                        ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/856f1f61-2fc1-11e3-b7b1-00304867799a  ONLINE       0     0     0
            gptid/85e0135c-2fc1-11e3-b7b1-00304867799a  ONLINE       0     0     0
            gptid/86541baf-2fc1-11e3-b7b1-00304867799a  ONLINE       0     0     0
          gptid/bafa8b94-70cd-11e3-b086-00304867799a    ONLINE       0     0     0

errors: No known data errors


Yes, I know, I didn't RTFM. The question is: since I've got shares, iSCSI, and jails set up on this thing, can I fix this without having to recreate the entire configuration, and how? I didn't see any way to clone an entire pool, which is why I figured "OK, mirror to another drive, break the mirror, fix the problem, mirror back".

I'll probably reconfigure it as RAID10 since 6 TB should be enough space.

Thanks in advance.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

The pool needs to be destroyed, no way around it.

You'd probably want to replicate the pool onto a second one.
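As a sketch, replicating a pool onto a second one looks roughly like this (the snapshot name and the destination pool name "backup" are examples, not from this thread):

```shell
# Take a recursive snapshot of every dataset in the pool
zfs snapshot -r volume00@migrate

# Send the whole tree -- child datasets, snapshots, and properties
# (-R) -- and force the destination pool to match it (-F)
zfs send -R volume00@migrate | zfs receive -F backup
```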
 

bcorbino

Cadet
Joined
Sep 27, 2014
Messages
3
May I ask a favor? Can you just verify for me that this procedure will work before I go and buy a 3 TB external drive? I'm referencing this thread (http://forums.freenas.org/index.php?threads/zfs-send-to-external-backup-drive.17850/) for the gory details, but there are a few things I haven't found answers to anywhere.

First off, zfs list shows me using 2.63T in the pool, but a big chunk of that is a sparse iSCSI file extent that's only half full. Can I safely do a zfs receive onto the 3 TB drive (my best estimate is that it will show up as about 2.7 T)?
Second, when I destroy and recreate the pool on the internal drives, will all my mountpoints and jails and such be preserved, or am I recreating all of that?
Third, if what I've read in other threads here is to be believed, with 4 drives I'm much better off performance-wise with RAID10 than RAIDZ2, and better off resiliency-wise with RAID10 than RAIDZ1.
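One way I think I could check how much data actually has to move (sparse extents only count their allocated blocks in USED, assuming the pool is recent enough to report logicalused):

```shell
# Per-dataset allocated vs. logical size, recursively
zfs list -r -o name,used,logicalused,refer volume00

# Pool-level totals (ALLOC column = total allocated space)
zpool list volume00
```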

From what I've gleaned elsewhere, I need to do this:
  1. Connect the external drive and create a pool on it: zpool create UPool <external disk device>
  2. zfs snapshot -r volume00@snap
  3. zfs send -R volume00@snap | zfs receive -F UPool
  4. zpool destroy -f volume00
  5. zpool create volume00 mirror ada0 ada1 mirror ada2 ada3
  6. zfs send -R UPool@snap | zfs receive -F volume00
  7. zpool destroy UPool, then disconnect the external drive.
Is that right, or am I losing everything if I do it that way?
 