SOLVED Convert RAIDZ-1 to RAIDZ-2


Chris Tobey

Contributor
Joined
Feb 11, 2014
Messages
114
Hello everyone,

I have a fairly simple question about something that may or may not be possible. Here is what I want to do:

Take a current 5 x 4TB RAIDZ-1 vdev and turn it into a 6 x 4TB RAIDZ-2.

In order to do this I believe I will need a total of 11 x 4TB drives (which I have).

Steps:
1. Have 5 x 4TB RAIDZ-1 vdev.
2. Acquire and install additional 6 x 4TB drives.
3. Add a RAIDZ-2 mirror to existing 5 x 4TB RAIDZ-1 vdev.
4. Wait for zpool to complete the mirroring.
5. Remove 5 x 4TB RAIDZ-1 vdev.
6. Now only a 6 x 4TB RAIDZ-2 is left.

So far I have completed steps 1 and 2, but I am stuck on how to do steps 3 and 5 correctly (if they are even possible).

Current setup is:
Code:
# zpool status
  pool: SG1
state: ONLINE
  scan: scrub repaired 0 in 27h16m with 0 errors on Mon Nov 24 03:16:25 2014
config:

   NAME                                            STATE     READ WRITE CKSUM
   SG1                                             ONLINE       0     0     0
     raidz1-0                                      ONLINE       0     0     0
       gptid/87cd9f83-560a-11e3-a185-000c29e0733d  ONLINE       0     0     0
       gptid/884c73c9-560a-11e3-a185-000c29e0733d  ONLINE       0     0     0
       gptid/88ca162e-560a-11e3-a185-000c29e0733d  ONLINE       0     0     0
       gptid/8947f033-560a-11e3-a185-000c29e0733d  ONLINE       0     0     0
       gptid/dadf2c63-5866-11e3-b40f-000c29e0733d  ONLINE       0     0     0
   spares
     gptid/9dab55e6-6674-11e3-b84a-000c29e0733d    AVAIL

errors: No known data errors


Anyone know if this is possible, and if so, how to do it?

My other option would be to somehow manually do this, but I am hoping ZFS has this built in.

EDIT: The solution is to not use mirroring, but instead create a new pool and replicate the data. Full steps are in the posts below.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You don't want to mirror the vdevs; you want to create a new pool using a single vdev that is RAIDZ2, then cp or ZFS replicate the data over to the new pool.
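A rough CLI sketch of that approach (the pool names and the da5 through da10 device names are only placeholders for the six new disks; on FreeNAS you would normally build the pool through the GUI volume manager rather than with a raw zpool create):

Code:
# Build the new pool as a single 6-disk RAIDZ2 vdev (placeholder device names).
zpool create newpool raidz2 da5 da6 da7 da8 da9 da10

# Snapshot the old pool recursively and replicate everything to the new one.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -Fdu newpool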
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
As @SweetAndLow said, you need to create a new pool, rather than a new vdev for your existing pool. When multiple vdevs are present in a pool, data is always striped across them, never mirrored, and you can never remove a vdev from a pool once it's been added. You'll want to set up a new pool, replicate your data (which will preserve datasets, permissions, jails, etc.), detach the old pool, then rename the new pool to the old pool's name. Here's a recent post I wrote on the subject giving a bit more detail: https://forums.freenas.org/index.php?threads/mirror-to-raidz2.25908/#post-163322
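The rename at the end is, in rough terms, just an export followed by an import under the old name. A minimal sketch, using example pool names (do the detach and auto-import through the GUI where you can):

Code:
# Once replication is finished and the old pool has been detached:
zpool export newpool            # make sure nothing still has the new pool open
zpool import newpool oldpool    # import it under the old pool's name
zpool export oldpool            # export again so the FreeNAS GUI can auto-import it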

I successfully ran through these steps myself in the last few days. You can then optionally add your 5 x 4TB disks back to the pool as a new RAIDZ2 vdev.
 

Chris Tobey

Contributor
Joined
Feb 11, 2014
Messages
114
Thanks guys!

So to convert a RAIDZ-1 to RAIDZ-2:

Steps:
1. Have 5 x 4TB RAIDZ-1 pool "oldpool".
2. Acquire and install additional 6 x 4TB drives.
3. In the FreeNAS GUI, add a new RAIDZ-2 top level pool "newpool". (Storage>ZFS Volume Manager>Volume Name "newpool">Volume to Extend "-----", etc)
4. zfs send a full snapshot of oldpool to newpool.
5. Wait until complete.
6. zfs send another, incremental snapshot to pick up any changes made while the NAS was live during the initial copy.
7. Detach oldpool from FreeNAS GUI (Storage>Select oldpool>Detach Volume (stack of disks with red X at bottom)).
8. Remove 5 x 4TB RAIDZ-1 disks.
9. Detach newpool from the FreeNAS GUI.
10. From CLI, "zpool import newpool oldpool" (renames the new pool to the old pool's name).
11. From CLI, "zpool export oldpool".
12. From the FreeNAS GUI, auto-import what should now be the only pool, "oldpool".
13. Now only a 6 x 4TB RAIDZ-2 is left with all the original data.

Does that look right?

Is there a good way to do steps 4 and 6 from the GUI, or are they better suited to the CLI? I have not done replication before. Other than possibly those steps, and steps 10 and 11, I believe everything else is done either from the FreeNAS GUI or physically.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
When I did the migration on my machine, I created the snapshots using the GUI, but used the CLI to kick off the replication tasks. You may want to look into tmux for the command line, so the replication won't be interrupted if your SSH connection is lost.
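For example (the session, pool, and snapshot names below are only examples):

Code:
# Start a named tmux session and run the replication inside it.
tmux new -s repl
zfs send -Rv oldpool@backup | zfs receive -Fdu newpool

# Detach with Ctrl-b d; the send/receive keeps running if SSH drops.
# Reattach later to check on it:
tmux attach -t repl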

When you detach oldpool, you'll be asked if you want to delete the shares associated with that volume. You don't want to do that.

You'll be left with all your data in place on a shiny new 6-disk RAIDZ2 array. All your shares, jails, etc. will be in place.
 

Chris Tobey

Contributor
Joined
Feb 11, 2014
Messages
114
Sounds good!

I just did:
~# zfs snapshot -r oldpool@backup
~# zfs send -Rv oldpool@backup | zfs receive -Fdu newpool

Seems to be working :)
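A rough way to watch progress from another shell is to compare the space used on the two pools (not exact, but close enough):
~# zpool list oldpool newpool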
 

Chris Tobey

Contributor
Joined
Feb 11, 2014
Messages
114
My 10.5 TB took about 26 hours, which is ~120 MB/s; not bad.

I didn't put the incremental commands in correctly the first time and saw the "cannot receive new filesystem stream: destination has snapshots" and "broken pipe" errors. Here are working commands:

First time:
~# zfs snapshot -r oldpool@backup
~# zfs send -Rv oldpool@backup | zfs receive -Fdu newpool

Incremental:
~# zfs snapshot -r oldpool@backup_incremental
~# zfs send -Rv -i backup oldpool@backup_incremental | zfs receive -Fdu newpool
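A quick sanity check that both snapshots actually landed on the destination (the @backup and @backup_incremental snapshots should show up under newpool's datasets):
~# zfs list -t snapshot -r newpool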
 

Chris Tobey

Contributor
Joined
Feb 11, 2014
Messages
114
This works :) I did have to detach and then auto-import the new pool after the replication task in order to see valid data, though.
 