Full pool replacement

Status
Not open for further replies.

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Hi,

I've got a RaidZ1 (3 x 3TB) and after a few failures I'm feeling paranoid and want to add a new 3TB HDD and replace the entire pool with a RaidZ2 (4 x 3TB). I know you can't migrate these things so I've got all of my files backed up. Is there a way that I can backup my FreeNAS configuration and rebuild it on the new pool or will I have to start from scratch? It's currently installed on a thumb drive but my jails are on the HDDs.

I don't mind rebuilding everything, it's not that complicated.

Thanks,
Peter
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
If you have space (or at least SATA connectors) for all 7 disks to be connected to your system at the same time, here's what I think is probably the best answer:
  1. Attach the new disks and build a new pool on them--I'll call it newpool for example's sake.
  2. Using ZFS replication from the command line, replicate everything on oldpool to newpool (e.g., https://forums.freenas.org/index.php?threads/need-semi-disaster-recovery-help.21129/#post-122362)
  3. From the web GUI, detach both pools, and remove the disks for oldpool
  4. From the command line, zpool import newpool oldpool (this will rename newpool to oldpool)
  5. From the command line, zpool export oldpool
  6. From the web GUI, auto-import, which will bring in your new pool under the old pool's name.
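For anyone following along, steps 2, 4, and 5 can be sketched from the console roughly like this. This is a minimal sketch, not an exact recipe: the pool names `oldpool`/`newpool` and the snapshot name `@migrate` are placeholders, and you'd still do the detach/auto-import steps in the GUI as described above.

```shell
# Take a recursive snapshot of everything on the old pool (name is arbitrary)
zfs snapshot -r oldpool@migrate

# Replicate the whole pool, including all child datasets and properties,
# to the new pool. -R sends the full recursive stream; -F on the receive
# side forces a rollback of the target to match the incoming stream.
zfs send -R oldpool@migrate | zfs recv -dF newpool

# --- after detaching both pools in the GUI and pulling the old disks ---

# Importing under a second name renames the pool
zpool import newpool oldpool

# Export it again so the web GUI's auto-import can pick it up cleanly
zpool export oldpool
```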
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Thanks for the suggestion!

I should have mentioned that I only have 4 drives in total. 3 are currently in a raidz1 and I'm adding a 4th and implementing raidz2.
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
Thanks for the suggestion!

I should have mentioned that I only have 4 drives in total. 3 are currently in a raidz1 and I'm adding a 4th and implementing raidz2.

In that case you can't do what was suggested; you'll have to back everything up somewhere else and start from scratch building your raidz2.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Thanks for the suggestion!

I should have mentioned that I only have 4 drives in total. 3 are currently in a raidz1 and I'm adding a 4th and implementing raidz2.
Guess I didn't understand your original posting then, where you said you had a 3 x 3TB RAIDZ1 and were going to change it out for a 4 x 4TB RAIDZ2. Maybe you have a magic wand and can make the 3TB drives turn into 4TB drives ;)

Either way, I'm sure you're on the right track. Make sure to destroy your original pool and mark the disks as new so you will have no issue creating a new pool with all four drives installed.
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Guess I didn't understand your original posting then, where you said you had a 3 x 3TB RAIDZ1 and were going to change it out for a 4 x 4TB RAIDZ2. Maybe you have a magic wand and can make the 3TB drives turn into 4TB drives ;)
Whoops, my mistake! :) I'm re-using 3 of the original drives and adding 1 new drive to make my 4x3TB RaidZ2 array.

Either way, I'm sure you're on the right track. Make sure to destroy your original pool and mark the disks as new so you will have no issue creating a new pool with all four drives installed.
Yes, that's what I was planning on doing.

1) Delete old volume
2) Power off
3) Add new HDDs
4) Power on
5) Create new volume
6) Profit?
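Under the hood, step 5 amounts to something like the following. This is just a sketch of what the GUI's volume manager does for you; the pool name `tank` and the device names `ada0`-`ada3` are hypothetical, and the GUI additionally partitions the disks (GPT plus a swap slice) rather than using whole raw devices, so creating the volume from the web GUI is still the right way on FreeNAS.

```shell
# Create a 4-disk RAIDZ2 pool (two-disk redundancy) -- device names assumed
zpool create tank raidz2 ada0 ada1 ada2 ada3

# Verify the layout: should show a raidz2-0 vdev with four members, all ONLINE
zpool status tank
```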
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Hmmm, it won't let me delete the old volume with FreeNAS. This seems to be a safety mechanism.
Code:
Dec  8 09:27:11 freenas manage.py: [middleware.exceptions:38] [MiddlewareError: Disk offline failed: "cannot offline gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2: no valid replicas, "]

So I guess I need to do it via command line?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's not the way to delete a pool. ;)

Click on the pool and choose detach, then "mark disks as new". ;)
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
That's not the way to delete a pool. ;)

Click on the pool and choose detach, then "mark disks as new". ;)
I tried that first and nothing happened. I'm just rebooting the server now to see if it needs a reboot to take effect.

Here's what happens when I try to delete the volume and mark disks as new.
Code:
Dec  8 09:39:58 freenas notifier: Stopping winbindd.
Dec  8 09:39:58 freenas winbindd[2274]:   STATUS=daemon 'winbindd' finished starting up and ready to serve connectionsGot sig[15] terminate (is_parent=1)
Dec  8 09:39:58 freenas winbindd[2281]:   STATUS=daemon 'winbindd' finished starting up and ready to serve connectionsGot sig[15] terminate (is_parent=0)
Dec  8 09:39:58 freenas winbindd[2278]:   STATUS=daemon 'winbindd' finished starting up and ready to serve connectionsGot sig[15] terminate (is_parent=0)
Dec  8 09:39:58 freenas winbindd[2282]:   STATUS=daemon 'winbindd' finished starting up and ready to serve connectionsGot sig[15] terminate (is_parent=0)
Dec  8 09:39:58 freenas notifier: Waiting for PIDS: 2274.
Dec  8 09:39:58 freenas notifier: Stopping smbd.
Dec  8 09:39:58 freenas notifier: Waiting for PIDS: 2271.
Dec  8 09:39:58 freenas notifier: Stopping nmbd.
Dec  8 09:39:58 freenas nmbd[2268]:   STATUS=daemon 'nmbd' finished starting up and ready to serve connectionsGot SIGTERM: going down...
Dec  8 09:39:58 freenas notifier: Waiting for PIDS: 2268.
Dec  8 09:39:58 freenas notifier: rpcbind not running?
Dec  8 09:39:58 freenas notifier: lockd not running?
Dec  8 09:39:58 freenas notifier: statd not running?
Dec  8 09:39:58 freenas notifier: mountd not running? (check /var/run/mountd.pid).
Dec  8 09:39:58 freenas notifier: nfsd not running?
Dec  8 09:39:59 freenas kernel: epair0a: link state changed to DOWN
Dec  8 09:39:59 freenas kernel: epair0b: link state changed to DOWN
Dec  8 09:40:04 freenas kernel: epair2a: link state changed to DOWN
Dec  8 09:40:04 freenas kernel: epair2b: link state changed to DOWN
Dec  8 09:40:07 freenas kernel: epair1a: link state changed to DOWN
Dec  8 09:40:07 freenas kernel: epair1b: link state changed to DOWN
Dec  8 09:40:07 freenas kernel: re0: link state changed to DOWN
Dec  8 09:40:07 freenas kernel: bridge0: link state changed to DOWN
Dec  8 09:40:07 freenas kernel: re0: promiscuous mode disabled
Dec  8 09:40:11 freenas dhclient: New IP Address (re0): 192.168.1.141
Dec  8 09:40:11 freenas kernel: re0: link state changed to UP
Dec  8 09:40:11 freenas dhclient: New Subnet Mask (re0): 255.255.255.0
Dec  8 09:40:11 freenas dhclient: New Broadcast Address (re0): 192.168.1.255
Dec  8 09:40:11 freenas dhclient: New Routers (re0): 192.168.1.1
Dec  8 09:40:30 freenas kernel: GEOM_ELI: Device ada0p1.eli destroyed.
Dec  8 09:40:30 freenas kernel: GEOM_ELI: Detached ada0p1.eli on last close.
Dec  8 09:40:31 freenas notifier: geli: No such device: /dev/ada0p1.
Dec  8 09:40:31 freenas kernel: GEOM_ELI: Device ada1p1.eli destroyed.
Dec  8 09:40:31 freenas kernel: GEOM_ELI: Detached ada1p1.eli on last close.
Dec  8 09:40:31 freenas notifier: geli: No such device: /dev/ada1p1.
Dec  8 09:40:31 freenas manage.py: [middleware.exceptions:38] [MiddlewareError: jail not found]
Dec  8 09:40:38 freenas kernel: ifa_del_loopback_route: deletion failed
Dec  8 09:40:38 freenas kernel: Freed UMA keg (udp_inpcb) was not empty (40 items).  Lost 4 pages of memory.
Dec  8 09:40:38 freenas kernel: Freed UMA keg (udpcb) was not empty (504 items).  Lost 3 pages of memory.
Dec  8 09:40:38 freenas kernel: Freed UMA keg (tcptw) was not empty (150 items).  Lost 3 pages of memory.
Dec  8 09:40:38 freenas kernel: Freed UMA keg (tcp_inpcb) was not empty (40 items).  Lost 4 pages of memory.
Dec  8 09:40:38 freenas kernel: Freed UMA keg (tcpcb) was not empty (16 items).  Lost 4 pages of memory.
Dec  8 09:40:38 freenas kernel: hhook_vnet_uninit: hhook_head type=1, id=1 cleanup required
Dec  8 09:40:38 freenas kernel: hhook_vnet_uninit: hhook_head type=1, id=0 cleanup required
Dec  8 09:40:39 freenas kernel: ifa_del_loopback_route: deletion failed
Dec  8 09:40:39 freenas kernel: Freed UMA keg (udp_inpcb) was not empty (40 items).  Lost 4 pages of memory.
Dec  8 09:40:39 freenas kernel: Freed UMA keg (udpcb) was not empty (504 items).  Lost 3 pages of memory.
Dec  8 09:40:39 freenas kernel: Freed UMA keg (tcptw) was not empty (50 items).  Lost 1 pages of memory.
Dec  8 09:40:39 freenas kernel: Freed UMA keg (tcp_inpcb) was not empty (40 items).  Lost 4 pages of memory.
Dec  8 09:40:39 freenas kernel: Freed UMA keg (tcpcb) was not empty (20 items).  Lost 5 pages of memory.
Dec  8 09:40:39 freenas kernel: hhook_vnet_uninit: hhook_head type=1, id=1 cleanup required
Dec  8 09:40:39 freenas kernel: hhook_vnet_uninit: hhook_head type=1, id=0 cleanup required


But then the volume is still there.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, it doesn't offline disks in the pool as part of destroying the pool (which is what your error message is saying), so you're definitely doing it wrong. ;)
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Yeah, it doesn't offline disks in the pool as part of destroying the pool (which is what your error message is saying), so you're definitely doing it wrong. ;)
Hmm, all I'm doing is clicking on the volume, then the "Detach Volume" button, clicking "Mark the disks as new" and then clicking "Yes".

Maybe I need to pull the drives and format them in another machine.
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Ah-ha! If I don't select "Mark the disks as new" then it allows me to detach properly. I'm going to do a quick wipe and then put the new drives in.
Code:
Dec  8 09:47:35 freenas manage.py: [middleware.exceptions:38] [MiddlewareError: Failed to wipe ada0p1: dd: /dev/ada0p1: Operation not permitted ]

Boo!

It's interesting to see all of the failsafes that are in FreeNAS. ;)

Hmmm, it still won't let me wipe after a reboot.
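The "Operation not permitted" from dd is FreeBSD's GEOM layer refusing writes to a provider it considers in use. If the GUI wipe keeps failing, the disk can usually be cleared from the console; a rough sketch (device name `ada0` is hypothetical, and this destroys everything on that disk):

```shell
# Allow "foot-shooting" writes to GEOM providers the kernel thinks are in use
sysctl kern.geom.debugflags=0x10

# Destroy the GPT partition table, even if partitions still exist (-F)
gpart destroy -F ada0

# Zero the first megabyte to clear any leftover metadata at the disk's start
dd if=/dev/zero of=/dev/ada0 bs=1m count=1
```

As it turned out below, this wasn't needed here: FreeNAS's volume creation handled the old labels itself.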
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
I popped in the new drives and was able to create a new RAIDZ2 volume with my 4 drives. No wiping necessary.

Just copying everything back now.. ;)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Hmm.. I might have to test this and see if there's a bug. What version of FreeNAS and what hardware are you using? I'm going to see if there's a bug and I need to know what parameters might be important. ;)
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Hmm.. I might have to test this and see if there's a bug. What version of FreeNAS and what hardware are you using? I'm going to see if there's a bug and I need to know what parameters might be important. ;)
FreeNAS-9.2.1.8-RELEASE-x64 (e625626)
CPU: Intel(R) Core(TM) i3-3220T CPU @ 2.80GHz
RAM: 16053MB
HDD: 4x 3TB
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Seems to work for me in a simple test case.

Setup:
virtual test environment using 9.2.1.8

Repro:
1. create new pool "tank2", single-disk stripe
2. click detach button and check the mark as new box

Logs:
Code:
Dec  8 22:52:20 freenas-test kernel: GEOM_ELI: Device da5p1.eli destroyed.
Dec  8 22:52:20 freenas-test kernel: GEOM_ELI: Detached da5p1.eli on last close.
Dec  8 22:52:20 freenas-test notifier: geli: No such device: /dev/da5p1.
Dec  8 22:52:20 freenas-test notifier: swapoff: /dev/da5p1.eli: No such file or directory
Dec  8 22:52:20 freenas-test notifier: geli: No such device: /dev/da5p1.
Dec  8 22:52:20 freenas-test notifier: da5 destroyed
Dec  8 22:52:20 freenas-test notifier: da5 created
Dec  8 22:52:20 freenas-test notifier: da5 destroyed
Dec  8 22:52:23 freenas-test notifier: geli: Cannot access da1p1 (error=1).
Dec  8 22:52:24 freenas-test notifier: Stopping collectd.
Dec  8 22:52:25 freenas-test notifier: Waiting for PIDS: 5463.
Dec  8 22:52:25 freenas-test notifier: Starting collectd.
Dec  8 22:52:29 freenas-test notifier: geli: Cannot access da1p1 (error=1).
Dec  8 22:52:29 freenas-test notifier: Stopping collectd.
Dec  8 22:52:35 freenas-test notifier: Waiting for PIDS: 6197.
Dec  8 22:52:35 freenas-test notifier: Starting collectd.

EDIT: Even better, I also tested with a pool that had the system dataset on it; the dataset actually gets copied over to a different pool and reconfigured. If there weren't another pool, there might be problems?
 