Best way to migrate to new server

Status
Not open for further replies.

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
So here's the situation: I've had a FreeNAS system with 4x6TB in RAID-Z2 and 4x500GB SSDs in a mirror for a while. But the case it was in was running out of space and had only a single power supply, so I've built a new system. I'm currently doing ZFS replication from the old system to the new one. Once my data is replicated, I want to take the old system and move it off-site to use as a backup, and promote the new system to primary production server. What's the best way to do this?

Export the config from the old system, shut it down, import it on the new one, relocate the old server to its new location, and then reverse the direction of ZFS replication?

Sorry if this has been discussed before; I've searched and haven't found anything, so feel free to point me in the right direction if I missed it. Also, as a side note: on the latest patch as of today I had to prefix the remote ZFS volume/dataset with "/mnt/" for it to work, even though the manual says it's not necessary. If I didn't, it would always fail saying the dataset does not exist.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I would build the new system from scratch and simply use ZFS replication to get the data on the new server.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Thanks for the reply, cyberjock; that's what I'm doing now. Perhaps I should have been clearer with my question.

Right now the new system I built is colocated in the same rack as the original system. I'm using ZFS replication to copy data from the old system to the new one, and it looks like it's going to take another 16 hours. Once that finishes, I want to take the current production system (the source of the replication data), move it off-site, and use it as a backup. Can I just delete the replication task on the old system and create an identical task on the new system to "reverse" the flow of replication?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Thanks for the reply, cyberjock; that's what I'm doing now. Perhaps I should have been clearer with my question.

Right now the new system I built is colocated in the same rack as the original system. I'm using ZFS replication to copy data from the old system to the new one, and it looks like it's going to take another 16 hours. Once that finishes, I want to take the current production system (the source of the replication data), move it off-site, and use it as a backup. Can I just delete the replication task on the old system and create an identical task on the new system to "reverse" the flow of replication?

Sure can! It really is that easy. Just make sure you reverse the flow while the two nodes still share a common snapshot or you will have to re-replicate all of the data.
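
The reversal works because the new task can do an incremental send starting from that shared snapshot. If you want to see what the GUI is relying on, by hand it would look something like this (dataset and snapshot names are made up for illustration; the replication task does the equivalent for you):

    # on the new primary, once the old box is back online at its new site
    zfs list -r -t snapshot -o name tankB/X    # confirm the shared snapshot still exists on both sides
    zfs send -i tankB/X@shared tankB/X@latest | ssh oldserver zfs receive -F tankA/X
    # -F rolls the destination back to the shared snapshot before applying the increment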
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Thanks, glad to hear it. As long as I've got the expert here, a couple of other things:

1. After getting everything set the way I want, can I change the periodic snapshots to something more reasonable? I currently have them set to hourly with a 6-hour lifetime because I want to be able to cut over when necessary, but later I will probably want something like daily snapshots with a few weeks of retention, or maybe some sort of tiered scheme.

2. Kinda irrelevant I guess, since I have it working, but for the life of me I can't get replication to just work the first time I set up a task. For instance, I have tankA/X (old system) replicating to tankB/X (new system). But now that that's done and I want to replicate tankA/Y to tankB/Y, every time I set it up as per the manual it errors out saying that the dataset tankB/Y doesn't exist. If I tinker around with manually creating tankB/Y, toggling initialize, etc., I can see in the network reporting that it eventually works, including creating sub-datasets such as tankB/Y/sub1/sub2. A bit disconcerting and annoying, but it seems to be working.
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
What FreeNAS version? Some recent updates had a problem with replication.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
FreeNAS-9.3-STABLE-201509282017 on both ends
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
I find that if I ask it to replicate tankA/Y to tankB/ and to initialise on the first run, it then creates tankB/Y, which I think is how it is supposed to work.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
I've got one more dataset to transfer; I'll try it as suggested and report back.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Can confirm: on a brand new pool, trying to replicate from server1/tank1/A to Server2/tank1 with recursive, initialize, and delete stale snapshots enabled, I get "failed: cannot open tank1/A: dataset does not exist".
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
Can confirm: on a brand new pool, trying to replicate from server1/tank1/A to Server2/tank1 with recursive, initialize, and delete stale snapshots enabled, I get "failed: cannot open tank1/A: dataset does not exist".
I can confirm your findings. This was working (again, having previously been broken in 9.3.1) one or two updates ago. I think we need to report a new bug.
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
On the other hand, if I do the following, it works. Maybe that is how it is supposed to work now; it's different from what it was:

For top level dataset A on PUSH pool p1, set up a snapshot task for p1/A

On PULL with pool p2, and a top-level dataset backup already created and used for various things, create a dataset A: p2/backup/A

Set up a replication task on PUSH for p1/A, with the tickbox for initialise on first run ticked, destination p2/backup

It then starts replicating A snapshots on PUSH to p2/backup/A on PULL without further intervention.


I expect it would work using the destination p2/A, but I prefer not to put my replicated snapshots in the main pool on PULL, so I can't easily test this.

Previously the step of creating p2/backup/A was done by the replication task, as long as p2/backup already existed, but it now appears to be necessary to do it manually.
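
So as far as I can tell the manual workaround is just one command on PULL before the task's first run (names as in the steps above):

    zfs create p2/backup/A    # pre-create the dataset the replication task expects to find

After that, the first run populates it from the initial snapshot on its own.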


https://bugs.freenas.org/issues/11847 created as a documentation correction bug.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
This worked on my first test, which was a zvol. I'll try again with a regular dataset.

One thing that is kinda odd, though: the sending system is reporting the status as 'up to date' with no errors, but on the sending system the zvol shows size 250G, used 342.1GiB, and a 1.99x compression ratio. On the receiving server it shows size 250G, used 98.9GiB, and a 1.99x compression ratio. The zvol is used to store VM disks exported over iSCSI to XenServer 6.5. Does this seem right?
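
For anyone who wants to compare the raw numbers, I'm pulling the space and compression properties on each box with something along these lines (the zvol path is just a placeholder for mine):

    zfs get volsize,used,referenced,logicalreferenced,usedbysnapshots,compressratio tank/vm-zvol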
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
This worked on my first test, which was a zvol. I'll try again with a regular dataset.

One thing that is kinda odd, though: the sending system is reporting the status as 'up to date' with no errors, but on the sending system the zvol shows size 250G, used 342.1GiB, and a 1.99x compression ratio. On the receiving server it shows size 250G, used 98.9GiB, and a 1.99x compression ratio. The zvol is used to store VM disks exported over iSCSI to XenServer 6.5. Does this seem right?
Sorry, I don't know about the zvol question - anyone have any ideas? Perhaps something like a disk image done from the client would be more reliable - I have no idea.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
More of a curiosity than anything at this point. I'll take the original system offline, then try to bring everything up on the new server and verify it's all working before relegating the original system to backup duty. I'm guessing it may have something to do with Xen not reclaiming freed disk space, so that space is still counted as used on the old server, but when I replicated there was nothing in it that actually needed to be transferred? If it is in fact working, it would be nice to understand what's going on under the hood.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
What's my best resource to understand replication better? I've obviously read the section in the FreeNAS manual, but I need to understand potential failure modes, how it actually works, and which errors matter and which don't.

I just don't want to have to come here to the forum for every little thing and bug the community. For instance, yesterday during the initial replication I had a switch crap out, and FreeNAS went red and sent out an email saying the replication had failed. I replaced the switch, got busy with some other things, and it looks like the replication eventually finished on its own.

Something to be concerned about? I have no idea.

Now today I've got a big dataset, 4.3TB, and I had my periodic snapshots set to a pretty high frequency because I eventually want to cut over from the old system to the new one. But the replication keeps failing. Then I maybe had an aha moment when I realized that the snapshots were set to every hour with a 6-hour lifetime, but the initial replication for that dataset takes something like 16 hours, so the snapshot being sent expires before the transfer can finish.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
What's my best resource to understand replication better? I've obviously read the section in the FreeNAS manual, but I need to understand potential failure modes, how it actually works, and which errors matter and which don't.

I just don't want to have to come here to the forum for every little thing and bug the community. For instance, yesterday during the initial replication I had a switch crap out, and FreeNAS went red and sent out an email saying the replication had failed. I replaced the switch, got busy with some other things, and it looks like the replication eventually finished on its own.

Something to be concerned about? I have no idea.

Now today I've got a big dataset, 4.3TB, and I had my periodic snapshots set to a pretty high frequency because I eventually want to cut over from the old system to the new one. But the replication keeps failing. Then I maybe had an aha moment when I realized that the snapshots were set to every hour with a 6-hour lifetime, but the initial replication for that dataset takes something like 16 hours, so the snapshot being sent expires before the transfer can finish.

Read through my recent posts; I have explained a few issues related to replication.
I am currently relocating datasets within the same volume, but at a different dataset level, to simplify replication. I have then run the FreeNAS GUI automatic replication to an 8TB drive. So far so good; I do not expect any major drawbacks or issues.
The replication process is definitely not entirely obvious, in particular whether the destination needs to carry the name of the dataset it originates from or whether the replication will be created underneath the destination dataset.

When in doubt, you can list the snapshots on the destination and the source and compare them. Are they all there?
To best guarantee that your replication keeps working, I would recommend you take a recursive manual snapshot of your pool. Manual snapshots never expire, so this will always give you a common reference point.
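
For example, something like this on each side (pool and dataset names are only placeholders):

    zfs list -r -t snapshot -o name,creation pool/dataset    # run on source and destination, then compare the lists
    zfs snapshot -r pool@manual-reference                    # recursive manual snapshot as a long-lived common point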
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Not sure if this belongs in a new thread or not, but it's related to the above, so here's the latest.

Got replication working after creating new periodic snapshots with a longer lifetime. All three top-level datasets recursively replicated with no indication of error.

However, I'm not sure if this is just something odd, or a bug, or whether it has any underlying impact beyond the visual difference, but I now have two servers, both on FreeNAS-9.3-STABLE-201509282017. On the original system the datasets in the Volumes tab look like the first screenshot, with collapsible trees. After I did the set of three recursive replications to the new system, the structure of the trees matches but they are not collapsible.

Also, on the new system you can see two small datasets that don't exist on the original (deathstar/resx and deathstar/resx/trdcv).

Does this matter? Why did it happen? I've rebooted both systems as well and the condition persists.
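
In case it helps, this is how I'm dumping the dataset tree on each box to compare them from the shell (pool name as it is on each system):

    zfs list -r -t filesystem,volume -o name,used deathstar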
 

Attachments

  • freenasoriginal.png
  • freenasnew.png

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
I wonder if you had any such dataset at some stage? Can you see any files you recognise in /mnt/tank1/deathstar/resx or in the directories below it? Do you have any snapshots with the same dataset name on the sending machine? If not, it is really rather worrying!

Edit: if there are no snapshots of /trdcv on either machine then I have another theory and I would search the command history on the receiving machine for trdcv.
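
Something along these lines should cover both checks (assuming root still uses the default csh, so its history is in /root/.history; adjust if your setup differs):

    zfs list -t snapshot -o name | grep -i trdcv    # any snapshots mentioning the mystery datasets, on either machine
    grep -i trdcv /root/.history                    # root's command history on the receiving machine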
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
I've never had anything of any sort named resx or trdcv. I'll grep for both of them in the shell and maybe mount them to see what's inside. If anyone else has any theories, let me know.
 