Best way to migrate to new server


monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Nothing inside of /mnt/tank1/deathstar/resx
resx is drwxr-xr-x 3 root wheel 3B Oct 13 00:00
resx/trdcv is drwxr-xr-x 2 root wheel 2B Oct 13 00:00 and empty

I grepped through .bash_history and found nothing. Is there anywhere else I should look? I wonder if resx/trdcv is related to the tree issue or totally separate.
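I guess the other obvious check is whether any snapshots still reference it; something like this (same dataset path as above) should show them if they exist:

zfs list -t snapshot -r tank1/deathstar/resx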
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I migrated my dataset in a similar fashion through replication, as you did, and I am experiencing the same issues. I thought it was caused by the limit the FreeNAS GUI has when it comes to displaying tree structures.
I have also found that I cannot list one of the folders under SMB, and I can't even list its contents entirely via SSH. However, when I run find it seems to list the contents correctly, if I recall correctly. As long as your snapshots are present, everything is fine.
I have also noticed this may be related to the "cannot mount ... File name too long" issue I posted about a few days ago, which hasn't received much attention.
If you do "zpool export tank1" and then "zpool import tank1", do you get the above error message?
My gut feeling is that this is your problem.
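Something along these lines from the shell, with nothing using the pool; the -R is only needed if your pool normally mounts under /mnt (an assumption on my part), and the thing to watch for is the "cannot mount ... File name too long" message during the import:

zpool export tank1
zpool import -R /mnt tank1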
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
monovitae said:
Nothing inside of /mnt/tank1/deathstar/resx
resx is drwxr-xr-x 3 root wheel 3B Oct 13 00:00
resx/trdcv is drwxr-xr-x 2 root wheel 2B Oct 13 00:00 and empty

I grepped through .bash_history and found nothing. Is there anywhere else I should look? I wonder if resx/trdcv is related to the tree issue or totally separate.
I'd look through .history as well, as we seem to end up in csh more than bash. If there's no trace of snapshots I'd suggest doing a zfs destroy on tank1/deathstar/resx and just hope it never turns up again!
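Roughly this, assuming you're working as root so the csh history file is /root/.history, and using the dataset path from your post (the -r takes the empty trdcv child and any snapshots under it along as well):

grep resx /root/.history
zfs destroy -r tank1/deathstar/resx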
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Update: So things are both OK and not OK. The new server is now connected to the production systems via iSCSI, NFS, SMB, etc. and is working fine as the primary server. Performance is good, maxing out the gigabit link. The only nagging thing I still have with this migration is that even after an update and reboot, the tree structure in the Storage view won't collapse or show the little arrows.

Secondly, and more importantly, I'm now trying to reverse the flow of replication, as discussed with cyberjock on page one. When setting up the replication task on the new server (push) with dataset tank1/slow/test and the remote dataset as slow/test, it still has problems. I get the error: Failed: cannot open 'slow/test/slow/test': dataset does not exist.

Which of course is true, but it seems that no matter what I put in the box for the remote volume/dataset, it prepends that value in front of the local (push) volume/dataset. Given that, I tried entering just a "/" as the remote volume/dataset, and even then it gives the error that '//slow/test' does not exist. It seems that leaving it blank would make it match the remote side, but you can't leave it blank in the interface.

I've gone through the testing steps in the manual, SSH connects fine, and I've got 3 days left of overlapping snapshots from when I was replicating in the other direction.

Any insight would be greatly appreciated. I really don't want to do a from-scratch replication, since it took about 5 days last time. If there's any other information I can provide to be clearer, or any tests I can run, please let me know.
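For what it's worth, the manual equivalent of what I expect the task to do would be something like the following; the snapshot names are placeholders for two of the overlapping automatic snapshots, and I haven't actually run this against production:

zfs list -t snapshot -r tank1/slow/test
zfs send -i tank1/slow/test@auto-OLD tank1/slow/test@auto-NEW | ssh fileserver2 zfs receive -F slow/test

(The -F on the receive rolls the pull side back to the common snapshot, so it's not something to fire off casually.)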
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
I also noticed that there is now no 'initialize' option available, not that I would want to do that anyway.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Replication was updated 4 weeks ago. The initialize option was removed. And the "dataset doesn't exist" error is a known bug.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Do you have the bug number? Is there a workaround or a patch I can apply? I'm comfortable going into autorepl.py or whatever.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
The exact error message is "Failed: cannot open 'deathstar/xen/deathstar/xen': dataset does not exist". The data is on both push and pull; I was able to replicate from fileserver2 to fileserver1 originally. The error I'm getting now is when I'm trying to replicate back in the reverse direction (fileserver1 (push) to fileserver2 (pull)).

I believe there are screenshots of both my original and new server datasets in post 18 on page 1. I'm currently trying to replicate fileserver1 (new) tank1/deathstar/xen to fileserver2 (old) deathstar/xen.
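When I say the data is on both sides, I mean nothing fancier than checking each box with zfs list:

zfs list -r tank1/deathstar/xen    (on fileserver1, the push side)
zfs list -r deathstar/xen    (on fileserver2, the pull side)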
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
I read through your links. I'm not sure this is really the same issue. If I run mount on either system, all datasets show as mounted. It almost seems like some sort of concatenation error in the script. No matter what I set in the GUI for the source volume/dataset, say "tank1/test", and no matter what I put for the remote volume/dataset, say "tank2/remote", it concatenates remote + source. So given the examples above, it would complain that something like tank2/remote/tank1/test does not exist. Which it doesn't.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Yeah, I'm no expert in either the FreeNAS codebase or Python, but at first glance this seems like it might be problematic. Line 274 in autorepl.py:

remotefs_final = "%s%s%s" % (remotefs, localfs.partition('/')[1],localfs.partition('/')[2])
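# Assuming localfs/remotefs hold the GUI's local and remote dataset fields, then with my values
# localfs = "tank1/deathstar/xen" and remotefs = "deathstar/xen", partition('/') splits off the pool
# name, so this builds "deathstar/xen" + "/" + "deathstar/xen" = "deathstar/xen/deathstar/xen",
# which is exactly the path in the "dataset does not exist" error above.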

I'll do some more digging to see if it was changed recently.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Confirmed. I made the following change, which is totally a hack, and a real developer will have to fix it the right way, but with the change below, if I just set the remote dataset to what it really should be, it works flawlessly.
Line 275 in autorepl.py:

# remotefs_final = "%s%s%s" % (remotefs, localfs.partition('/')[1],localfs.partition('/')[2])
# real men test in production.... lets do this
remotefs_final = "%s" % (remotefs)
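# i.e. use the remote volume/dataset exactly as entered in the GUI, instead of appending the local path to it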
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Needless to say, to anyone coming by and reading this: you shouldn't be manually modifying FreeNAS code on a production system unless you've tested it in a lab and are sure of what you're doing. The zfs send command can totally F up the receiving side if you have it set wrong.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
A quick question! Do you have any jails running on your old server, and if so, how did you migrate these to the new one?

My production box is over 70% full and I was planning on adding an 8th drive (it's already in there, but as a cold spare). I use ZFS replication to copy all datasets (including my /mnt/pool/jails dataset) to a backup box every night, so I was hoping I could just create the new zpool on production and replicate back as you'd described.

Not sure what happens to the jails though, as I'd like to avoid building them all again.
 

monovitae

Explorer
Joined
Jan 7, 2015
Messages
55
Couldn't really say; I don't use jails. Generally speaking, after having had trouble with replication both from my old system and then in reverse from the new system back to the old one, I would say that if you have something that's currently working, don't mess with it until FreeNAS 10. Replication as it is is broken in lots of nuanced and strange ways, and I think I read somewhere that they were overhauling it for 10. As I mentioned in my post above, it's probably not worth messing with unless you have to, and even then I would make sure to test it in a virtual environment or lab. When I was trying to track down the error I describe above, I somehow managed to destroy a pool on the receiving side in the lab.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
Ok, thanks for the warning!

I'm still on 9.3 (I haven't updated to 9.3.1 due to the replication issues and the LSI firmware) and the replication to my backup box works fine. Good idea about testing in a virtual environment, though. I'll have a go at creating a 9.3 VM and replicating the jails dataset to see how it behaves.
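Something like this one-off manual send is what I have in mind for the test (the VM hostname and target pool name are just placeholders):

zfs snapshot -r pool/jails@migrate-test
zfs send -R pool/jails@migrate-test | ssh root@testvm zfs receive -F testpool/jails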
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
Thanks! I had seen your thread and added a watched tag as it looked like something I could learn from!!

When you'd replicated the jails dataset back, was it simply a case of selecting the new jail root in the configuration tab and they all appeared in the jails list, and more importantly started up OK?

I could recreate them all again, but it would take some time and even though I've done most of them a few times now I'd be surprised if everything worked first time ;)

Would be great if I could just replicate the backup pool to the new production pool and point it to the jails dataset.
 