Migration of datasets with rsync not working as expected


Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
Hey all,
I recently migrated from an old volume of USB drives to a new volume built from internal storage drives. My goal is to keep my system exactly the same, just moved to the new volume. I performed the following steps.

  1. Set up the new drives as a new volume with the "internal" mount point
  2. Stop all sharing services and jails
  3. Rsync with this command
    Code:
    /usr/local/bin/rsync -avzh --delete /mnt/origdisk/ /mnt/internal/
  4. Go to System -> System Dataset and set the system dataset pool to the new "internal" volume instead of "origdisk"
  5. Go to sharing, and change all paths from /mnt/origdisk to /mnt/internal
  6. Go to jails, and change all mount points from /mnt/origdisk to /mnt/internal
  7. Detach origdisk volume.
  8. Reboot to verify the system functions as expected
So far, all of my data is present and working properly. However, the datasets I created in /mnt/origdisk didn't migrate to /mnt/internal, even though the files are exactly the same. I assumed that since I rsynced everything from /mnt/origdisk to /mnt/internal, those settings would be copied as well, but they were not. What files do I need to copy to get the datasets? See the attached screenshot.
 

Attachments

  • Screen Shot 2015-01-14 at 11.21.14 AM.png

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
rsync will not move datasets. If you want an exact duplicate of the old pool's contents, you need to use zfs replication instead.
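A rough sketch of what that can look like from the command line, assuming the pools are named origdisk and internal as above; the recursive snapshot name @migrate is only a placeholder:

Code:
# Take a recursive snapshot of the old pool (the name "migrate" is just an example)
zfs snapshot -r origdisk@migrate
# Send the old pool and all of its child datasets into the new pool.
# -R preserves child datasets, properties, and snapshots; -F rolls back the
# destination to match the stream, so it will overwrite anything already on "internal".
zfs send -R origdisk@migrate | zfs recv -F internal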
 

ian351c

Patron
Joined
Oct 20, 2011
Messages
219
Or create your datasets, then rsync.

Either way, rsync works at a file level, not a dataset or pool level.
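For example, something along these lines; the "jails" dataset name is just a placeholder for whatever datasets you had on the old pool:

Code:
# Recreate the dataset on the new pool first...
zfs create internal/jails
# ...then copy the files into it at the file level
rsync -avh --delete /mnt/origdisk/jails/ /mnt/internal/jails/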
 

Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
Thanks for the replies, guys. I've done the following.

  1. In my /mnt/internal volume, create a "jails" dataset
  2. Set up 10-minute snapshots and replication. I use localhost SSH, and point to internal/jails in my replication
  3. Let the replication run
However, at the end, I'm left with /mnt/internal/jails/jails, not /mnt/internal/jails as expected. See the attached screenshots. What am I setting incorrectly? The documentation in the help icon for the destination field says it should be the destination dataset I want the data copied into. Should the destination just be "internal"?
 

Attachments

  • Screen Shot 2015-01-15 at 9.23.48 AM.png
  • Screen Shot 2015-01-15 at 9.23.26 AM.png

ian351c

Patron
Joined
Oct 20, 2011
Messages
219
Just make your remote destination /internal. That should work. I'm more familiar with doing this via the CLI than the GUI.
 

Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
Hey Ian,
Do you have any pointers to the documentation for the CLI? I'm a software engineer and I write a distributed database, so I'm very comfortable with the terminal. I'd prefer to use it over the GUI :)
 

Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
Sorry, to be clear, I was talking about ZFS replication, so that I can retain my snapshots. I currently have 4-minute snapshots on all my datasets, and replication via localhost SSH. I couldn't find a way to do local ZFS replication without going through an SSH loopback. I googled for some documentation, but it doesn't seem to be as straightforward as running rsync between the two mount points.
 

Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
Good news: I've managed to test replication successfully using this command:

Code:
zfs send -R origdisk/jails@manual-2015-01-15 | zfs recv internal/jails


Bad news: it copies every snapshot that was taken on the 5-minute interval when I was trying to do near-realtime replication. I have over 1,700 of these snapshots across the pool. I attempted to delete all of these automatic snapshots with this command:

Code:
zfs list -t snapshot -o name | grep \\-2w | xargs -n 1 zfs destroy -d


That command does the following:

  1. List all snapshots by name
  2. Grep for snapshots with "-2w" in the name, since those are the automatic snapshots with 2-week expiration
  3. Delete them

This worked for around 1,000 of the snapshots. Now when I run it, I'm stuck at 641 remaining snapshots. I can't remove them, and there isn't an error: the command just appears to succeed, but the snapshots never disappear from the list. Any ideas? I want to remove all the extra snapshots before I do the manual sync, so that I'm not retaining them.

Thanks for all the help!
Todd
 

Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
Found the issue: some of the snapshots were being held. I ran the following command to release the holds:

Code:
 nohup zfs list -t snapshot -o name | grep \\-2w | sort -nr | xargs -n 1 zfs holds -r | grep \\-2w | awk '{print $2 " " $1}' | xargs -n 2 zfs release -r


That command does the following:

  1. List all snapshots
  2. Grep for those with "-2w" in the name, again from the 2-week expiration
  3. Reverse sort them so the newest come first
  4. Invoke zfs holds -r to find all holds on each snapshot
  5. Grep for only the lines containing held "-2w" snapshot names
  6. Print the tag first, then a space, then the snapshot name
  7. Pass those two arguments to zfs release to release the holds
After that, I ran this command again to remove the snapshots:

Code:
zfs list -t snapshot -o name | grep \\-2w | xargs -n 1 zfs destroy -d


This cleaned up all my snapshots.
 