RSYNC Speed between local pools

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
I have a 2-vdev Z2 production pool that I back up to a 2-vdev Z2 backup pool. Speed is never a big deal when doing differentials each night. Recently, I destroyed my old 12-drive, single-vdev Z2 backup pool in order to re-arrange it into a 2x9 pool, and the initial RSYNC is going to take a loooong time.

Typical RSYNC speeds seem to be on the order of 50MBps according to both RSYNC and zpool iostat. I use RSYNC because I back up some items selectively and skip others. Given that I now have more backup space than production space, I guess I should have considered replication, which I still may in the future.
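
For reference, I am watching the per-disk write rates with something along these lines (the pool name here is just an example):
zpool iostat -v backup 5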

Anyway, I am just wondering why RSYNC seems so slow. The HDDs tested at an average of ~150MBps each, yet right now write rates are only about 3MBps on each of the backup pool HDDs. I am using the --inplace modifier on RSYNC; it did not seem to make much of a difference to speed.
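
The command is basically this shape; the paths and any flags other than --inplace are just illustrative:
rsync -avh --inplace /mnt/tank/data/ /mnt/backup/data/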

This is not a big deal, just wondering if this is normal when using RSYNC.

UPDATE: So, I now have my system doing parallel RSYNC tasks (tmux is the bombdigity). Transferring at around 180MBps now. What is the limiting factor on RSYNC, I wonder...
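
This is roughly the pattern, with placeholder paths and session names:
# example only: one rsync per top-level directory, each in its own detached tmux session
tmux new-session -d -s media 'rsync -a --inplace /mnt/tank/media/ /mnt/backup/media/'
tmux new-session -d -s docs 'rsync -a --inplace /mnt/tank/docs/ /mnt/backup/docs/'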

Thanks,
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
UPDATE: So, I now have my system doing parallel RSYNC tasks (tmux is the bombdigity). Transferring at around 180MBps now. What is the limiting factor on RSYNC, I wonder...
rsync is designed to leave system resources available so the server can still do normal work while the rsync is running. In other words, it is supposed to be slow. At least that is what I was told years ago when I asked the same question. You should consider using ZFS snapshots and ZFS send and receive. It is much faster. I recently used it for the same thing you are doing and copied my entire pool in just a few hours.

Once you have a recursive snapshot on the pool you want to copy, made with something like:
zfs snapshot -r Emily@manual-19Aug2018
and an empty target dataset on the destination pool, the command is super simple. This is the command I used; just change the names:
zfs send -R Emily@manual-19Aug2018 | zfs receive -F Irene
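
To keep the copy current later, an incremental send from the previous snapshot should also work. Something like this, where the second snapshot name is hypothetical:
zfs send -R -i Emily@manual-19Aug2018 Emily@manual-26Aug2018 | zfs receive -F Irene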
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
P.S. I used that for the initial data load, but I still use rsync for updates to keep the backup current.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
rsync is designed to leave system resources available so the server can still do normal work while the rsync is running. In other words, it is supposed to be slow. At least that is what I was told years ago when I asked the same question. You should consider using ZFS snapshots and ZFS send and receive. It is much faster. I recently used it for the same thing you are doing and copied my entire pool in just a few hours.

Once you have a recursive snapshot on the pool you want to copy, made with something like:
zfs snapshot -r Emily@manual-19Aug2018
and an empty target dataset on the destination pool, the command is super simple. This is the command I used; just change the names:
zfs send -R Emily@manual-19Aug2018 | zfs receive -F Irene
In the future I will use ZFS replication. Thanks for the answer and it does make sense.

[Attached screenshot: Screenshot 2018-12-02 19.14.13.png]


It certainly does take some time... But look at all that space!! :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
In the future I will use ZFS replication. Thanks for the answer and it does make sense.
When I did a zfs send | zfs receive, it completely maxed out my CPU and disks for the duration of the copy. Thankfully, it only took about 4 hours to complete because it ran so quickly.

I have a good bit less data on my home system than you have, but it is always nice to be able to go as fast as the hardware will let you.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
I feel like I should almost delete the backups and try it with replication :) I have enough space, so perhaps I will try replication on a large dataset just to see the rate.

Thanks again.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
Tried send and receive. Wow. Much faster. Close to 400MBps on a 1TB send. In the future, I will use send/recv to do the first copy for sure. I still really like the granularity of RSYNC for daily backups though.

Cheers,
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Tried send and receive. Wow. Much faster. Close to 400MBps on a 1TB send.
That is why I mentioned it. It scales with the number of vdevs. I saw it fluctuate between 800 and 1000MBps when I was copying between my two big pools. When I send to my backup pool (which has only one vdev), it is about half as fast.
I still really like the granularity of RSYNC for daily backups though.
Me too.
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
the granularity of RSYNC for daily backup
Do you mean rsync's ability to work on a per-directory / per-folder basis, as opposed to zfs send / recv working with whole datasets?

Edit:
BTW
the command is super simple. This is the command I used, just change the names.
zfs send -R Emily@manual-19Aug2018 | zfs receive -F Irene
I thought it could be done using the GUI anyway. Even though the command is simple, can doing it without the GUI have some drawbacks, e.g. hiding the sync tasks from the GUI or whatever?

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Do you mean rsync's ability to work on a per-directory / per-folder basis, as opposed to zfs send / recv working with whole datasets?
That is what I mean. I don't break my storage into very many datasets, so if I just want to copy a folder, it is easier to do it with rsync.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I thought it could be done using the GUI anyway. Even though the command is simple, can doing it without the GUI have some drawbacks, e.g. hiding the sync tasks from the GUI or whatever?
The GUI is good for scheduling snapshots, and making a manual snapshot there is not difficult, if I recall correctly, but making a manual snapshot from the command line is super simple as well. I have never tried to do a zfs send | zfs receive from inside the GUI. I honestly don't know if it is possible, but I think I recall there being a feature for sending a snapshot to another system as a backup. If you are doing it internally, within the same system, from one pool to another, I think you need to use the command line.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
The GUI is good for scheduling snapshots, and making a manual snapshot there is not difficult, if I recall correctly, but making a manual snapshot from the command line is super simple as well. I have never tried to do a zfs send | zfs receive from inside the GUI. I honestly don't know if it is possible, but I think I recall there being a feature for sending a snapshot to another system as a backup. If you are doing it internally, within the same system, from one pool to another, I think you need to use the command line.
You can run replication tasks locally from the GUI, but it seems WAY slower than a CLI send/recv command. I used replication for a couple of specific datasets only, and it works great. Speed is similar to RSYNC on the initial replication, and after that the incremental snapshots are so small that speed is not an issue.

Cheers,
 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
Needed to do this today, as I have some sharing issues (another topic).
I am making a new dataset with a different case sensitivity setting, and I need to move data from the parent dataset into the new one.

So the datasets are as follows:

/mnt/VOLUME/Dataset/ - this is the directory that contains the files
to
/mnt/VOLUME/Dataset/NewDataset - this is where I am moving the files.

I did the following:

zfs snapshot -r Dataset@today

zfs send -v Dataset@today | zfs receive -F Dataset/NewDataset

I did not use -R as it is a subfolder? I got these errors:
cannot send Dataset@today recursively: snapshot Dataset/.system@today does not exist
cannot receive: failed to read from stream

---

I think it works? I see an increase in volume size in the FreeNAS GUI; however, when browsing the folders, I can't see the new files...

According to Oracle:
The file systems are inaccessible while they are being received.

So I guess I'll wait and see if it works :)
 