RSYNC over 10gb link

Pabs

Explorer
Joined
Jan 18, 2017
Messages
52
Hi,

I have a NAS-to-NAS 10Gb link, and I'm currently trying to back up VM replicas from one NAS to the other using rsync.

While iperf reports 7.77 Gbits/sec, rsync fluctuates at only around 47.42MB/s.

Is there a fix for this? I ask because I have seen several threads about rsync over 10Gb connections, none with a resolution.

Alternatively, what could I use to perform these backups as fast as possible?

Thanks!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
See if the suggestions in my post here are helpful:

https://www.ixsystems.com/community...rmissions-support-for-windows-datasets.43973/
 

dlavigne

Guest
What's the full output of ifconfig (within code tags please)?
 

Pabs

Explorer
Joined
Jan 18, 2017
Messages
52

Here you go:

Code:
cxl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=ec07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
    ether 00:07:43:51:6c:70
    hwaddr 00:07:43:51:6c:70
    inet 192.168.50.10 netmask 0xffffff00 broadcast 192.168.50.255
    nd6 options=9<PERFORMNUD,IFDISABLED>
    media: Ethernet 10Gbase-Twinax <full-duplex,rxpause,txpause>
    status: active
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hi, thanks for the link.

My issue is not so much with permissions as with performance; at the moment I get over 100MB/s on my 1Gbps connection.

I wanted to supercharge it to 10Gb, but that is where it gets nowhere near even 1Gbps :s
I understand. The title of that thread is misleading; the rsync script I posted there also contains code to improve its performance, which may help you get the most out of your 10G network; it certainly helps on mine. Caveat: you should see performance improvements, but nowhere near line rates.
 

Pabs

Explorer
Joined
Jan 18, 2017
Messages
52

I'll give it a shot. Another thing: there was a thread (I'll quote it here once I find it) about running rsync as a daemon process rather than over SSH, and that does make a difference. Mind you, it's not close to 5Gbps, but it peaks at 2Gbps, so at least I get to double it, I guess.
 

Pabs

Explorer
Joined
Jan 18, 2017
Messages
52

At the moment I am running rsync with 'times' and 'recursive' enabled via the FreeNAS UI (not over SSH, as a test), with the extra options below.

Code:
-W --password-file=/path/to/file/rsync.pwd --log-file=/var/log/rsync_qnapnas3_to_freenas_veeam.log --exclude=.streams/ --exclude=.DS_Store --exclude=.AppleDB/ --exclude=.AppleDesktop/ --exclude=.AppleDouble/ --exclude=.digest/ --exclude=.@__thumb/ --exclude=@Recycle/ --exclude=@Recently-Snapshot/ --exclude=.@__qini/ --exclude=.@upload_cache/


By doing this I hover at around 2.2Gbps at best, with dips down to 0.5Gbps; I assume that happens when it's not moving large files.
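For anyone trying to reproduce the daemon-mode setup implied by the `--password-file` option above, here is a minimal sketch. The module name `veeam`, the paths, the user `backup`, and the destination IP are all hypothetical; note the double colon in the client command, which uses the unencrypted rsync daemon protocol instead of SSH.

```shell
# On the destination NAS: a minimal rsync daemon config, then start the daemon.
cat > /etc/rsyncd.conf <<'EOF'
[veeam]
    path = /mnt/tank/veeam
    auth users = backup
    secrets file = /etc/rsyncd.secrets
    read only = no
EOF
rsync --daemon

# On the source NAS: push to the module over the daemon protocol.
# "::" (not ":") selects daemon mode, bypassing SSH and its cipher overhead.
rsync -rtW --password-file=/path/to/file/rsync.pwd \
    /share/veeam/ backup@192.168.50.20::veeam/
```

Skipping SSH this way removes encryption from the path entirely, which is only sensible on a trusted point-to-point link like the one described here.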
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
I know the thread is old (don't give me a Necromancer title, please), but for anyone searching for ways to speed up rsync, with
Code:
rsync -av --progress -e "ssh -T -c aes128-ctr -o Compression=no -x" /mnt/tank/data emk2203@sc743t.lan:/mnt/tank/data
, I am able to get a mean transfer rate of 3.37 Gbit/s, with a max of 4.74 Gbit/s. This rate has held constant for several hours now, with a system load of 1.77 on a Core i3 CPU.

Hope this helps. It's a pretty satisfying result, imho. 20 TB needs only 15 hrs to transfer.
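A quick way to sanity-check the cipher choice locally, before blaming the network, is `openssl speed` (assuming OpenSSL 1.1.0 or newer, since older builds lack the `chacha20` EVP name):

```shell
# Raw single-core cipher throughput on this box. On CPUs with AES-NI,
# aes-128-ctr should win by a wide margin, which is what makes
# "ssh -c aes128-ctr" faster than the chacha20-poly1305 default.
openssl speed -evp aes-128-ctr -seconds 1 2>/dev/null | tail -n 1
openssl speed -evp chacha20 -seconds 1 2>/dev/null | tail -n 1
```

Each line prints one throughput row (bytes per second at various block sizes) for the named cipher, so the two rows can be compared directly.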
 
Joined
Oct 22, 2019
Messages
3,579
Do you think it's disabling compression that yields the most notable benefit?
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
That contributes, but I think hardware-supported, really fast crypto (aes128-ctr) contributes more. When I have some time, I might test what speed you get with these components switched on and off; for now, I am happy to transfer my 20TB without much delay.
 
Joined
Oct 22, 2019
Messages
3,579
I guess it's implied that with modern CPUs, using any of the AES ciphers is faster than the default chacha20-poly1305 cipher, because they leverage AES-NI hardware acceleration; yet SSH to this day still defaults to chacha20-poly1305 (highest priority) because it's significantly faster than the other ciphers when no hardware acceleration is available?

"Least common denominator" is favored, even though it cripples the performance of most users?
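For reference, the exact cipher names a given OpenSSH client supports (and accepts via `-c`) can be listed directly:

```shell
# Every cipher this OpenSSH build accepts for "ssh -c <name>".
# Both chacha20-poly1305@openssh.com and aes128-ctr should appear;
# the order here is not the preference order, just the supported set.
ssh -Q cipher
```

The server's actual preference order is whatever the `Ciphers` directive in its sshd_config says, falling back to the compiled-in default quoted below.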
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
Yes, this design decision is questionable. Here is a comparison with AES256-CTR, where h/w-accelerated AES is around 50% faster than chacha20-poly1305. If you go down to aes128-ctr, it should be many times faster than chacha20-poly1305.

But aes128-ctr is weaker than chacha20-poly1305. That doesn't matter here; I don't transfer over the internet.
 
Joined
Oct 22, 2019
Messages
3,579
Yes, this design decision is questionable.
Have mercy on my soul.

I might just file a "feature request" for TrueNAS to change the default cipher over SSH to aes128-ctr. This small change can increase throughput over local network transfers. Besides, who is using TrueNAS (consumer or business) without hardware that supports AES acceleration these days? o_O

Surely this change is trivial to implement as the default for TrueNAS?

As it currently stands, from my TrueNAS Core 13.0-U3.1 server:
Code:
The default is:

                   chacha20-poly1305@openssh.com,
                   aes128-ctr,aes192-ctr,aes256-ctr,
                   aes128-gcm@openssh.com,aes256-gcm@openssh.com


And before someone says "You can add your own auxiliary parameters under Services -> SSH -> Auxiliary Parameters": understand that I'm speaking from the perspective of the "out of the box" user experience. (Besides, I believe that when done on that GUI page, it only affects sshd_config, not ssh_config.)

Good defaults are important.
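In the meantime, anyone who wants the faster cipher today can override it on the client side only, with no server change at all; a minimal sketch, where the host alias and IP are hypothetical:

```shell
# ~/.ssh/config on the sending machine. Being client-side config, this
# affects ssh and rsync-over-ssh to this host, which is exactly what the
# GUI Auxiliary Parameters (sshd_config) do not cover.
Host nas2
    HostName 192.168.50.20
    Ciphers aes128-ctr,aes128-gcm@openssh.com,aes256-ctr
```

With this in place, a plain `rsync -e ssh ... nas2:/path` picks up the AES-NI-friendly cipher automatically, without the long `-e "ssh -c ..."` string in every command.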
 