Network Throughput dips when transferring many files

Status
Not open for further replies.

IonutZ

Contributor
Joined
Aug 17, 2014
Messages
108
Is there any reason why my network throughput would dip when transferring many small files from a client to the NAS? Both the client and the NAS have 2x gigabit interfaces set up as a LAGG (LACP) going through an LACP-capable switch. Both reads and writes happen on very capable media. With one large file, throughput goes up to 950 Mbps, but with many small files it drops to as low as 10 Mbps... quite inconsistent behavior. I don't even know where to start looking to debug this issue; I'm hoping someone has had similar experiences and was able to fix it. Also, my CAT6 cables are fine.

[Attachment: Capture.PNG, network throughput graph]
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
You seem to be getting hit a little worse than I've experienced on similar hardware. However, part of this is normal. There is a ton of overhead in Samba and ZFS when writing a huge number of tiny files. The transfer is a single connection, single-threaded, etc.; even the tiniest latencies add up to poor overall throughput.

I almost puked the first time I saw it. Tested on all-SSD pools, added an SLOG, you name it, I tried it. I also tested it against other OSes, with and without ZFS. Didn't spot a magic bullet. You can improve it, but you can't eliminate it with CIFS. FTP can make multiple connections and mitigate it a little. I'm no NFS wiz, but I believe there may be more tuning available there?
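The multiple-connection idea can be approximated at the file level on any protocol: copy several files at once so the per-file round-trip latencies overlap instead of stacking up serially. A minimal Python sketch (the directory paths and worker count here are hypothetical, not anything from the thread):

```python
# Sketch: overlap per-file latency by copying files with a thread pool,
# similar in spirit to an FTP client opening multiple connections.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def parallel_copy(src_dir: Path, dst_dir: Path, workers: int = 8) -> int:
    """Copy every file under src_dir to dst_dir using `workers` threads.

    Each thread blocks on its own open/transfer/close cycle, so the
    fixed per-file costs happen concurrently rather than one by one.
    Returns the number of files copied.
    """
    dst_dir.mkdir(parents=True, exist_ok=True)
    files = [p for p in src_dir.rglob("*") if p.is_file()]

    def copy_one(src: Path) -> None:
        dst = dst_dir / src.relative_to(src_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copies data and metadata

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))  # drain to surface exceptions
    return len(files)
```

Against a mounted SMB share this only helps up to the point where the server serializes requests, but it is an easy experiment to run.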

I also got a small uptick from the improvements to Samba in 9.2.1.7. I've seen cyberjock mention 300 MB/s on his 10 Gb gear being slowed to 40 MB/s and living with it, depending on the workload (i.e., big files vs. thousands of tiny writes).

If you test moving those files on a local host, even with fast SSDs, you'll see a significant drop when you hit the masses of tiny files. The ratios are similar once you add some network overhead.

Maybe someone will chime in with some parameters to tweak. But some of this is the nature of the beast, imho.
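The local test described above is easy to reproduce. A sketch that writes the same number of bytes once as a single large file and once as many tiny files, then compares elapsed time (the sizes are arbitrary examples, not benchmarks from the thread):

```python
# Sketch: same total bytes, written as one large file vs. many small
# files, to show the fixed per-file cost dominating the small case.
import os
import time
from pathlib import Path


def write_large(path: Path, total: int, chunk: int = 1 << 20) -> float:
    """Write `total` bytes to one file in `chunk`-sized writes; return seconds."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        remaining = total
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(b"\0" * n)
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # force the data to stable storage
    return time.perf_counter() - start


def write_small(dir_: Path, total: int, size: int = 4096) -> float:
    """Write `total` bytes as many `size`-byte files; return seconds."""
    dir_.mkdir(parents=True, exist_ok=True)
    start = time.perf_counter()
    for i in range(total // size):
        with open(dir_ / f"file{i:06d}", "wb") as f:  # open/write/close per file
            f.write(b"\0" * size)
    return time.perf_counter() - start
```

Even on a local SSD the small-file path is typically several times slower for the same byte count; over CIFS the gap widens further.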
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
There is a lot of overhead when copying many small files.
For every single file, the file system has to open and close file handles, recalculate buffer sizes, etc., on both ends of the transfer (sender and receiver). This is an issue regardless of OS and protocol; it's just how file I/O works.
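That fixed per-file cost can be measured directly, independent of the bytes transferred. A small sketch (local filesystem only; on an SMB share each open/close additionally costs network round trips, so the number is far larger there):

```python
# Sketch: average cost of just opening and closing a file, with zero
# data bytes, to isolate the fixed per-file overhead described above.
import tempfile
import time
from pathlib import Path


def per_file_overhead_us(count: int = 1000) -> float:
    """Return the average microseconds spent creating (open + close)
    one empty file, over `count` files in a fresh temp directory."""
    d = Path(tempfile.mkdtemp())
    start = time.perf_counter()
    for i in range(count):
        (d / f"f{i}").touch()  # one open + close, no payload
    return (time.perf_counter() - start) / count * 1e6
```

Multiply that per-file figure by tens of thousands of files and the throughput collapse in the graph above stops looking mysterious.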
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
Mine also does this. My graph looks just like yours.
 