SMB transfer to NAS slows to a crawl on folders with small files

itsjustfrank

Dabbler
Joined
Mar 25, 2023
Messages
10
Hello everyone,

First off, this is my first post, so apologies in advance if anything here isn't up to spec!

I've been trying to research this issue for a few days now and despite my best efforts, I'm at a loss as to a solution or possible cause.

Specs:
TrueNAS-12.0-U8.1 Core
Motherboard make and model: Acer Aspire M3910
Intel H57 Chipset
CPU make and model: Intel i7-870 2.93GHz
RAM quantity: 12GB @ 1333MT/s (2x4GB + 2x2GB)
Hard drives, quantity, model numbers, and RAID configuration, including boot drives:
6 drives total
3x 6TB WD Red Plus 5640RPM 6Gb/s CMR 128MB Cache WD60EFZX
2x 6TB Toshiba N300 7200RPM 6Gb/s 256MB Cache HDWG160XZSTA
1x 128GB Silicon Power SSD Boot drive SU128GBSS3A58A25CA
RAIDZ-1 config
Network controller: Realtek RTL8111E

The Issue:
Over the past few days, I've been trying to offload files from an old iMac to my NAS via SMB share. Whenever I get to a folder with many small files, transfer speeds can drop to sub-1MB/s. Conversely, when I transfer folders with large files I get a consistent throughput of around 110MB/s. I also did an iperf3 test; the results are attached below.

I understand that with small files the transfer times will naturally get longer, and I don't expect full gigabit speeds, but the current speeds, which range from a few KB/s to maybe 8MB/s at best, seem unusually low. I'm not sure if this is normal behavior or if there is something I can do to fix or improve it. For example, the same folder sent to another external drive transfers at around 20-50MB/s.

If anyone has any tips on what might be causing this, please let me know. I recognize my hardware is less than ideal, especially the Realtek network controller. I am willing to invest a small amount into hardware upgrades such as a decent NIC, or maybe bumping the RAM to the max 16GB, but if I do so I'd like some confidence that it would actually improve the situation. I don't want to spend much on this machine, as I plan to build a proper 10-gigabit system later this year; I essentially just want these speeds to improve in the interim to make writing data less painful. Thank you all for your time and assistance!

Iperf3 results:
Screen Shot 2023-03-25 at 4.29.56 PM.png
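For reference, a typical way to run that kind of test is sketched below (the NAS address 192.168.1.50 is made up; substitute your own). A parallel run and a reverse run help rule out a one-direction NIC problem:

```shell
# On the NAS, start an iperf3 server:
iperf3 -s

# On the client, test for 30 seconds with 4 parallel streams:
iperf3 -c 192.168.1.50 -t 30 -P 4

# Then repeat in reverse (NAS -> client) to check the other direction:
iperf3 -c 192.168.1.50 -t 30 -P 4 -R
```

If both directions sit near line rate, the network itself is unlikely to be the bottleneck for the small-file case.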
 
Joined
Jun 15, 2022
Messages
674
In simple terms, small files carry a lot of overhead, which you already know. TrueNAS only buffers writes in RAM for about 5 seconds before forcing a 'commit' (a transaction group) to disk, so the RAM cache can have ample free space for more data and yet still stop accepting write requests while it clears out the write backlog. Copy-on-write file systems aren't great performers when a bunch of tiny files gets dumped on them, though the upside is that TrueNAS is designed to keep your data quite safe.
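For the curious, on TrueNAS CORE (FreeBSD-based) that commit interval is a tunable you can inspect from a shell, and you can watch the bursty commit pattern while a small-file copy runs. These are diagnostic commands only; the pool name "tank" below is a placeholder for your own pool:

```shell
# Transaction-group commit interval; 5 seconds is the usual default
sysctl vfs.zfs.txg.timeout

# Watch per-device I/O once per second during a copy; writes arrive in
# bursts as each transaction group flushes to disk
zpool iostat -v tank 1
```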

I'm guessing your somewhat mismatched hardware config is adding to the situation, but I wouldn't worry much about it since you're building a new system anyway and have probably squeezed as much out of this one as is practical. On your next build you may want to consider RAID-Z2, not for speed but for data resiliency.
 

somethingweird

Contributor
Joined
Jan 27, 2022
Messages
183
Crazy Idea.

tar/zip up the small files into one big file and send it over; at least it gets stored. Then untar/unzip the file on the TN side and fix permissions & ownership.
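The workflow above can be sketched as follows. This demo uses throwaway directories under /tmp to stand in for the real source folder and mounted share (all paths here are invented), so it can be run safely anywhere:

```shell
set -e
src=$(mktemp -d)           # stands in for the folder on the Mac
dest=$(mktemp -d)          # stands in for the mounted SMB share

# Fabricate "many small files"
mkdir -p "$src/smallfiles"
for i in $(seq 1 100); do echo "file $i" > "$src/smallfiles/$i.txt"; done

# One big archive crosses the wire as a single sequential stream,
# avoiding per-file SMB round trips...
tar -C "$src" -cf "$dest/smallfiles.tar" smallfiles

# ...then unpack on the TrueNAS side and fix ownership there.
tar -C "$dest" -xf "$dest/smallfiles.tar"
# chown -R youruser:yourgroup "$dest/smallfiles"   # adjust to your dataset

ls "$dest/smallfiles" | wc -l    # lists 100 files
```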
 
Joined
Jun 15, 2022
Messages
674
Crazy Idea.

tar/zip up the small files into one big file and send it over; at least it gets stored. Then untar/unzip the file on the TN side and fix permissions & ownership.
Normally that's a good idea, as bandwidth is usually the limiting factor, though in this case it's most likely the overhead of copy-on-write with checksums (disk-bound due to IOPS). Assuming total disk performance is the bottleneck, the tar-zip-unzip-write round trip would take maybe 3x as long as a direct transfer. But again, that's normally a highly effective solution.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Normally that's a good idea, as bandwidth is usually the limiting factor, though in this case it's most likely the overhead of copy-on-write with checksums (disk-bound due to IOPS). Assuming total disk performance is the bottleneck, the tar-zip-unzip-write round trip would take maybe 3x as long as a direct transfer. But again, that's normally a highly effective solution.
It's not a bad idea, but I want to be clear that the problem likely resides on the client Mac more so than on the NAS.
@itsjustfrank Your Mac has a single disk, likely formatted HFS+ if it's old, versus your NAS, which has many disks plus ZFS caching. The old 5400 RPM spinner in your Mac likely just can't keep up.

If the problem were the other way around, where accessing lots of small files stored on your NAS was slow, the solution would be different: there I'd recommend a special vdev instead.
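For the record, adding a special vdev looks roughly like the below; the pool name "tank", the dataset, and the device names are all invented for the example. A special vdev should always be mirrored, because losing it loses the entire pool:

```shell
# Add a mirrored special allocation vdev (two SSDs) to hold metadata,
# which speeds up operations on pools full of small files
zpool add tank special mirror /dev/ada6 /dev/ada7

# Optionally route small data blocks (not just metadata) to the SSDs
zfs set special_small_blocks=32K tank/dataset
```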
 

itsjustfrank

Dabbler
Joined
Mar 25, 2023
Messages
10
Hi @NickF, thank you very much for your reply. The Mac in question is regrettably not the culprit: not only did I replace its HDD with an SSD formatted APFS, but more notably, upon more rigorous testing I've seen this same level of performance from all my other machines, both Macs and PCs. That said, if you could speak more about your potential special vdev solution, I'd really appreciate it!
Thank you!
 