Why would prefetch efficiency be so low during an SMB large file transfer?

scurrier
Hello,

I have the system shown in my signature. It has a pool of three mirror vdevs of 4 TB disks (six disks total), hosting a 20 GB file shared via SMB. When transferring this file, performance was not what I would expect from a six-disk pool: low 200 MB/s range over 10GbE to a PCIe SSD on the receiving machine. I checked netdata and saw that prefetch efficiency was very low. Any idea why that would be? I thought sequential transfers were the ideal case for prefetch, so I was expecting much higher efficiency.

In the screenshot below, the transfer started around 23:36:00 and ended around 23:37:30. Note that prefetch efficiency is the second chart. I included the other charts for some more context.

[attached screenshot: netdata charts covering the transfer window]
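If it helps narrow things down, my understanding is that the netdata chart is built from the ZFS ARC prefetch counters. Here is roughly how I plan to sample them directly around a transfer; the sysctl names are what I believe the FreeBSD-based builds expose under kstat.zfs.misc.arcstats, so treat them as an assumption rather than gospel:

before_hits=$(sysctl -n kstat.zfs.misc.arcstats.prefetch_data_hits)
before_miss=$(sysctl -n kstat.zfs.misc.arcstats.prefetch_data_misses)
# ... run the SMB copy (or a local dd read) here ...
after_hits=$(sysctl -n kstat.zfs.misc.arcstats.prefetch_data_hits)
after_miss=$(sysctl -n kstat.zfs.misc.arcstats.prefetch_data_misses)
hits=$((after_hits - before_hits))
miss=$((after_miss - before_miss))
# hit ratio for just this window (guard against a zero denominator)
[ $((hits + miss)) -gt 0 ] && echo "prefetch efficiency: $((100 * hits / (hits + miss)))%"

Sampling before and after the transfer gives hit/miss deltas for just that window, which should line up with what netdata plots.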
 

scurrier
By the way, a similar thing happens without SMB when reading the file locally with dd; throughput is similarly poor.
dd bs=20M if="/mnt/mypool/largefile" of=/dev/null

[attached screenshot: netdata charts covering the local dd read]
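One comparison I still want to try, to see whether prefetch itself is what limits the cold read: repeat the dd with prefetch turned off and compare throughput. The tunable below (vfs.zfs.prefetch_disable) is what I understand the FreeBSD-based builds use, so double-check it on your version before relying on it:

sysctl vfs.zfs.prefetch_disable            # 0 = prefetch enabled
dd bs=20M if="/mnt/mypool/largefile" of=/dev/null
sysctl vfs.zfs.prefetch_disable=1          # temporarily disable prefetch
# the second read may be served from ARC; use a different large file or
# otherwise force a cold read to keep the comparison fair
dd bs=20M if="/mnt/mypool/largefile" of=/dev/null
sysctl vfs.zfs.prefetch_disable=0          # re-enable prefetch

If throughput comes out about the same either way, the low prefetch efficiency is probably a symptom of the slow read rather than its cause.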
 

scurrier
I wasn't able to determine the cause, and as far as I know it is still occurring. It doesn't bother me on a daily basis; it just annoys me that I don't understand it and that the performance isn't what I want it to be.
 

MikeyG
I don't necessarily have the same speed bottleneck, since I'm able to hit 1 GB/s over my 10GbE network, but I have noticed that the prefetch chart for large file transfers looks identical to yours, with misses at 100%. However, if I re-transfer the same file again right afterward, the percentages reverse and it goes to 100% hits. On the surface it looks like this isn't measuring the ability to read ahead sequential data from disk (which is what I thought prefetch was), but only the ability to do so from ARC.
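A rough way to check that idea (with the same caveat that the kstat.zfs.misc.arcstats sysctl names are assumed from the FreeBSD-based builds): read the same file twice in a row and compare the per-run prefetch hit/miss deltas. If the chart only credits prefetches that could be satisfied from ARC, the cold pass should be nearly all misses and the warm pass nearly all hits. Using your path from earlier as the example:

for run in 1 2; do
  h0=$(sysctl -n kstat.zfs.misc.arcstats.prefetch_data_hits)
  m0=$(sysctl -n kstat.zfs.misc.arcstats.prefetch_data_misses)
  dd bs=20M if="/mnt/mypool/largefile" of=/dev/null 2>/dev/null
  h1=$(sysctl -n kstat.zfs.misc.arcstats.prefetch_data_hits)
  m1=$(sysctl -n kstat.zfs.misc.arcstats.prefetch_data_misses)
  echo "run $run: prefetch hits $((h1 - h0)), misses $((m1 - m0))"
done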
 