Notes on zfs prefetch

I've seen a lot of people disable ZFS prefetch (zfs_prefetch_disable=1) citing improved performance, but all of the testing I've seen so far uses workloads that wouldn't benefit from prefetch (zfs_prefetch_disable=0) in the first place. Prefetch is designed to read more than was requested and cache it, in the hope that it will be used soon, before it gets evicted from the cache by other data. That has some overhead, and it only helps when the read heads are bouncing around the disks servicing multiple requests. I decided to do some benchmarking that is more real-world than a single linear read.
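For reference, on FreeBSD/FreeNAS this knob is the loader tunable vfs.zfs.prefetch_disable (zfs_prefetch_disable is the same setting under its Solaris name); you set it in /boot/loader.conf and reboot, e.g.:

vfs.zfs.prefetch_disable="1" # disable prefetch
vfs.zfs.prefetch_disable="0" # leave prefetch enabled (the default when ZFS decides you have enough RAM)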

The test consists of 8 workers, each reading an 80 GB file in 4 KB chunks, sequentially 80% of the time and jumping to a new offset 20% of the time. My system has 8 GB of memory. Here is what I found.

In all cases, performance was better with prefetch enabled (zfs_prefetch_disable=0):

IOPS improved by 5.6%
Throughput improved by 4.7%
Latency decreased by 5.6%

Disabling ZFS prefetch (zfs_prefetch_disable=1) will improve performance for a single linear operation, i.e. reading one big file with dd (example below), but you'll take a performance hit the more things you do in parallel.
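For example, a plain sequential read like this (the file path is just a placeholder):

dd if=/mnt/tank/somebigfile of=/dev/null bs=1m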

All of my drives are 'green' drives, and the slower spin speed means longer latency. When I have more time (it's 3 AM) I'll try tweaking how much gets read ahead and how large the cache is; I'm hoping to get even more performance. When the head stays put my drives read fast, but they are slow to seek to a new spot, so cutting down on seeks should give me a boost.
 
So, I can't sleep and tried some more tweaking.

My first try:
vfs.zfs.vdev.cache.bshift = 20 # inflate device reads to 1 MB (2^20) instead of the default 64 KB (2^16)
vfs.zfs.vdev.cache.size = 67108864 # 64 MB vdev cache per drive instead of the default 10 MB
vfs.zfs.vdev.cache.max = 524288 # reads smaller than 512 KB get inflated to the 1 MB size above

About the same for everything except latency, which was basically shot to hell; I was kind of expecting that.

Next was:
vfs.zfs.vdev.cache.bshift: 18 # 256 KB device reads
vfs.zfs.vdev.cache.size: 33554432 # 32 MB vdev cache per drive
vfs.zfs.vdev.cache.max: 131072 # inflate reads smaller than 128 KB

This time I saw some real differences compared to the stock cache settings:

IOPS improved by 8.4%
Throughput improved by 9.1%
But latency increased by 645.6% (yes, six hundred)


My last test:
vfs.zfs.vdev.cache.bshift: 17 # 128 KB device reads
vfs.zfs.vdev.cache.size: 251658240 # 240 MB vdev cache per drive
vfs.zfs.vdev.cache.max: 65536 # inflate reads smaller than 64 KB

IOPS improved by 8.3%
Throughput improved by 9.1%
But latency increased by 646.5% (yes, six hundred again)

So, for modest gains in throughput and IOPS you can take a massive hit in latency. I'm reverting back to stock. I may toy around later with much smaller tweaks; I was using a heavy hand here.
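Reverting is just a matter of removing those entries from /boot/loader.conf and rebooting. If you want to see what your box is actually running before and after, sysctl will read all three at once:

sysctl vfs.zfs.vdev.cache.bshift vfs.zfs.vdev.cache.size vfs.zfs.vdev.cache.max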
 

joeschmuck

As I understand it, prefetch is automatically enabled if you have more than 4 GB of RAM. I know folks are enabling it manually, even with 4 GB or less, to increase read speeds. Your testing provides great data for those who want to enable prefetch. I might even take a stab at it once I'm done fiddling with something else I'm working on.
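If anyone is unsure which way their box came up, the current state should be readable with sysctl (0 = prefetch enabled, 1 = disabled):

sysctl vfs.zfs.prefetch_disable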
 

globus999

Interesting data; however, I am curious how performance will react to a large number of small files being read or written.
Here is the weird thing.
I fiddled around a bit previously with a test box (I don't have it any more) and found that with prefetch enabled and a large number of small files, the system bogged down both while reading and while writing.
Now, I can understand why the system would do so on reads, but why on writes?
Anyhow, this is just a set of anomalies I saw while performing data migration in and out of an FN8 x32 box.
I don't know what would happen on x64.
 

pauldonovan

Small request: Can we please specify zfs_prefetch_disable=0 or zfs_prefetch_disable=1 when talking about this? The ZFS developer who introduced this double negative needs to be shot :p
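For the record:

zfs_prefetch_disable=0 # prefetch ENABLED
zfs_prefetch_disable=1 # prefetch DISABLED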
 