Help me Solve my ZFS Breathing!

Status
Not open for further replies.

mstang1988

Contributor
Joined
Aug 20, 2012
Messages
102
I've tuned as much as I can and I still can't get this damn thing to perform. On FreeNAS 8.2 I was fine, no breathing. Just an upgrade to 8.3 and it foobarred, so I started to retune, but I cannot get it to behave without a breath from time to time. I'm getting around 110-120MB/s read and 120MB/s write until it breathes, then it's screwed.

I'm getting around 150MB/s write to the disks with local testing (2 x 2TB WD drives, 2 x 1TB 7200 RPM Seagates) and 300MB/s read, so it's not the disks. iperf is around 950Mbps both ways.
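For the curious, the tests were roughly along these lines (the pool name and hostname are placeholders, and compression needs to be off or the dd numbers lie):

# Local write speed to the pool ("tank" is a placeholder):
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=10000
# Local read speed back (beware: a just-written file may be served
# from ARC, which inflates the number):
dd if=/mnt/tank/ddtest of=/dev/null bs=1m
# Network throughput: server side on the FreeNAS box...
iperf -s
# ...and client side from the workstation (hostname is illustrative):
iperf -c freenas.local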

This is using CIFS. I have 16GB RAM (12GB dedicated to the VM, which has native ownership of the storage hardware) and a 4-core 3.2GHz Intel CPU with 3 cores assigned to the VM, so I'm not hardware bound. Again, it worked on FreeNAS 8.2; 8.3 is giving me issues.

vmxnet3_load="YES"
vfs.zfs.arc_max=5978792355
vm.kmem_size_max=8303878272
vm.kmem_size=6643102617
vfs.zfs.prefetch_disable=1
vfs.zfs.txg.timeout=1
kern.ipc.somaxconn=65535
kern.ipc.nsfbufs=250000
net.inet.tcp.inflight.enable=1
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.delayed_ack=0
net.inet.tcp.sendbuf_auto=1
vfs.zfs.vdev.max_pending=5
vfs.zfs.vdev.min_pending=1
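For reference, the boot-time ones live in /boot/loader.conf (quoted values) and the runtime ones in /etc/sysctl.conf; the split below is my best guess at which is which for 8.3-era FreeBSD:

# /boot/loader.conf -- boot-time tunables:
vmxnet3_load="YES"
vfs.zfs.arc_max="5978792355"
vm.kmem_size_max="8303878272"
vm.kmem_size="6643102617"
vfs.zfs.prefetch_disable="1"
vfs.zfs.txg.timeout="1"
kern.ipc.nsfbufs="250000"

# /etc/sysctl.conf -- runtime sysctls:
kern.ipc.somaxconn=65535
net.inet.tcp.inflight.enable=1
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.delayed_ack=0
net.inet.tcp.sendbuf_auto=1
vfs.zfs.vdev.max_pending=5
vfs.zfs.vdev.min_pending=1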
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Did you do any tuning on 8.2? Can you post those values if you did?

You could try increasing the "max_pending" just a tiny bit at a time and see if that helps (maybe 8?). There are just too many possibilities, and that's just a guess to try. I think it's supposed to breathe some, but I've seen the same thing where it seems to "hold its breath". Do some reading about "gstat" or "zpool iostat" to see if you can find any other clues.
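To give you a head start, this is the sort of thing I mean (the pool name "tank" is a placeholder for yours):

# Per-vdev throughput once a second; a txg "breath" shows up as bursts
# of writes followed by near-idle gaps:
zpool iostat -v tank 1
# Per-disk busy% and latency, refreshed every second; one disk pegged
# near 100% busy while the others idle is a clue:
gstat -I 1s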
 

mstang1988

Contributor
Joined
Aug 20, 2012
Messages
102
ProtoSD said:
Did you do any tuning on 8.2? Can you post those values if you did?

You could try increasing the "max_pending" just a tiny bit at a time and see if that helps (maybe 8?). There are just too many possibilities, and that's just a guess to try. I think it's supposed to breathe some, but I've seen the same thing where it seems to "hold its breath". Do some reading about "gstat" or "zpool iostat" to see if you can find any other clues.

I'll check to see if I have my old values (it's been months since I upgraded; I just finally spent the time trying to debug).

Yes, I have updated max_pending with no positive impact. I will read about gstat and zpool iostat.

This only occurs during writes; reads are around 110MB/s consistently with no drop.
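One thing I plan to try, since it's write-only: flip the txg interval while a copy is running and see if the breathing period tracks it (assuming vfs.zfs.txg.timeout is settable at runtime here, and that 5 is the stock value for this era, if memory serves):

# Write-only stalls smell like txg syncs; toggle the interval mid-copy:
sysctl vfs.zfs.txg.timeout      # currently 1, per my tuning above
sysctl vfs.zfs.txg.timeout=5    # try the stock interval during a write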
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It may come as a shock to you, but ZFS carries a significant amount of overhead for all its wonderful features. ZFS write performance has a bunch of interplaying variables that make for "interesting" behavior. I've noticed that the attainable performance of a pool is rarely close to the potential performance of the underlying devices. I documented a pretty nasty case of this as part of bug #1531, and so far it seems that accepting some decreased performance nets you significantly improved responsiveness (if you resolve things the way I did). My theory is that ZFS was really designed for pools so large that the IOPS available are rarely all consumed by the load in question (take a look at the Sun Thumper, a 48-drive 4U beast), and/or that the pool can soak up the txg commits in the blink of an eye.
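If you want to experiment along the lines of what I did in that ticket, the old write throttle can be capped so each txg stays small enough for the pool to swallow quickly. The tunable below is the one from this ZFS vintage as I recall it, and the value is purely illustrative; size it to roughly your pool's sustained write speed times the txg interval:

# Cap the dirty data accepted per txg so commits stay short and the
# pool doesn't "hold its breath" (256MB here is just an example):
sysctl vfs.zfs.write_limit_override=268435456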
 