ZFS Write Performance Issues - FreeNAS 8.0 RELEASE


ChromoX

Dabbler
Joined
Jul 1, 2011
Messages
10
I am running 8.0 RELEASE and I am having some issues getting everything to run right.

For example, I have 4 x 2 TB drives in a raidz1 zpool. The maximum write speed I can maintain on average is 4 MB/s. I have used iostat to see what was going on....

Bursts everywhere. I will get a burst of 40 MB/s, then nothing for 3 to 5 seconds, then another burst...

I have tried tweaking things such as kern.maxvnodes=250000, vfs.zfs.prefetch_disable=1, vfs.zfs.txg.timeout=5 and vfs.zfs.txg.write_limit_override=256m.
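For reference, here is roughly how I have them set (loader.conf-style; the write_limit_override value is the same 256 MB written out in bytes, and some of these can apparently also be changed at runtime with sysctl):

# /boot/loader.conf entries (illustrative, same values as listed above)
kern.maxvnodes="250000"
vfs.zfs.prefetch_disable="1"
vfs.zfs.txg.timeout="5"
vfs.zfs.txg.write_limit_override="268435456"   # 256 MB, in bytes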

It helped a little bit, but not much. What's wrong?
 

Tekkie

Patron
Joined
May 31, 2011
Messages
353
Off the deep end, eh? :)

What does zpool status say? A bad drive perhaps?

Do a search on the forum for testing with dd, try some of those tests, and report back your numbers.
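Something along these lines, for example (the file name, block size, and count are just placeholders; write more data than you have RAM so the caches don't skew the result):

dd if=/dev/zero of=/mnt/tank/ddtest.bin bs=1m count=8192   # ~8 GB write test
dd if=/mnt/tank/ddtest.bin of=/dev/null bs=1m              # read it back
rm /mnt/tank/ddtest.bin                                     # clean up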
 

ChromoX

Dabbler
Joined
Jul 1, 2011
Messages
10
So I just posted this in the IRC channel trying to find some help with this issue:

Hey everyone! I have been messing around with FreeNAS for the last couple of hours and I can't get my write performance to be anything decent. I have been testing my write speed using "dd if=/dev/zero of=/mnt/tank" and it reports a 100 MB/s write speed. I have also tested my point-to-point connection speed using iperf, which said 107 MB/s. Now the problem is that whenever I start transferring something I only get about 3 MB/s of writes, measured with zpool iostat, and it is also very bursty. I have looked up tweaks for ZFS such as prefetch_disable and txg.timeout and they helped a little, but I am still getting horrible write speeds and the writes are very bursty. Can anyone help me?
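For reference, the iperf test was just the stock client/server pair (the hostname and duration below are placeholders):

iperf -s                      # on the FreeNAS box
iperf -c freenas.local -t 30  # on the client; freenas.local stands in for the NAS address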
 

ChromoX

Dabbler
Joined
Jul 1, 2011
Messages
10
Another update:

So I installed 8.0.1 BETA 3 hoping to get some speed improvements from the 4k block sizes, ran "dd if=/dev/zero of=/mnt/tank" again, and got about a 50% improvement; I am now writing to ZFS with that command at 200 MB/s.

But....

My network write performance using AFP is abysmal. It is very bursty and sometimes just stalls for a couple of seconds.

I am confused by this result, because when I run iperf between the NAS and the computer that is transferring the data I get 100 MB/s over ethernet.

So my next question is... Is AFP slowing everything down? I can write to the ZFS file system at top speed, and network throughput between my box and the NAS maxes out a gigabit line.

Using scp I can get a sustained 10 MB/s on wireless and 20 MB/s on ethernet...

Does the AFP service suck?

What could be causing this?
 
Joined
May 27, 2011
Messages
566
What kind of hardware are you running? Tell us everything.


CPU, memory, disks, disk controllers, 32- or 64-bit, motherboard, and everything else I forgot to ask for. More is better.
 

ChromoX

Dabbler
Joined
Jul 1, 2011
Messages
10
CPU/Motherboard: GIGABYTE GA-D525TUD, Intel Atom D525 @ 1.8 GHz, 1 MB L2 cache, BGA559, Intel NM10
Memory: 4 GB of DDR3 Kingston Memory - KVR1333D3N9/2G
Disks: 4 x Samsung EcoGreen HD204UI
Disk Controllers: 2 SATA ports -> Intel NM10; Other 2 SATA ports -> GIGABYTE SATA2
IO Controller: iTE IT8720 chip

Running: FreeNAS 8.0.1 BETA3 amd64 Full Install

Spec Sheet for Motherboard: http://www.gigabyte.com/products/product-page.aspx?pid=3549#sp

Other notes:

Still trying to figure out the situation: when I installed 8.0.1 BETA I used 4k blocks to increase speed, and this did help a little... Very little.

Rsyncing over SSH gets me a sustained 9.5 MB/s, and while this is better I still feel like something is going on, especially when I can push 100 MB/s over the network to the box and can write at 100 MB/s with dd.

I opened two rsync connections over SSH and together they got me around 20 MB/s of sustained total write, but every 8 seconds, almost on the dot, the rate drops by about 10-15 MB/s for a second.
 
Joined
May 27, 2011
Messages
566
I have used iostat to see what was going on....

Would that be zpool iostat or just regular iostat? Are you doing an instantaneous read, or are you reading over a span of time like 10 seconds?

Try

zpool iostat 10

Ignore the first line and then look at the rest; it will report every 10 seconds. The first line is a cumulative average, not a current rate, so if you don't tell iostat to sample over an interval, the results will be misleading.

Also, when you are doing the test, can you check the top few processes with the 'top' command and let us know how much CPU time they are taking up?
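For example (the -v form and top's -S flag are optional extras that can help spot a single slow disk or a busy kernel thread):

zpool iostat 10           # pool-wide stats, a new sample every 10 seconds
zpool iostat -v tank 10   # per-vdev/per-disk breakdown
top -S                    # -S includes kernel/system processes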
 

ChromoX

Dabbler
Joined
Jul 1, 2011
Messages
10
I have figured out from using "dd if=/dev/zero of=/mnt/tank" that this is probably not a ZFS issue and is something else. And yes, I did use zpool iostat with intervals of 2, 10 and 30 seconds, and it would read between 3-4 MB/s most of the time.
 

ChromoX

Dabbler
Joined
Jul 1, 2011
Messages
10
So after thinking about it a little more, I think this may be an AFP problem. AFP is the only thing that is very, very, very slow. Does FreeNAS ship the latest server software for AFP? Are there any tweaks for it?
 

ChromoX

Dabbler
Joined
Jul 1, 2011
Messages
10
So I got it fixed... Somewhat.... Transferring large files and running zpool iostat at the same time yields this result...


persistence# zpool iostat 2
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        3.41T  3.84T      4    117   369K  12.3M
tank        3.41T  3.84T      0    763      0  88.2M
tank        3.41T  3.84T      0      0      0      0
tank        3.41T  3.84T      0    816      0   100M
tank        3.41T  3.84T      0    732      0  84.3M
tank        3.41T  3.84T      0     35      0  4.24M
tank        3.41T  3.84T      0    695      0  79.3M
tank        3.41T  3.84T      0      0      0      0
tank        3.41T  3.84T      0    759      0  87.4M
tank        3.41T  3.84T      0      0      0      0
tank        3.41T  3.84T      0     36      0  4.45M
tank        3.41T  3.84T      0   1015      0   119M
tank        3.41T  3.84T      0      0      0      0
tank        3.41T  3.84T      0    767      0  88.3M
tank        3.41T  3.84T      0      0      0      0
tank        3.41T  3.84T     10    105  41.9K  12.9M
tank        3.41T  3.84T      0    596      0  67.0M
tank        3.41T  3.84T      0      0      0      0
tank        3.41T  3.84T      0    792      0  91.3M
tank        3.41T  3.84T      0    168      0  21.1M
tank        3.41T  3.84T      0    112      0  13.8M
tank        3.41T  3.84T      0    488      0  52.1M


I have already tuned vfs.zfs.txg.timeout = 5; is there anything else I might be able to tune to flatten this out more?
 

pauldonovan

Explorer
Joined
May 31, 2011
Messages
76
I have already tuned vfs.zfs.txg.timeout = 5; is there anything else I might be able to tune to flatten this out more?

I have very similar hardware, though I've just got two 4k drives in a ZFS mirror. Try changing vfs.zfs.txg.write_limit_override (you can do this using sysctl while the system is running). I've got a value of 268435456 (256 MB), but it really depends on your hardware and the disk cache sizes, so play around with it. It may limit the peak transfer rate, but it should help to smooth things out some more.
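For example (268435456 is just the value that happens to work for me):

sysctl vfs.zfs.txg.write_limit_override=268435456   # 256 MB, in bytes
sysctl vfs.zfs.txg.write_limit_override              # read back the current value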

I can get about 60-70 MB/s using AFP from a new iMac for both read and write with one drive, and perhaps 50 MB/s in a mirror configuration.

Is your networking equipment capable of jumbo frames? The recent FreeNAS releases support jumbo frames on the Realtek chip. Add or edit your re0 network interface in the GUI and put 'mtu 9000' in the Options field.
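For reference, that GUI option amounts to roughly the following (the switch and the client's NIC also need to be configured for a 9000-byte MTU):

ifconfig re0 mtu 9000   # enable jumbo frames on the Realtek interface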
 
Joined
May 27, 2011
Messages
566
Intel Atom D525 @ 1.8 GHz

Rsyncing over SSH gets me a sustained 9.5 MB/s, and while this is better I still feel like something is going on, especially when I can push 100 MB/s over the network to the box and can write at 100 MB/s with dd.

That seems pretty reasonable to me. Atoms are not known for speed, and rsync and SSH are fairly CPU-intensive; there is a lot of calculation being done for SSH encryption, and depending on how you're using rsync it can be taxing too.
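If the SSH encryption is the bottleneck, one common workaround is a lighter cipher, something like this (assuming the installed OpenSSH still offers arcfour; the paths and hostname are placeholders):

rsync -av --progress -e "ssh -c arcfour" /some/source/ user@freenas:/mnt/tank/dest/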
 

esamett

Patron
Joined
May 28, 2011
Messages
345
FYI: I also have slow ZFS writes with a pulsing pattern, at about half of my read speed, which is fairly steady-state. I have adequate CPU and memory headroom. I am running 10 x 2 TB drives with 512-byte sectors.
 
Joined
May 27, 2011
Messages
566
FYI: I also have slow ZFS writes with a pulsing pattern, at about half of my read speed, which is fairly steady-state. I have adequate CPU and memory headroom. I am running 10 x 2 TB drives with 512-byte sectors.

What are your system specs, and how are the 10 drives arranged?
 

esamett

Patron
Joined
May 28, 2011
Messages
345
My system specs

I have an AMD dual-core 5000(?) CPU on an ECS GeForce motherboard and 4 GB of DDR2 RAM. I have 4 on-board SATA2 connectors, a Promise 2-port SATA2 PCI card, and a generic 4-port SATA2 PCI card. I have an Intel PCI Express gigabit NIC.
 
Joined
May 27, 2011
Messages
566
You have 6 disks on the PCI bus. I'd bet that's the issue. PCI is a shared bus and only one device can use it at a time, so either the 4-disk controller talks or the 2-disk controller talks; classic 32-bit/33 MHz PCI also tops out at roughly 133 MB/s of theoretical bandwidth for the whole bus. I'd also bet that only 1 of the 6 disks can operate at once (I can't say for sure without knowing more about the controllers' architecture).
 

esamett

Patron
Joined
May 28, 2011
Messages
345
Is the PCI bus the culprit or not?

(I am not a hardware expert by any means.) Is it reasonable to assume that the bandwidth limitation from running disk controllers off the PCI bus should have a symmetrical impact on server reads and server writes? If so, then my write performance cannot be PCI-bus limited. My read performance (from Windows) is about double my write performance. The read transfer graph is fairly smooth, but the write transfer curve is very "bumpy", looking somewhat like an EKG.

I posted graphs here:
http://support.freenas.org/ticket/424
 
Joined
May 27, 2011
Messages
566
Well, when you're reading, the FreeNAS box is in charge and it can fetch the data in an orderly fashion. When you're writing, you're throwing as much data as you can against the wall and letting FreeNAS deal with it.

You also need to remember that reading is very different from writing. When you're reading, you're mostly bypassing the cache on the FreeNAS box, so you're limited by the disk speeds. When you're writing, you are using the cache: data is tossed into the caches and then flushed to the disks. You can write to the cache very fast, but when it fills up, the box has to tell the sender to back off because it has nowhere to put the data. That gives you the highs and lows. I have a pretty beastly setup; I can initially write at 100 MB/s, but after 5 or 6 GB of data my cache is full and I get a drop to about 50 MB/s for 100 ms or so, then it goes back up and flutters between 80 MB/s and 95 MB/s. (I just tested this with a 16 GB file.)


Try getting 4 disks, adding them to the motherboard controller, and checking the performance.
 

esamett

Patron
Joined
May 28, 2011
Messages
345
It is not practical for me to change hardware at this time. Hopefully the developers can work out some software tweaks for better performance.
 
Joined
May 27, 2011
Messages
566
It is not practical for me to change hardware at this time. Hopefully the developers can work out some software tweaks for better performance.

You may not want to admit it, but your limit is your hardware.

Do you have any PCIe slots? If you have a 1x slot, you could migrate 4 drives off your PCI bus for about $50.
 