Performance increased

Status
Not open for further replies.

Mike83

Dabbler
Joined
Oct 27, 2012
Messages
14
Hi

I am new to FreeNAS and recently bought a microserver. I tried FreeNAS and NAS4Free; the latter I couldn't get working (it didn't connect to my network), so I installed FreeNAS 8.2.0 to a USB drive, made my pool, etc. But writing to it was very slow, sometimes 9-10 MB/s. The network would run at a decent speed for a few seconds, then stop for a few seconds.

Today I upgraded to 8.3.0, created a new pool and activated the Windows share. I was surprised it worked at full speed, 70-90 MB/s, and about 40 MB/s if I used lzjb compression. This is great, but what has changed? My hardware is exactly the same, so why wasn't it working at a good speed in 8.2.0? One difference I can see in 8.3.0 is that it lists deduplication when you create volumes. Was that automatically enabled in 8.2.0? I wouldn't have enabled it myself; in fact I saw no reference to it.

Also, what are the compression level and "enable atime" options for? Most of the things I will be storing are already compressed, so do I need compression? I noticed "inherit" and "off" are faster than lzjb (the recommended setting); which ones do most people use? I'm also not sure whether I need atime on "inherit", "on" or "off".
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
These sound like great questions to keep in mind while you read the documentation.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
From reading the "What's new" section of the FreeNAS 8.3 manual, I see a few things that could impact performance:

"Disable Nagle's algorithm in order to provide better LAN network performance at the expense of WAN performance."

"Add AES-NI hardware support for the Intel Core i5/i7 processors that support this encryption set. This support speeds up AES encryption and decryption." (if you are using an Intel system with hardware AES encryption, and your FreeNAS install uses encryption)

Besides that, 8.3 moves to a newer version of FreeBSD (8.3), which brings with it a newer kernel and newer hardware drivers, which may include performance tweaks and improvements.

I can't remember if deduplication was enabled by default in 8.2.0, but if it was, that could definitely have slowed the system down compared to running without it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Deduplication will NEVER be enabled by default. In fact, if you read the release notes for 8.3 it says something like "Think carefully before enabling deduplication. Then after you've thought about it use compression instead." Deduplication is one of those things that, in certain circumstances, can be awesome. But it can all go to hell in a split second if you don't have enough RAM to clean up an unclean zpool. You could suddenly find your data locked in an unclean pool forever until you get a machine with enough RAM.

The developers have made it very clear that deduplication, while supported, is something that you shouldn't ever consider enabling unless you know exactly what you are doing. I'm willing to bet there's less than 10 people on the planet that have deduplication enabled and actually know what they're doing while using FreeNAS.
 

PlowHouse

Dabbler
Joined
Dec 10, 2012
Messages
23
I'm running into a performance issue as well. I have ten 750 GB 7200 RPM SATA hard drives and 4 GB of RAM in this system. I used ZFS as my file system and created a RAIDZ2, so I have seven active drives plus two parity drives, with the tenth drive marked as a spare. When I copy files from a Windows machine to my configured NAS, I am only getting speeds of 2.5-5 MB/s. I have prefetching disabled, and I have tried to tune the ARC as best I could by comparing against similar systems other people have posted, but I seem to be stuck at those terrible speeds.

The hardware I'm using is about five years old: a standard motherboard with a 64-bit Intel processor. So it's not the greatest of systems, but in my opinion the hardware should at least be able to write at 30 MB/s. Maybe ZFS was the wrong choice for this kind of setup, but I'd like to hear back from some people before I blow away the RAID and try again with different settings, such as using UFS instead.

Any ideas to increase this performance would be greatly appreciated.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280

1.) When you say 5 years old, what are the specifics? With many drives, the CPU can become the limit with ZFS at higher speeds, but at the 2.5-5 MB/s level that seems unlikely to be your issue. Just in case, though, have you measured CPU load during reads and writes?

2.) Your RAM is on the low side. From what I have read, the rule of thumb for ZFS is about 1 GB of RAM per TB of storage, but more is better. I would go with at least 8 GB for this system.

3.) If deduplication or compression is on, turn it off. Especially with an older CPU and the small amount of RAM you have, these can have a HUGE performance impact.

4.) What network card are you using? Speeds this low are uncommon for a network issue, but it is still recommended to use a dedicated Intel NIC for NAS boxes, as they are much more reliable and perform better than the Realtek garbage that comes on board on most systems.

5.) How is your Windows box networked to the NAS box? Is it wireless, or does it use unusual networking at any point (powerline, Ethernet over coax)? The speeds you are getting seem pretty typical for Wi-Fi. Vendors advertise up to 600 Mbit/s, but between devices not always having multi-band capability or multiple antennas, signal interference, signal strength and protocol overhead, I have never seen one come anywhere close to the advertised figure. More typical is 65 Mbit/s on 802.11n devices (54 Mbit/s on 802.11g devices), less about half for signal issues and overhead, which puts you right about in the speed range you are experiencing. (Remember, there are 8 bits in a byte, so divide by 8 to go from Mbit/s to MB/s.)
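For points 2 and 3, you can check what's actually enabled from the FreeNAS shell with the standard ZFS commands. A sketch — "tank" here is just a placeholder for your actual pool name:

```shell
# Show whether dedup, compression and atime are set, for the pool
# and every dataset under it ("tank" is a placeholder pool name).
zfs get -r dedup,compression,atime tank

# Turn each one off if it is on and you suspect it of slowing writes:
zfs set dedup=off tank
zfs set compression=off tank
zfs set atime=off tank
```

Child datasets inherit these properties unless they were set explicitly, so setting them at the pool level usually covers everything.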

Wired Ethernet is more reliable.
Gigabit Ethernet is good for about 125 MB/s less overhead, so ~100 MB/s.
100 Mbit Ethernet is good for about 12.5 MB/s less overhead, so ~10 MB/s.
Old 10 Mbit Ethernet will only give you ~1 MB/s.

More exotic solutions like powerline networking and Ethernet over Coax are sometimes good, sometimes bad, depending on the solution, the power of the lines, interference on the power grid, etc. etc. When I once had a powerline networking system it would work well, but randomly drop out. Took me a while to figure out that it dropped out whenever my ex used the vacuum cleaner anywhere in the house.

Also remember that the maximum speed between two systems is the speed of the slowest component between them. If your server and Windows box both have gigabit Ethernet adapters, but the ports on your router/switch are only 100 Mbit (common with many ISP-provided routers), guess what? Your maximum speed is going to be 100 Mbit/s, or ~10 MB/s.
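The bits-to-bytes arithmetic above, spelled out (awk is used just for the floating-point division; the ~20% overhead factor is a rough rule of thumb, not an exact number):

```shell
# Convert line rate (Mbit/s) to raw MB/s, then knock off ~20% for overhead.
awk 'BEGIN {
    n = split("1000 100 10", rate, " ")
    for (i = 1; i <= n; i++) {
        raw = rate[i] / 8         # 8 bits per byte
        usable = raw * 0.8        # rough protocol/OS overhead
        printf "%4d Mbit/s = %5.1f MB/s raw, ~%.0f MB/s usable\n", rate[i], raw, usable
    }
}'
```

which lines up with the gigabit/100 Mbit/10 Mbit figures above.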

You can determine if the problem is networking related or local to the box by doing a local drive speed test.

a.) Go to the console on FreeNAS. Press the number for the shell and hit Enter.
b.) Change to your ZFS volume folder (typically cd /mnt/volume-name; note that Unix uses forward slashes, the opposite of DOS/Windows).
c.) Do a write-speed test by writing a large file full of zeroes from /dev/zero to a test file (dd if=/dev/zero of=testfile bs=2048k count=200).
d.) Do a read-speed test by reading the file you just wrote back to /dev/null (dd if=testfile of=/dev/null bs=2048k count=200).
e.) Delete the test file so that it doesn't take up space (rm testfile).
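Steps b) through e) put together, runnable as-is (using /tmp here purely for illustration — on FreeNAS you would cd to /mnt/your-volume-name instead so the test actually hits the pool):

```shell
# Change to the directory on the volume you want to test
# (/tmp is a stand-in; use /mnt/your-volume-name on FreeNAS).
cd /tmp

# Write test: stream 400 MB of zeroes to a file.
# dd prints a bytes/sec figure when it finishes.
dd if=/dev/zero of=testfile bs=2048k count=200

# Read test: stream the same file back into /dev/null.
dd if=testfile of=/dev/null bs=2048k count=200

# Clean up so the 400 MB test file doesn't hang around.
rm testfile
```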

Be patient; these read and write tests may take a while. But it is important to use large files, or drive cache and ARC will inflate your numbers and ruin your diagnosis.

Now that you have read and write speeds, you can compare them to what you actually get when transferring data from your Windows box. (dd reports bytes per second, so divide by 1024 twice to get numbers comparable to Windows' MB/s readings.)
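For example (the bytes/sec figure below is made up, just to show the arithmetic):

```shell
# Suppose dd reported "... (99864381 bytes/sec)" at the end of its run.
BPS=99864381
# Two integer divisions by 1024: bytes/sec -> KB/s -> MB/s.
echo "$((BPS / 1024 / 1024)) MB/s"
```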

If the local speed figures are much higher than your actual transfer speeds, then something in the network path (server NIC, cabling, switch/router/Wi-Fi, or the Windows NIC) is likely, but not necessarily, the problem.
 

PlowHouse

Dabbler
Joined
Dec 10, 2012
Messages
23
@mattlach

Thank you for replying to my post. I ran a performance test locally, moving files from one subdirectory to another, and got the same results as going over the network to a CIFS share. At the moment I am using the NIC built into the motherboard rather than a separate Intel NIC (which I do have lying around), but I have held off on swapping it in due to the low write speeds I'm getting locally. Once I get the local write speeds figured out, I'll toss the Intel NIC in. I know dedup is off, but I will double-check my compression settings and see where that gets me.

I will follow up once I look at my machine tonight and try tinkering with the settings.

Thanks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you have set up tweaks and stuff to try to "optimize" your system, I'd recommend you get rid of them. You shouldn't be trying to optimize your system until it's up and working. Only then, after lots of research, should you start adding them back in, testing them one at a time.
 