Slow performance on a decent server with a RAIDZ3 array: what is the root cause?

Status
Not open for further replies.

freenas-supero

Contributor
Joined
Jul 27, 2014
Messages
128
Hello! This is my first time posting here!

So I have installed FreeNAS onto an older Supermicro SuperServer running 2x Xeon L5420 CPUs (quad-core at 2.5GHz) and 48GB of DDR2-667 ECC RAM.

The server uses an M1015 flashed to IT mode with a mixture of 8x SATA2 and SATA3 drives (3 Hitachi Deskstars, 4 Seagate Barracudas, and 1 WD Caviar Green) assembled as a single RAIDZ3 vdev for my pool. I plan on adding another 8 drives in the future as 2 vdevs of 4 drives each (the chassis has 16 hot-swap caddies).

CPU usage seems very reasonable (usually below 20%), and RAM is almost maxed out, averaging 38-40GB out of 48, which is expected with ZFS since the ARC grows to fill whatever memory is free...

Each dataset is compressed with LZ4 and exported as an NFS share, which I mount on my Linux boxes.
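
For what it's worth, the mounts on the Linux side look something like this in /etc/fstab (the hostname and paths here are just placeholders, and I have not tuned any of the options):

    # example fstab entry -- "freenas" and the paths are placeholders
    freenas:/mnt/tank/media  /mnt/media  nfs  rw,hard,rsize=65536,wsize=65536  0  0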

The FreeNAS server is currently connected to my LAN via a single gigabit link, though I expect to move to LACP soon with a managed switch to increase bandwidth; for now I use a cheap D-Link green 8-port switch. When moving stuff in and out of the NFS share I usually get transfer rates of about 35-38MB/s, which pales in comparison to my Proxmox virtualization server with its 15k RPM SAS drives and M5016 RAID controller (sustained rates around 80-90MB/s).

Can I blame the single vdev for the poor performance? The NFS share? The older hardware? This server will mostly be a backup server and unified storage appliance for media streaming and such, so it will not be used for performance-sensitive applications, unless I can tweak things to get a decent speed.

Any tips from the gurus are much appreciated for a noob like me!!

thanks!
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
How does your NFS performance compare with CIFS and SFTP? Have you tried getting stats with the switch cut out of the equation (workstation connected directly to the FreeNAS server)?
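
A crude way to compare protocols is to time pulling the same large file over each one. Something like this from a Linux client, where the hostname and paths are only examples (use a file bigger than the client's RAM, or a second run will just measure its cache):

    # read over the existing NFS mount; dd reports throughput when it finishes
    dd if=/mnt/media/bigfile of=/dev/null bs=1M
    # fetch the same file over SFTP
    time sftp user@freenas:/mnt/tank/media/bigfile /tmp/bigfile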
Note that your vdev will have roughly the IOPS of a single disk, so if you're doing something IOPS-intensive you will see significantly less performance than an array of 15k RPM drives. Additionally, I have heard that RAIDZ3 performs significantly worse than RAIDZ2, but I have not tried to quantify the difference. If you have the time, you may want to test with RAIDZ2 instead of RAIDZ3.
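
If you do try it, a rough sequential test from the FreeNAS console would look something like the sketch below. The da0-da7 device names are placeholders, creating a pool destroys whatever is on those disks, and compression is turned off because /dev/zero compresses to almost nothing and would inflate the numbers:

    # EXAMPLE ONLY: wipes the listed disks
    zpool create -m /mnt/testz2 testz2 raidz2 da0 da1 da2 da3 da4 da5 da6 da7
    zfs set compression=off testz2
    dd if=/dev/zero of=/mnt/testz2/testfile bs=2m count=10000   # write test
    dd if=/mnt/testz2/testfile of=/dev/null bs=2m               # read test
    zpool destroy testz2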
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm using RAIDZ3, and while it is marginally slower, the bottleneck is almost always your LAN or your IOPS. I can do almost 900MB/sec reads from my pool locally, so clearly my measly 1Gb LAN is the bottleneck.

If you are doing NFS on Windows, that pretty much explains your problem. NFS on Windows sucks. Really. Really. Bad.

Read some of our stickies and do some performance testing of your server to find out where the limitation is. You need to rule out whether this is a networking problem, a protocol problem, or a pool problem.
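
For example, iperf tells you what the wire can do independent of ZFS and NFS; pair it with a local dd on the pool (with compression off, as in the earlier sketch) and you can tell the three apart. The hostname here is an example:

    # on the FreeNAS box
    iperf -s
    # on the Linux client
    iperf -c freenas -t 30

If iperf shows something near gigabit line rate (~940Mbit/sec) and the local dd is fast, but NFS is still slow, it's the protocol.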
 