6 disk stripe with low read performance

Status
Not open for further replies.

brashquido

Dabbler
Joined
Jun 30, 2014
Messages
17
Hi All,

I have a low-end server I just want to use as a temporary dumping ground for non-essential media files. I'm really not expecting much performance-wise and am not fussed if a disk fails and I lose all the data. The only real requirements are to maximize storage capacity while using as little power as possible.

My system consists of an Intel Atom D2700 with 4GB of RAM (the maximum for the system). I am using a combination of the 2 x onboard ICH7 SATA II connectors and a Silicon Image Sil3114-based 4-port PCI controller for my 6 x 1.5TB Western Digital Green drives.

I installed the 64-bit version of FreeNAS onto a USB stick and created my ZFS pool in a stripe configuration with dedup and compression disabled. I set up CIFS and FTP and started dumping data onto it. I'm getting 55~60MB/s write speeds over both CIFS and FTP, which is perfectly fine with me. Read speed, however, appears to top out at around 15MB/s for both CIFS and FTP. This seems a bit low to me (especially for a stripe), as I know I can get around 80~85MB/s out of these drives individually.

I've not tried UFS yet, but is there anything I can try with ZFS tuning to boost read speed without adding extra hardware?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You need at the very least 8GB of RAM, no discussion. That processor isn't exactly appropriate either.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Well let me try to be more helpful than Eric.

You have already said you don't give a damn about the data really, so I won't jump on the "8GB or die" bandwagon, since if the loss of the data doesn't matter to you, then there's no particularly pressing need to obey all of the recommendations.

Your 9TB ZFS pool is definitely ARC-starved with 4GB, and as Eric said, FreeNAS in general is starved with only 4GB when using ZFS for any pool.

Also, you have a bunch of hardware interfaces that I am personally not familiar with, so I don't know what their impact is.

I suspect the performance that you're seeing on the read side is not surprising given the processor, the small amount of RAM available, and everything else. ZFS is a non-trivial undertaking for the computer, and you're giving it precious little to work with, and giving it extra things to worry about on top.

I personally think you're getting the performance you can expect, and you're not doing anything wrong.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
DrKK said:
Well let me try to be more helpful than Eric. […]

Thanks for being more eloquent than I was ;)
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Ouch, 15 MB/s... I could get better than that out of my USB 2.0 external drives, lol.
 

brashquido

Dabbler
Joined
Jun 30, 2014
Messages
17
Thanks for the replies. I tried using UFS with the same performance results, so perhaps the bottleneck is the PCI controller. I suppose I was just hoping the differential between read and write speed was a config issue. Interestingly, the CPU doesn't get above 40% while writing and stays under 10% for reads. I suspect this might change if the hardware had the capacity for double the RAM.

15MB/s is not really going to be able to effectively service 9TB, so I may have to look at other options. Cheers.
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
What type of NIC are you using? If it is a Realtek, replace it with an Intel one.

ZFS disables prefetching on systems with 4GB of RAM or less. You may see better performance if you manually enable prefetching by setting the following "Tunable" and rebooting:

vfs.zfs.prefetch_disable = 0
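For anyone wanting to check before and after: the tunable name is a real FreeBSD sysctl, though the exact persistence mechanism shown below is a sketch (on FreeNAS the supported route is the GUI's System -> Tunables page, which writes the setting out for you).

```shell
# Check the current setting (1 = prefetch disabled, 0 = enabled):
sysctl vfs.zfs.prefetch_disable

# Flip it at runtime for a quick test:
sysctl vfs.zfs.prefetch_disable=0

# To persist across reboots, use the FreeNAS GUI Tunables page; adding the
# line to /boot/loader.conf by hand achieves the same effect:
echo 'vfs.zfs.prefetch_disable=0' >> /boot/loader.conf
```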

I would recommend leaving LZ4 compression enabled. Your CPU should have plenty of horsepower available to decompress data at gigabit network speeds; even an old Pentium 4 processor can handle that. Here are some LZ4 benchmarks I ran a while ago: http://forums.freenas.org/index.php?threads/lz4-compression-decompression-performance.17284/
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I would not recommend compression for your Atom. It's very well known that the old-gen Atoms cannot saturate gigabit speeds even without compression; best-case speeds have typically been 50-60MB/sec. Compression is only going to make that much worse.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Your PCI card, is it a single-lane (x1) card? What is the exact model number? I suspect you are having a bottleneck issue. Reads actually take longer because the data must be located on disk first, and you have four drives sharing what amounts to a single SATA connector's worth of throughput. Writes, by contrast, get cached in RAM and then placed smoothly onto the drives.
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
cyberjock said:
I would not recommend compression for your Atom. […]

Is this "well known" information based upon LZ4 compression? LZ4 compression may be slower then gigabit ( Gigabit = 125 MB/sec) but decompression is way faster.

I don't have access to an Atom-based system, but I found a link [1] that reports the following LZ4 performance on an Atom Z530 (1st gen, 1.6 GHz): 83 MB/sec compression, 304 MB/sec decompression. The OP's processor is the D2700 running at 2.13 GHz, roughly a 500 MHz increase.

83 MB/sec compression is still faster than the OP's reported write speed, and 304 MB/sec decompression handily beats gigabit speeds.

[1] - http://encode.ru/threads/1266-In-me...ppy)-compressors?p=25401&viewfull=1#post25401
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
I forgot to mention earlier -- sequentially reading large (media) files is basically the perfect use case for ZFS prefetching. Enabling prefetching should make a difference.

But ultimately this is just an intellectual exercise. The OP will want to have at least 8 GB of RAM to get 'official' support from the FreeNAS community.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So you think 83MB/sec compression *without* ZFS load, CIFS/NFS/iSCSI load, etc is going to work out well?

The *total* load for a FreeNAS box using ZFS without compression is already a major limiting factor (aka the 50-60MB/sec I mentioned earlier). Do you think it will work out to add even more loading?

Answer: No.
 

brashquido

Dabbler
Joined
Jun 30, 2014
Messages
17
Thanks to everyone again for their input.

@eraser - I have tried with and without LZ4 enabled, and for the most part performance is the same. I'll give the tunable you mentioned a try, though, and see how it goes. Even a 40MB/s sequential read rate would satisfy me for what this box is intended for.

@joeschmuck - I can't seem to find much technical detail on the Vantec 6-port SATA II card on the Vantec site. For the cost of it, I wasn't expecting it to set speed records; my basic purchasing criteria were at least 4 ports and a PCI interface. I have tried creating single-disk ZFS pools connected to both the onboard ICH7 and the Vantec, and both show very similar performance: writes at around 55~60MB/s, reads at around 15~20MB/s.

I'll try a few things over the next few days, but if I am unable to double the read performance I might just look at re-purposing this hardware as a router/firewall or some such. Essentially I am after a low-cost, low-power solution I can run 24/7 to dump some files on. I have 2 x 8GB 240-pin DDR DIMMs with no home, so for the cost of something like the Biostar NM70I-1037U (Intel Celeron 1037U) I'd still have a 17W SoC with around twice the performance of the Atom D2700, and I could populate it with 16GB of RAM. I'd need to source another PCIe SATA controller, but with the extra onboard SATA I could even hook up an SSD (I also have an unused 64GB SSD at the moment) to use for L2ARC.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Forget L2ARC...

Yours is a very slow SATA card indeed, but that still does not explain the speed difference.

Did you try Eraser's suggestion?
Code:
vfs.zfs.prefetch_disable = 0


Please also find the answer to his question about the network chipset being used. If it is Realtek, then that is the answer (still try his prefetch suggestion, though)...
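For identifying the NIC from the FreeNAS console, something like the following works on FreeBSD (the driver name gives the vendor away: re(4) is Realtek, em(4)/igb(4) are Intel):

```shell
# List interfaces; the name prefix is the driver (e.g. re0 = Realtek).
ifconfig

# Or dump PCI devices with vendor strings and look for the network class:
pciconf -lv | grep -B 3 -i network
```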
 

brashquido

Dabbler
Joined
Jun 30, 2014
Messages
17
The network card is a Realtek RTL8111E. However, as there is only a single PCI expansion slot on the board, which is already populated by the SATA card, there is not much I can do about that. Just to clarify, exactly how would the Realtek card be affecting my system?

I've not edited that tunable yet, will do it when I get home tonight.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I doubt the Realtek card is causing that much of an issue but I'm certain it is a factor due to your slow CPU.

I think we need to see some actual benchmark results. Right now we are basing all this on what you are telling us, with no numbers from a specific benchmark. I've got to go to my day job, so I don't have time to post which benchmarks to run; someone else should be able to fill in, or you could post your own benchmarks if you have done them.
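The usual quick-and-dirty local benchmark is dd against the pool, which takes the network out of the picture entirely. A sketch follows; the `/mnt/tank` mountpoint is an assumption, so substitute the path to a dataset on your own pool:

```shell
# Assumed pool mountpoint -- change to your own dataset path.
TESTFILE=/mnt/tank/ddtest.bin

# Sequential write: 4GiB of zeros. Only meaningful with compression off,
# since zeros compress to almost nothing.
dd if=/dev/zero of="$TESTFILE" bs=1M count=4096

# Sequential read. Use a file comfortably larger than RAM (4GB here on a
# 4GB box) so the ARC cannot serve the whole thing from memory.
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

dd prints elapsed time and bytes/sec on completion, which is the number worth posting back to the thread.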
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So far, it sounds like a combination of loads on a puny CPU in a worst-case environment.

  • The CPU is not fast enough for the average FreeNAS usage
  • There's not enough RAM
  • Realtek GbE controller requires the CPU to do most of its work
  • SATA controller is a legacy PCI card, which means it's sharing very limited bandwidth with all other PCI devices (I imagine the motherboard has a few more things hanging off the PCI bus)
 

brashquido

Dabbler
Joined
Jun 30, 2014
Messages
17
Well, it seems it is good news for my FreeNAS box, but not such good news for my main storage server. It appears the LSI 9260-8i in my main server is on the fritz: I decided to run my FTP read/write tests from the SSD in my PC rather than from my main server, and all of a sudden, without changing a thing, my reads are sitting at over 50MB/s. So essentially it looks as if the RAID 6 volume on my LSI card is having write throughput issues (which is why it appeared that FreeNAS read speeds were down). Weird, as I only just dumped 300GB worth of footage from a recent vacation onto my main server on the weekend with no such issue. Best get onto that one before I have to restore from backup.

I turned on the autotune feature, which made the adjustments below. Since enabling prefetch and applying these automatic changes, performance has gone through the roof compared to what it was last night. Memory utilisation is not getting above 2GB, and with the exception of the occasional spike the CPU stays below 60%.

Sysctls
kern.ipc.maxsockbuf = 2097152
net.inet.tcp.recvbuf_max = 2097152
net.inet.tcp.sendbuf_max = 2097152

Tunables
vfs.zfs.arc_max = 1G
vfs.zfs.prefetch_disable = 0 (Manually set as per advice in this thread)
vm.kmem_size = 1.37G
vm.kmem_size_max = 1.71G


While I was at it, I decided to rerun my fairly unscientific test comparing two disks connected to the onboard ICH7 controller (AHCI enabled), configured as a ZFS stripe, against two disks on the Silicon Image (Sil3114 chipset) controller with the same ZFS configuration. Compression, atime and dedup off. Uploading/downloading the same 10GiB file via FTP, approximate performance was as follows:

ICH7
Write = 75MB/s (CPU 30%, 1.5GB RAM used)
Read = 88MB/s (CPU 55%, 1.5GB RAM used)

Sil3114
Write = 45MB/s (CPU 30% with one spike to 70%, 1.5GB RAM used)
Read = 47MB/s (CPU 20%, 1.5GB RAM used)

This indicates to me, with enough conviction, that the Sil3114 is not too flash; however, I was pretty impressed with the numbers from the ICH7 considering the system resources. Hopefully this is useful for someone else considering running FreeNAS on a less than optimally spec'd server.

Big thanks to everyone again for their help. Obviously this box is not going to break any speed records, but even with the poor performance of the Sil3114 I think it will suit my needs.
 