FreeNAS performance advice on new system


pwanage

Cadet
Joined
Jul 13, 2012
Messages
5
I just built a FreeNAS box and it seems to be working well, though I'm noticing a few things I'm not happy about. Before I go out and blow more money on this, I wanted to get some opinions.

This is my current config:

Case: Lian Li - PC-Q08B
Mobo: Jetway - JNF99FL-525-LF D525 ICH9R
PSU: 500W ATX
RAM: 8GB
HDs: 6 x Seagate SV35.5 ST1000VX000 1TB (formatted in ZFS w/o deduplication and compression)
Boot: OCZ Rally2 4GB

These are my primary uses:
Media Server - music and movies (std and HD)
General file storage
Streaming media

My main system is a Mac though I do use Windows. I've been moving large files back and forth and have noticed that both AFP and CIFS will eat the CPU after a few seconds of transfer. It normally stabilizes at 70% - 80% WCPU with ntpd and avahi-daemon each eating about 50% WCPU.
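
A quick way to watch this while a transfer is running, just as a sketch of the usual FreeBSD top invocation (-S includes kernel processes, -H shows individual threads):

Code:
# Watch which processes/threads climb while a copy is in progress
top -SH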

The other issue is that I can only hit 50MB/s for my network transfer. I have the NICs aggregated on a gigabit switch. I'm thinking that the NIC type might be the issue and that I don't have enough resources to reach higher speeds.
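
To check whether the aggregation is actually doing anything, the lagg status can be inspected from the console (a sketch; the interface name lagg0 is an assumption based on the default naming):

Code:
# Shows the aggregation protocol, its member ports and which ones are active
ifconfig lagg0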

What do you guys think? Do I need better hardware, or just some tweaking? If I need to upgrade, what processor would you go with? Any other input would be helpful too.

Thanks
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi pwanage,

I think, to be blunt, that you need better hardware.

Do a search on the forums here for other Atom-based setups to get some idea of what others are getting out of their systems....I think you will find that 50MB/s is actually a pretty good result for such a setup. Odds are bonding the NICs isn't doing anything for you, as you shouldn't really be able to move enough data to max out even a single gig-e link.

I think the biggest problem you have might actually be the drives. The "SV" series of drives is made to store video, not data....the sort of setup where there is a constant stream of data coming in and a few incorrect bits here & there won't matter too much, but that's pretty much the opposite of what you would want from something serving data.

I'm kind of surprised that CIFS is burning up a core. I know it likes the MHz, but I didn't think in a home environment it would actually work a proc like that. Ntpd certainly shouldn't be putting that sort of load on your system.

As for upgrading, right now I'm on kind of an Intel i3 kick. I really like the "baby" Sandy Bridge chips...enough that I'm picking one up this weekend as part of my project to replace the motherboard, proc & memory in my own filer. I run an add-on LSI SAS controller, so I need a board that will work with an x8 PCI-e card; I went with a Supermicro mATX server board.

-Will
 

pwanage

Cadet
Joined
Jul 13, 2012
Messages
5
I did some searching about Atom processors and found mixed reviews, though many people seem to be going with full desktop solutions.

I can't imagine that the disks would be a real big issue. The bus is SATA 3Gb/s and the drives are SATA 6Gb/s. I found a forum post (below) and decided to check the disk performance.

http://forums.freenas.org/showthrea...rformance-Benchmarks-and-Cache&highlight=Atom

Write:
Code:
dd if=/dev/zero of=/mnt/storage/tmp.dat bs=2048K count=50K
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 696.022056 secs (154268362 bytes/sec)


Read:
Code:
dd if=/mnt/storage/tmp.dat of=/dev/null bs=2048K count=50K
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 324.199742 secs (331197618 bytes/sec)
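
For comparison with the network numbers, a quick back-of-the-envelope conversion of those figures to MiB/s (not part of the dd output):

Code:
# bytes/sec reported by dd, divided by 2^20
echo $((154268362 / 1048576))   # ~147 MiB/s sequential write
echo $((331197618 / 1048576))   # ~315 MiB/s sequential read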


One common theme I noticed for Atom processors was that hyper-threading seems to have a performance effect. To see if it mattered here, I ran the test again with it turned off.

Write:
Code:
dd if=/dev/zero of=/mnt/storage/tmp.dat bs=2048K count=50K
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 783.457388 secs (137051720 bytes/sec)


Read:
Code:
dd if=/mnt/storage/tmp.dat of=/dev/null bs=2048K count=50K
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 384.470694 secs (279277937 bytes/sec)


Seems Hyper-Threading helps in my case, though not by that much.

I also remembered that these are AF (Advanced Format) drives, so I tested again with a 4096K block size and with HT turned back on.

Write: (I killed this early)
Code:
dd if=/dev/zero of=/mnt/storage/tmp.dat bs=4096K count=25600K
^C49263+0 records in
49262+0 records out
206619803648 bytes transferred in 1357.766319 secs (152176262 bytes/sec)


Read: (I killed this early)
Code:
dd if=/mnt/storage/tmp.dat of=/dev/null bs=4096K count=25600K
^C31865+0 records in
31865+0 records out
133651496960 bytes transferred in 417.360335 secs (320230472 bytes/sec)


Point being, I don't seem to be that far off in drive performance compared to others with similar configs. At least that's my two-cent assessment. It seems like the processor has a lot to do with it, along with the SATA bus, correct?

Also, unless there is something wrong with my system, I should be able to saturate a NIC. Would that be down to the drive limitation you mentioned? Based on those results, though, I shouldn't have an issue unless the processor isn't keeping up or there's some driver issue with the NIC, right?
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Good idea benchmarking hyper-threading.

You might want to play around with iperf; UDP has less overhead, so it's useful for benchmarking how close you can get to saturating a NIC.
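
For reference, a basic iperf run looks something like this (a sketch; the hostname is a placeholder and the flags are the standard iperf2 ones):

Code:
# On the FreeNAS box, start iperf in server mode
iperf -s

# From the Mac/PC client, run a TCP test: 200 seconds, 2 parallel streams
iperf -c freenas.local -t 200 -P 2

# Add -u to run a UDP test instead of TCP
iperf -c freenas.local -t 200 -u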
 

pwanage

Cadet
Joined
Jul 13, 2012
Messages
5
Okay, just finished up some iperf testing and upgrading the FW on my Cisco SG200-08 switch.

Test 1 Results TCP:
1 parallel stream for 200 sec, "Type of service = Throughput", rest default settings.
75.3 MB/s

Test 2 Results TCP:
2 parallel streams for 200 sec, "Type of service = Throughput", rest default settings.
40.3 MB/s + 41.1 MB/s = 81.4 MB/s

UDP results are horrible: 0.06 MB/s.

I also moved some more large files via AFP after the switch FW upgrade. No change; I think I might be hoping for a little too much there. ;)

Given this information it seems that the bottleneck is the processor. What do you think?
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Interesting results. It seems the network is the bottleneck; I wouldn't jump to say CPU unless you have data showing the CPU was at 100% during your iperf run. But I bet your guess isn't far off.

Weird that UDP did so badly. When I find some time, I'll test on my low-powered FreeNAS box and see what I get. The highest I've ever gotten on file transfers is 75-80MB/s, but I've never tried iperf on it.
 

pwanage

Cadet
Joined
Jul 13, 2012
Messages
5
Well, those are just packets not going through AFP or CIFS. I'm not sure if the method/protocol would have such a large negative effect, but that's my guess, as it seems both CIFS and AFP can be CPU intensive. I can rerun the iperf test and check the CPU; I'll be back with those results. In prep for the worst-case scenario of changing out the hardware, this is what I've come up with. I'm really hoping that I don't have to swap hardware.

Supermicro X9SCL+-F
2 x 8GB ECC unbuffered RAM
Intel i3-2120 3.3GHz
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Are you still running the NICs aggregated? If so, unbond them, plug one into a normally configured port, and see what you get. Those UDP results are bizarre.
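
If you want to try that quickly from the console before changing the permanent config in the GUI, something like this should do it (a sketch only; lagg0, re0 and the address are assumptions, and the change won't survive a reboot):

Code:
# Look at the current aggregation (protocol, member ports, active links)
ifconfig lagg0
# Tear it down for the test
ifconfig lagg0 destroy
# Bring a single NIC up on its own with a temporary static address
ifconfig re0 inet 192.168.1.50 netmask 255.255.255.0 up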
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi pwanage,

That's almost the exact same system I'm having delivered this week; the only difference is I'm getting an i3-2100 (gotta love those Microcenter processor prices). I don't know how many times faster the i3-2120 is than your Atom....double, triple or more, but I suspect the updated hardware will leave you pleased with the performance.

One suggestion: 8GB unbuffered ECC DIMMs are still fairly expensive. If you plan on only using 16GB, save the $60.00 premium and just get four 4GB sticks. Either way, I like Superbiiz.com for memory; 8GB ECC UDIMMs run ~$70-90 there depending on brand. Be sure to check the Supermicro memory compatibility list found here: http://www.supermicro.com/support/resources/mem.cfm to be sure your choice of DIMMs is supported.

I use one of these boards for an ESXi Server (actually the same one you picked) and I really like it.

-Will
 

pwanage

Cadet
Joined
Jul 13, 2012
Messages
5
Reran iperf; it eats at most 80% CPU but hovers around 60%-70%, running just one thread for 200 sec.

I deleted the aggregation and ran iperf and a simple file transfer. iperf still eats CPU, with the same results as before (70.1 MB/s). AFP performance was the same, though there does seem to be a lot more variation in sustained speeds; it jumps a lot more, between ~40 MB/s and 20 MB/s. The highest I was able to get before was ~50 MB/s; now I can't break 41 MB/s.

Oh, my UDP results were my fault, I think. There is an option for UDP bandwidth that I left at 1 MB/s at first; I found that bumping that to 300 yielded 30-40 MB/s. I'm guessing that even higher values will yield faster rates, to a degree.
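
For anyone repeating this, the rate is set on the client side with -b (a sketch; the hostname is a placeholder):

Code:
# iperf's UDP mode defaults to roughly 1 Mbit/s; raise the target rate explicitly
iperf -c freenas.local -u -b 300M -t 60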

Yeah, ECC RAM is $$$$$. I found those sticks on Newegg for, I think, $70-$80; I believe they were Kingston. Not too bad, I would think.
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Tested my TCP connection with iperf.

OS = FreeNAS 8.2 x64
CPU = e-350
NIC = Realtek 8111

connected to..

OS = Ubuntu 12.04 x64
CPU = i5 2500
NIC = Realtek 8111

Got 923Mbits/sec
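
For scale, converting that to the units used earlier in the thread (a rough calculation):

Code:
# 923 Mbit/s divided by 8 bits per byte
echo $((923 / 8))   # ~115 MB/s, essentially gigabit wire speed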

Guess my bottleneck is the 7200rpm HDD in my desktop.
 