CPU effect on freenas performance? Best low powered CPU for home use?

Status
Not open for further replies.

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I get defensive because every one of these threads ends in one of three ways:

1. Something is broken with the hardware
2. Someone thinks they can tweak the system to make it even faster (even though they have sufficient hardware and good performance) and they kill their server's performance, because you can't do some bland tweaks and get more performance for free. If it worked for most people, it would be the default. I know, shocker, but it's true.
3. Someone really thinks that "their" system will actually use a tweak and eke out extra performance. Everyone that falls into this group always ends up with the same answer... /sigh. Nonetheless we get these threads every 2-3 days anyway.

I got nothing in terms of tweaks. Based on other people with less than 4GB of RAM, the system may become unstable at any time, or maybe never. If it becomes unstable, your data may be at risk. It's not an issue that can be fixed with some tweaks. You are literally below the minimum hardware requirements. Windows 2000 was smart in that it gave you a big fat warning (XP wouldn't even let you install with <128MB of RAM!), and I'm not sure about Vista/7/8, since I'd never consider running any of those with anything less than well above the recommended amount (not the minimum).

It really disappoints me that you even admit to reading my guide but still ignored all 3 warnings that lots of RAM is necessary for ZFS. I created the guide to help people NOT screw up the stuff that turns into a new thread every other day. Yet people ignore my guide and the manuals, and then have a shocked look on their faces when it doesn't work out. Surprise: the guys that invented the OS know exactly what you need. They aren't out to keep the hardware manufacturers in business. I created the guide to save time and money (my time as well as your time and money), but it fails so often it is a disappointment. There's a reason why I don't bother posting my other presentations... too many people ignore them.

Your only real option is to switch to UFS, which may or may not help enough (that CPU is mega old), or buy new hardware. Even a used first-gen i3 system with 8GB of RAM would be a major, major step up. The i3 system I manage for a friend can saturate 2x 1Gb LAN ports simultaneously. If money is an issue, I'd look for an old i3 desktop someone doesn't want, upgrade it to 8GB (or 16GB) for cheap, and enjoy your new ultra-fast server. I have zero experience with AMD CPUs, so I can't recommend an AMD equivalent, but I'd think any AMD CPU with 4 cores that supports at least 16GB of RAM will be fine. Higher MHz is better for CIFS.

I'm really not sure why people keep thinking hardware that is literally a decade old (Wikipedia lists the release date for that CPU as 2003!) can do anything useful with current OSes; it's a little crazy. I'd be scared to even run XP SP3 on it because of all of the overhead SP3 adds.

Also, I'm not sure why you necro'd a thread that is a year old...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I didn't answer your question on the apples-to-apples comparison. There really isn't one out there, because nobody wants to spend the time and money to do the comparison. I'd guess most people wouldn't build a server from hardware that is more than 3-5 years old unless it was already in production. Even 3-year-old hardware is dirt cheap to acquire via Amazon and eBay.

My personal "dummy test" to determine whether a CPU/motherboard should work well for ZFS: "Does it support a minimum of 16GB of RAM?"

If it can't use (note I said use, not just have installed) 16GB of RAM, don't expect stellar performance. My first-gen i3 can give stellar performance; I'm talking 120MB/sec+ on both NICs at the same time. But if you look at the Intel Atoms (they can't use more than 4GB or 8GB depending on the model), don't expect stellar performance. They can do 25-50MB/sec depending on load type, CPU and NIC used. They are very low power though!
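
To make the rule of thumb concrete, here's a toy Python sketch of that dummy test, using the rough throughput ranges from this post (the numbers are forum anecdotes, not measured guarantees):

[code]
# A toy version of the "dummy test" above. The throughput ranges are
# the rough anecdotal figures quoted in this post, not benchmarks.
def expected_cifs_throughput(max_usable_ram_gb: int) -> str:
    if max_usable_ram_gb >= 16:
        return "stellar: 100MB/sec+ per NIC is realistic"
    return "modest: roughly 25-50MB/sec depending on load, CPU and NIC"

print(expected_cifs_throughput(16))  # e.g. a first-gen i3 board
print(expected_cifs_throughput(4))   # e.g. a 4GB-capped Atom
[/code]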
 
Joined
Apr 21, 2013
Messages
5
I have to reply to RJOD.

Can you tell us the exact specifications of your hardware? I ask because I read this thread twice and couldn't find them.

I have:
- Supermicro Atom - http://www.supermicro.com/products/motherboard/ATOM/945/X7SLA.cfm?typ=H
- 2GB of RAM
- LSI 4x SAS HBA
- Intel dual-port server Ethernet
- 6x WD Black 1.5TB

I get 67MB/s read and 60MB/s write over CIFS, and similar performance over iSCSI. This seems to be the CPU limit, as Solaris benchmarks of my ZFS pool yielded 400MB/s read and 270MB/s write. If I remember correctly, AFP performance was something like 85MB/s read and 60-ish write.

If you are getting worse numbers than I am, then either 1) you have a problem with one or more disks in your array, or 2) something is wrong with the system board/CPU/HBA, because I probably have the slowest board possible at this point.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
I have to reply to RJOD.

Can you tell us the exact specifications of your hardware? I ask because I read this thread twice and couldn't find them.

It could use clarifying, but I think it was in this post:

Well, any CPU as fast as a 3200+ A64 single core should do.
I'm running that with only 2GB of system RAM and getting 38MB/sec to and 50MB/sec from a 1TB RAID 1.

I don't have prefetch enabled due to only having 2GB of RAM, but it's much faster than my 1TB WD Worldbook NAS. That one only gets like 12MB/sec over the same network.

You guys trying to squeak by with 2GB of RAM are just asking for an "accident".....

I'm keeping a list so when you come back saying "OMG, my NAS took a dive and I lost everything, help me!" I can just click next.... :rolleyes:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm keeping a list so when you come back saying "OMG, my NAS took a dive and I lost everything, help me!" I can just click next.... :rolleyes:

I do too. It's called the "ignore list". :)

And seriously, this thread was necro'd from April 2012! Why are we even still chatting in this thread about anything?
 
Joined
Apr 21, 2013
Messages
5
I'm keeping a list so when you come back saying "OMG, my NAS took a dive and I lost everything, help me!" I can just click next.... :rolleyes:

Mine consists of 3 mirror sets. There is no magical flaw in ZFS that will cause it to destroy your drives if you don't have a ton of memory. Having memory is a performance optimization, not a functional requirement. As long as you have server-grade parts and ECC memory, everything will be just fine. All writes to ZFS are synchronized unless you went and disabled it, and even if you do disable it, that only means the mirror writes are not synchronized; the write itself (from an integrity standpoint) is synchronized. You can't lose data unless you unplug the damn thing. Having an SSD write cache is the only added protection against hardware failures mid-write.

I've run this same piece of hardware for almost 4 years with Solaris, OpenSolaris, and OpenIndiana (after OpenSolaris was discontinued) without any problems other than drive failures.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Mine consists of 3 mirror sets. There is no magical flaw in ZFS that will cause it to destroy your drives if you don't have a ton of memory. Having memory is a performance optimization, not a functional requirement. As long as you have server-grade parts and ECC memory, everything will be just fine. All writes to ZFS are synchronized unless you went and disabled it, and even if you do disable it, that only means the mirror writes are not synchronized; the write itself (from an integrity standpoint) is synchronized. You can't lose data unless you unplug the damn thing. Having an SSD write cache is the only added protection against hardware failures mid-write.

I disagree. People with insufficient RAM have problems with kernel panics. Sun recommended no less than 6GB of RAM when ZFS was first released, the reason being that ZFS was tuned to use RAM responsibly if you have 6GB+ of system RAM. Notice they made no comment for less. Now, I have not found any reason to suggest this should be different for FreeBSD, though NAS4Free seems to play better with less RAM. Anyway...

So here's why kernel panics are very bad for ZFS. ZFS works like a transactional database. Anyone know what happens when a transactional database is turned off during a write that is in progress (and some even if a write isn't in progress)? Yep. You're running tools to try to fix the corruption before you can use that product/service again.

For SQL you may not be able to use the database until you run some corrective tools, and at the potential loss of data.

For Microsoft Exchange Stores you may not be able to use the store until you run eseutil, and at the potential loss of data.

For ZFS, you may not be able to mount the zpool until... oh crap, there is no tool to run! Remember, you can't scrub a pool until it's mounted. Better go to backups!

So while people complain about performance with 4GB of RAM (or less) regularly, I don't really care about that too much (aside from the fact that the forum is filled with them now, so searching is difficult). It's spelled out in the manual that performance will be less than stellar. My problem is that if the system is kernel panicking, you run the very real risk of having a zpool that won't mount, and you have no tool to run to fix it. If it doesn't mount, you try a bunch of command line arguments and then give up. And judging from the number of people that do backups on the forums, I'd bet less than 20% of forum users actually have a good solid backup of their zpools at any given time (it's probably less than 5% realistically). Additionally, if you don't use UPSes and one server goes down from a loss of power, you can bet the other just did too. So now you have 2 zpools with your data, but there is the slight chance that neither will be mountable.

Now consider the people that don't know all this "insider info". They're gonna be pissed that they just lost their family pictures, wedding pictures, tax documents, and other irreplaceable personal stuff. Do you think they're going to turn around and spend hundreds of hours doing homework to figure out why everything went to crap? They didn't do their homework: they may not have chosen to use enough RAM (the forum has lots of users that have lost everything from kernel panics), and they may not have chosen to use a UPS (the forum has lots of users that have lost everything because of a loss of power). So explain to me why you think they're gonna care more now that they have no data left to protect and are fuming mad. They're going to point the blame at FreeNAS, call it a crappy project that shouldn't be trusted with data, and go somewhere else. And yes, people HAVE created threads that said "I just lost everything... FreeNAS isn't ready for primetime".

You know what my biggest fears are with my FreeNAS server? In order...

1. Kernel panic
2. Loss of power
3. Failure of multiple drives in quick succession (obviously faster than I can replace them)
4. PSU failure that destroys enough drives to prevent recovery on a new machine
 
Joined
Apr 21, 2013
Messages
5
You know what my biggest fears are with my FreeNAS server? In order...

1. Kernel panic
2. Loss of power
3. Failure of multiple drives in quick succession (obviously faster than I can replace them)
4. PSU failure that destroys enough drives to prevent recovery on a new machine

I don't understand the correlation between kernel panics and the quantity of RAM. This boggles me, since swap space and years of kernel development should handle everything just fine.

A good way to mitigate #3 is to use varying drive versions/manufacturers. This may slow down the array, but it provides variance in firmware and hardware, so multiple drives won't easily fail from the exact same flaw.

Doing it right costs a lot of money. Many probably come here looking for cheap and easy. They don't realize that doing it properly might cost upwards of 60-70% of the price of an enterprise Drobo. In theory, everyone should have an HBA with a built-in power backup.

An interesting idea: that the overall likelihood of a major catastrophic meltdown during a power outage increases with the amount of RAM, due to the huge in-memory write buffer. I remember this from iSCSI, where a write buffer increases write performance but decreases overall stability in the case of a hardware/power failure. These are all complex technologies that shouldn't be taken for granted.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I don't understand the correlation between kernel panics and the quantity of RAM. This boggles me, since swap space and years of kernel development should handle everything just fine.
I won't pretend to know why this is the case. I just know that by far the majority of kernel panics include a memory allocation error related to insufficient RAM. I have yet to see an insufficient-RAM kernel panic message in the forum from anyone that had 6GB+ of RAM. Allegedly ZFS tries to use 7/8ths of the total system RAM for itself; for a system with 4GB, that's just 512MB for everything else. I'm not too familiar with memory management on FreeBSD, so I'll leave that to someone else to explain.
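
As a back-of-envelope illustration of that 7/8ths figure (which is the claim quoted here, not a verified default; the actual knob on FreeBSD is the vfs.zfs.arc_max tunable and varies by version), a minimal Python sketch:

[code]
# Toy split of system RAM under the "ZFS takes 7/8ths" claim above.
# The fraction is this post's figure, not a verified default.
def ram_split(total_gb: float, zfs_fraction: float = 7 / 8):
    zfs_gb = total_gb * zfs_fraction
    rest_mb = (total_gb - zfs_gb) * 1024
    return zfs_gb, rest_mb

for total in (4, 6, 8, 16):
    zfs_gb, rest_mb = ram_split(total)
    print(f"{total:>2}GB total -> {zfs_gb:.1f}GB for ZFS, {rest_mb:.0f}MB left over")
# 4GB total -> 3.5GB for ZFS, 512MB left over, matching the post.
[/code]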

A good way to mitigate #3 is to use varying drive versions/manufacturers. This may slow down the array, but it provides variance in firmware and hardware, so multiple drives won't easily fail from the exact same flaw.

It is, but then you also have a higher risk of owning a particular firmware or hardware version that does have a flaw. I've seen both problems. What do I use at home? I bought all my drives from the same vendor at the same time.

Doing it right costs a lot of money. Many probably come here looking for cheap and easy. They don't realize that doing it properly might cost upwards of 60-70% of the price of an enterprise Drobo. In theory, everyone should have an HBA with a built-in power backup.

But an enterprise Drobo doesn't have ZFS, doesn't have the same potential for great performance, and the drives can't be dropped into any other box later (you MUST have another Drobo), etc.

An interesting idea: that the overall likelihood of a major catastrophic meltdown during a power outage increases with the amount of RAM, due to the huge in-memory write buffer. I remember this from iSCSI, where a write buffer increases write performance but decreases overall stability in the case of a hardware/power failure. These are all complex technologies that shouldn't be taken for granted.

On my systems, even when I do things to deliberately fill the zpool write cache, it only takes 1-2 seconds tops to write that data. ZFS typically flushes to disk every 6 seconds, and incoming data is typically capped by Gb LAN, which means at best about 800MB is buffered. ZFS does tweak when it flushes (and other things) based on system performance tests it runs at bootup. They aren't 100%, but they're a good guesstimate without doing very intrusive tests. Besides, if you are using a UPS, the whole issue of loss of power should be limited to situations where you lose power and the UPS fails at the same time. This is why most UPSes run their own tests weekly. It's not exactly a secret that all servers should have a UPS. That rule of thumb wasn't entirely about uptime; it was to protect the server.
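
The arithmetic behind that ~800MB figure, as a minimal sketch (assuming the ~6-second flush interval and a gigabit-limited ingest rate from this post; both are rough figures):

[code]
# Worst-case dirty data between ZFS flushes: data arrives at full
# gigabit line rate for an entire flush interval. Figures are the
# rough ones from this post, not measured values.
def max_buffered_mb(flush_interval_s: float = 6.0,
                    ingest_mb_per_s: float = 128.0) -> float:
    return flush_interval_s * ingest_mb_per_s

print(f"~{max_buffered_mb():.0f}MB at risk between flushes")  # ~768MB
[/code]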
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
The CPU on an E350 will quickly become the bottleneck for LOCAL disk access.

On my 6-disk RAIDZ2 array I'm able to hit 480MB/s in local tests. The E350 will not allow this to happen.

That being said, you don't need to be concerned with this, because local disk access is presumably not why you are building your NAS. Instead, you are going to be limited to gigabit speeds (128MB/s theoretical, ~80MB/s in practice with SMB, a little more with NFS).

Because of this, you don't need to be too concerned with the CPU.
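
In other words, effective NAS throughput is the minimum of the local array speed and the network/protocol cap; a minimal Python sketch using the rough figures from this post:

[code]
# The slower of the two paths wins. 480MB/s is this post's local
# RAIDZ2 figure; 128MB/s is theoretical gigabit, and ~0.63 is the
# implied SMB efficiency (~80MB/s). All rough forum figures.
def effective_mb_per_s(local_mb_per_s: float, link_mb_per_s: float,
                       protocol_efficiency: float) -> float:
    return min(local_mb_per_s, link_mb_per_s * protocol_efficiency)

print(f"{effective_mb_per_s(480.0, 128.0, 0.63):.1f}")  # ~80.6: the LAN is the cap
[/code]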
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The CPU on an E350 will quickly become the bottleneck for LOCAL disk access.

On my 6-disk RAIDZ2 array I'm able to hit 480MB/s in local tests. The E350 will not allow this to happen.

That being said, you don't need to be concerned with this, because local disk access is presumably not why you are building your NAS. Instead, you are going to be limited to gigabit speeds (128MB/s theoretical, ~80MB/s in practice with SMB, a little more with NFS).

Because of this, you don't need to be too concerned with the CPU.

That's a pretty hefty argument to make. In my thread discussing AES encryption performance, even a top-of-the-line Xeon can cap out at just a few hundred MB/sec, ignoring all the other overhead from ZFS, CIFS, etc. And that's a CPU that supports AES-NI. The low-powered CPUs generally don't, and with those you might be lucky to hit 100MB/sec.
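
If you want to see where your own CPU caps out, a quick probe along these lines works (a minimal sketch, assuming the third-party pyca/cryptography package; it measures raw AES speed only, with none of the GELI/ZFS/CIFS overhead):

[code]
# Rough single-threaded AES-256-CTR throughput probe. Install the
# assumed dependency with: pip install cryptography
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_mb_per_s(total_mb: int = 256, chunk_mb: int = 4) -> float:
    enc = Cipher(algorithms.AES(os.urandom(32)),
                 modes.CTR(os.urandom(16))).encryptor()
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(total_mb // chunk_mb):
        enc.update(chunk)  # encrypt one chunk; ciphertext discarded
    return total_mb / (time.perf_counter() - start)

print(f"~{aes_mb_per_s():.0f}MB/sec AES-256-CTR on one core")
[/code]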

Also, you may care about scrubbing performance for large arrays. My 18x2TB RAIDZ3 gets about 900MB/sec+ when scrubbing. Even at those speeds it takes me almost 18 hours to do a scrub.
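
For scrub-time planning, the estimate is just data scanned divided by scrub rate (a rough sketch; real scrubs slow down under load and fragmentation, so treat it as a floor):

[code]
# Rough scrub duration: bytes to scan / scrub rate. Illustrative only;
# it does not reproduce the exact pool above.
def scrub_hours(data_tb: float, rate_mb_per_s: float) -> float:
    return data_tb * 1_000_000 / rate_mb_per_s / 3600  # TB -> MB -> hours

print(f"{scrub_hours(36.0, 900.0):.1f}h")  # 36TB at 900MB/sec ~ 11.1h
[/code]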

What you really need to do is consider what you plan to use the machine for, both now and in the future, and plan accordingly. If possible, get a motherboard that allows you to drop in a CPU that supports AES-NI, has more cores, etc., so if you are forced to upgrade later you can do it by just swapping in a new CPU.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
That's a pretty hefty argument to make. In my thread discussing AES encryption performance, even a top-of-the-line Xeon can cap out at just a few hundred MB/sec, ignoring all the other overhead from ZFS, CIFS, etc. And that's a CPU that supports AES-NI. The low-powered CPUs generally don't, and with those you might be lucky to hit 100MB/sec.

Also, you may care about scrubbing performance for large arrays. My 18x2TB RAIDZ3 gets about 900MB/sec+ when scrubbing. Even at those speeds it takes me almost 18 hours to do a scrub.

What you really need to do is consider what you plan to use the machine for, both now and in the future, and plan accordingly. If possible, get a motherboard that allows you to drop in a CPU that supports AES-NI, has more cores, etc., so if you are forced to upgrade later you can do it by just swapping in a new CPU.

Fair enough. I forgot about encryption. I've never used it and have no experience with it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, encryption seems to be a major killer even with AES-NI instructions.
 