Hardware RAID 5 or ZFS

Status
Not open for further replies.

stuartsjg

Dabbler
Joined
Apr 14, 2015
Messages
18
Hello,

Not so much a query as such, just something for info/reference/discussion.

I used to have a 4-drive NAS (FreeNAS, about 4 years ago, plus hardware 3Ware RAID 5), but once those 1TB drives were full, I moved the drives back into a PC with a Dell PERC card and 6 x 1TB drives. I eventually replaced these with 2TB 5900rpm drives but left them in the PC.

Main use of the drives is media storage and editing. As well as movies, TV and music, most of the data is irreplaceable images and video. Because of the irreplaceable stuff, my stored data is backed up to two online services (Carbonite and Livedrive) and also to local offline hard drives (the old 1TB units). Just working on repointing the online services at the mapped drive - that will be fun!

A few weeks ago I decided I wanted to set up media sharing and go back to the NAS.

The base NAS is the same one I used all those years ago, a dual-Opteron Rackable Systems unit with 4 drive bay slots. The Opteron 250s were changed to Opteron 280s a few months ago, as the unit saw a few months' service as a web development server for a colleague of mine. Each CPU has 2GB of RAM, and it boots from an 8GB CF card in the IDE port.

I've installed a "new to me" 3Ware 9550-SXU-12 RAID card (eBay, £40), upgrading the old 3Ware card which came with the server; that one, with 4 x 1TB 7200rpm Maxtors in RAID 5, would only give me 50-60MB/s at the filesystem (with about 20-30% CPU usage during a read/write).

I was going to set up a 4 x 2TB RAID 5 array, but almost every forum or guide advocated using ZFS RAID and just presenting the drives as single disks or JBOD via the RAID controller. It was all convincing, so I did just that.

Once FreeNAS was installed and set up with the software RAID/ZFS configuration, I was again seeing 50-60MB/s at the filesystem. As I had nothing to lose, I scrapped that setup, went back to my original hardware RAID plan, remounted everything, and was then seeing 95-105MB/s at the filesystem (with about 5-10% CPU usage during read/write).
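For reference, the kind of quick-and-dirty sequential throughput check I mean can be sketched with dd. This is only a rough sketch, not a proper benchmark; the TESTFILE path is a placeholder you'd point at a file on the pool (e.g. under /mnt), and unless the file is much larger than RAM, the ARC/page cache will inflate the read number:

```shell
# Hedged sketch: rough sequential write/read check with dd.
# TESTFILE is a placeholder - on a real NAS point it at the pool.
TESTFILE=${TESTFILE:-/tmp/dd_testfile}
# Write 64MB in 1MB blocks; for a real test use several GB to defeat caches.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64
# Read it back; the OS cache will flatter this unless the file exceeds RAM.
dd if="$TESTFILE" of=/dev/null bs=1M
```

dd reports its own throughput figure on completion; comparing the same command across the hardware-RAID and ZFS configurations is what produced the numbers above.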

This is what I had hoped to see. The drives on the PERC unit internally would sustain 80-90MB/s, though they did have a good burst speed.

Net result: hardware RAID 5 on the 3Ware card in the FreeNAS box runs faster than the PERC setup in my PC, and is also faster than the ZFS software RAID setup.

It won't be the same for everybody, but I think I'll stick with the hardware RAID setup for my system.

Points to note:
Many guides say to use software RAID for compatibility with any hardware, e.g. controller failure, system upgrade etc. Many of the same guides suggest setting up each disk as a single-disk RAID0 (one per drive) as opposed to JBOD, to take advantage of the controller's cache.
This is contradictory because, for example, a disk set up as a single-disk RAID0 on the 3Ware controller won't work on my PERC card (though it does work on 2 other 3Ware controllers I tested).
So, if you want to use software RAID because you're worried about hardware compatibility in the future, you will need to use JBOD or a native controller, not a multiple single-disk RAID0 setup.
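The portability argument only works if the disks carry their own metadata rather than the controller's. A sketch of what that buys you with ZFS, assuming a hypothetical pool named "tank" on raw disks behind a plain HBA (pool and device names are assumptions, not from the thread):

```shell
# Hypothetical sketch: a ZFS pool built on raw disks (no RAID0 wrapper)
# stores its own labels on the drives, so any ZFS-capable host can adopt it.
zpool export tank    # cleanly detach the pool on the old machine
# ...physically move the drives to the new machine, on any plain HBA...
zpool import         # scans attached disks and lists importable pools
zpool import tank    # attach the pool, regardless of the original controller
```

A pool wrapped in per-disk RAID0 volumes, by contrast, only imports where that exact controller family can reassemble its metadata, which is the incompatibility described above.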

I had thought the software RAID was slow due to the CPUs not being particularly speedy or modern, but as they were barely being used, that's not really likely to be the issue?

I know the old Rackable units are perhaps not the most power-efficient for a NAS, but when I priced up building a low-power unit, I found I could pay for many years of the extra electricity, by which time I'll probably have replaced it anyway. Depending on how we get on with Plex transcoding (see next post by moi) I may remove a CPU, as this is how I used to run it (with all the RAM moved to the remaining CPU).

Hope this can be of some use/reference to some people.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
FreeNAS isn't for repurposed stone-age servers. Especially not those running on only 4GB DDR or DDR2 RAM -> slow bus speed.

Also, don't use raidz/RAID 5 for important data. raidz2 is where it starts. Performance is still well above Gigabit with a proper system.
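For anyone following along, moving from raidz to raidz2 is a pool-creation decision, not a tunable. A minimal sketch, assuming a hypothetical pool name and FreeBSD-style device names (both are placeholders, not from this thread):

```shell
# Sketch only - "tank" and da0..da5 are assumed names.
# A 6-disk raidz2 vdev survives any two simultaneous drive failures
# and yields roughly 4 disks' worth of usable capacity.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zpool status tank    # confirm the vdev layout and health
```

Note that a raidz vdev's width and level are fixed at creation, which is why the advice is to start at raidz2 rather than plan to upgrade later.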
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
... Look, I don't mean to be a jerk here, but you're going to get ripped apart when the forum regulars see this. (You guys know who you are ... be nice, OK? It sounds like he's been led astray by bad blogs.)

You're running with 4GB of RAM, on a RAID card, with "5900rpm" - probably "green" - drives, and judging by the references to "single drive RAID0" have been fed some really bad info about proper configurations from well-intentioned-but-ultimately-incorrect blog postings.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Okay, so let's do this.

Many guides say to use software RAID for compatibility with any hardware, e.g. controller failure, system upgrade etc. Many of the same guides suggest setting up each disk as a single-disk RAID0 as opposed to JBOD, to take advantage of the controller's cache.

Those guides are 1) wrong and 2) dangerous to your data.

You don't want a RAID controller anywhere near a ZFS filesystem because it will interfere with ZFS's ability to manage the drives. You already saw this with respect to the 3Ware/PERC controllers not recognizing each other's RAID0 setups (and why would they?) but it also has negative implications for both performance and reliability.

ZFS expects to be able to control writes to the disk, and a RAID card with its own cache will get in the way of that. If ZFS issues a "flush cache" command to what it thinks is a disk, the RAID controller might 1) ignore it, resulting in data not being committed right away 2) flush its battery-backed RAM, causing stalls and poor performance, or 3) one of the above randomly depending on how it feels, the phase of the moon, and whether or not it ate its Wheaties that morning.

You also won't be able to see SMART data from the disks, only what the RAID card exposes as "hey I'm healthy" or "hey I'm not."
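To make the SMART point concrete: smartmontools can sometimes tunnel through a 3ware card, but you have to address the physical port explicitly. A sketch, with device paths and port numbers as assumptions:

```shell
# Sketch, assuming smartmontools is installed; paths are examples only.
# A disk attached directly to the host exposes full SMART data:
smartctl -a /dev/ada0
# Behind a 3ware controller the query must be passed through the card,
# naming the physical port (twa0 and port 0 here are assumptions):
smartctl -a -d 3ware,0 /dev/twa0
```

Even where the pass-through works, it is per-port and controller-specific; plain HBA attachment gives ZFS and the SMART tools an unobstructed view of every disk.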

I had thought the software RAID was slow due to the CPUs not being particularly speedy or modern, but as they were barely being used, that's not really likely to be the issue?

See above re: the silly things RAID controllers will do to the ZFS data path. You also have way too little RAM; 4GB is below the minimum specification, and we just helped another user recover from a "not enough memory" situation which thankfully only caused a kernel panic and not massive data loss (we hope; he's still checking!)

Edit: Oh yeah, the 5900rpm drives. If those are regular "Green" drives that have a habit of spinning down when idle, you need to fix that. Otherwise they might be in standby when the system asks for data from them, and if they take too long to spin up, seek to the right spot, and deliver the data, the ZFS system or the RAID card - whichever you've got going on - might decide "whoops, this disk isn't responding" and mark it as offline/degraded. Bad news.
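Related to the "disk takes too long to answer and gets dropped" failure mode: on drives that support it, SCT Error Recovery Control (TLER) caps how long the drive grinds away at a bad sector before reporting an error. A sketch with smartmontools, device path assumed:

```shell
# Sketch - /dev/ada0 is a placeholder. Check whether the drive supports
# SCT Error Recovery Control and what it is currently set to:
smartctl -l scterc /dev/ada0
# Cap recovery at 7.0 seconds (values are in tenths of a second) for both
# reads and writes, so a struggling drive answers or fails fast instead
# of hanging long enough to be marked offline:
smartctl -l scterc,70,70 /dev/ada0
```

Many desktop "green" drives either don't support SCT ERC or reset it on power cycle, which is one reason they're considered fussy in RAID and ZFS arrays.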

Hopefully we haven't scared you off here. Please understand I'm not saying this to be mean. Unlike the ones on YouTube, this Honey Badger does care ... about the reliability and safety of your data. ;)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yes, you're going to get ripped apart. In fact, I would, but I don't feel like it today. And to save the rest of the forum the hassle, the thread is now locked.

You've done so many things wrong, I don't know where I'd begin. But I know where it's ending. Right now.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Yes, you're going to get ripped apart. In fact, I would, but I don't feel like it today. And to save the rest of the forum the hassle, the thread is now locked.

You've done so many things wrong, I don't know where I'd begin. But I know where it's ending. Right now.

Hey now, I posted a good reply here to head this one off. Don't make me feel like I wasted all that time.
 

stuartsjg

Dabbler
Joined
Apr 14, 2015
Messages
18
Hello cyberjock

I was composing a reply before you closed the thread; perhaps you'd be able to post it for reference & completeness?
_________

All points very interesting and taken on board. The issue I was finding is the numerous sources which advocate setting up the likes of a single-disk RAID0 per drive and then putting a software RAID on top of that, which certainly appears to be a bad idea, with the proof of the pudding being what works best and what doesn't.

The aim of my post was to act as a reference for others searching for the same thing as me and not finding it.

I think all the posts were very valid in saying not to mix hardware and software RAID - do it one way or the other, but not both (there will no doubt be circumstances where one would want to) - that seems to be the general gist of things. The plan is to add another 2 drives to the larger enclosure and convert to RAID 6, as many moons ago it was scary going through a drive rebuild caused by a dislodged cable. I don't like online upgrades, so it will likely be a complete rebuild.

One of the whole points many "set up your own NAS" pages & posts try to achieve is rescuing old hardware and repurposing it as opposed to scrapping it. As I'm running low on disk space, I need a larger rackmount case, which is already on order as it was a good buy. The recommended boards are likely out of my price range (else I'd perhaps rather just buy an off-the-shelf NAS unit!), but there are many to choose from.

The "green" drives were bought against my better judgement at the time but turned out to be fine for my needs. I've never known them to spin down, and they have been happy with the PERC unit for about a year, even though I read they are known to be fussy.

Feel free to close the thread if you wish; it's served its purpose in hopefully warning others against mixing hardware and software RAID, with some filesystem benchmarking for reference.

Thanks,
Stuart
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
One of the whole points many "set up your own NAS" pages & posts try to achieve is rescuing old hardware and repurposing it as opposed to scrapping it.

I've seen these posts crop up on LifeHacker and other blog circles, and every time I see one I have to post "STOP. NO. NOT WITH FREENAS/ZFS YOU DON'T."

You can certainly rescue old hardware and give it new life as a NAS. However, that new life may be as a Linux/ext4 machine, because it's nowhere near beefy enough to run FreeNAS/ZFS in a stable and safe manner.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
One of the whole points many "set up your own NAS" pages & posts try to achieve is rescuing old hardware and repurposing it as opposed to scrapping it.
That's nas4free, OMV, xpenology, that lot.

FreeNAS is an enterprise-derived, ZFS-only storage platform which needs good hardware to work well. There are lots of threads around in this very subforum telling you to leave that old clunker in the shed and buy a newer server system.
 

stuartsjg

Dabbler
Joined
Apr 14, 2015
Messages
18
I did start with nas4free, but I found FreeNAS much better developed; it installed trouble-free, so it was the sensible route for me.

Good hardware is the plan, but I wanted to prove that going back to a NAS was the route I wanted to go down, and it certainly does appear to be. I work on the theory that if it does what I want with the old clunker (I may change the server's name to this!), then spending some pennies (not the Urban Dictionary definition of the phrase) on bringing things up to date is what I need to do - more so if any issues (such as my Plex one) can be attributed to things currently being vintage.

Cheers for the pointers.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, you're going to get ripped apart. In fact, I would, but I don't feel like it today. And to save the rest of the forum the hassle, the thread is now locked.

You've done so many things wrong, I don't know where I'd begin. But I know where it's ending. Right now.

That didn't exactly stop the thread. It just made it a mess. I've fixed that, more or less.

Please do remember that some of us don't mind being a little gentler with the newbies, and if you don't feel like it today, let others shoulder the load. I understand that it's a thankless and repetitive task, but that's why it's great to have many competent posters helping out.

Note to OP: I think you already know that your described "FreeNAS 9.3 running on a Rackable Systems unit with 2 x Opteron 280 + 2GB RAM/CPU = 4GB RAM total, 3Ware 9550-SXU-12 card in a hardware RAID 5 setup (64kB stripe) with 4 x 2TB 5900rpm drives" is a disaster waiting to happen, but I'll repeat it just to be sure.
 