Considering ZFS But Have Some Questions

Status
Not open for further replies.

Toadlips

Dabbler
Joined
Jan 29, 2015
Messages
20
Good afternoon all,

Just to give you an idea where I'm coming from, I stumbled upon FreeNAS and ZFS while looking into a way to checksum and verify the integrity of my archived data (home/personal use -- maybe 6TB total if I give plenty of room for expansion). My thought was that I would either create or use an existing solution to put text files with MD5 checksums in every directory that has "important" documents in them. That way, I could set up some automated task to read the MD5 files, check the files, and let me know if there was any kind of discrepancy.
From what I've read so far, ZFS does this and more automatically as part of its normal operation. This sounds really good to me, but then I've read some other things that make me nervous. Let's just get this out of the way -- I'm convinced that my NAS build will include ECC RAM, so no worries there. However, in my above "manual MD5" scenario, I don't think bad RAM would be as likely to threaten the entire store. Bad RAM would be discovered during the verification process, or maybe while writing a file and then trying to read it back, but it would be less likely to do more damage than that, because the scheme lacks any kind of self-healing feature.
It makes me wonder if there are any other scenarios, aside from bad RAM, that would be more damaging on ZFS than they would be on a traditional file system -- power failures, a bad software update, etc. A bad software update is (hopefully) unlikely as long as I stick with a stable release that's been in use for a while, but I can see some power failures happening. Is ZFS less resilient than other file systems in this respect? Is it really "all or nothing" if some areas of the disk have been compromised?
A lot of what I read says, basically, that the NAS should not be your only backup (i.e. "You do have backups, right?"), but the truth is that this will be my backup for files that are on my computers at home, and I'd like it to be reliable. If ZFS is as likely to fail as a "normal" filesystem like NTFS/EXT3/4 due to power issues, etc., but there are no tools to recover the files from a corrupted ZFS store, then I'm thinking that I might actually be better off with a more traditional filesystem since I cannot guarantee completely against power failures.
Thanks for any insight into this!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You're going to be *very* hard-pressed to do that kind of analysis for other failure modes. The reason is that the way ZFS works on FreeBSD and the way that file systems are accessed and used on Windows with NTFS are totally different. So unless you can find someone that is an expert at the internal workings of both OSes AND their file systems, you're not getting useful data.

You are correct that NTFS is less susceptible to bad RAM, but it is not impervious. Every time I've seen bad RAM in a system the file system was totally trashed and useless for both ext3 and NTFS. No doubt things such as a SAS/SATA controller going haywire, bad firmware on the SAS controller, and other things can totally destroy your data in ways that are unrecoverable. This is precisely why backups are important.

If you are going to forgo backups, things get complicated because:

1. EXT(x) and NTFS don't validate their data on their own, nor do they provide any ability to rebuild themselves to 100%, so a few bytes out of place on those can devastate all of your data as they don't have the parity and checksum reconstruction that ZFS has.
2. ZFS can pretty much defend itself from corruption 100% of the time, with one exception: corruption that exceeds ZFS's ability to correct. Things often go terribly wrong when you hit that spot, and ZFS may end up unable to access your data at all. If this happens you've got 100% data loss, since there are no ZFS recovery tools.
3. Things like a PSU that overvolts the components, failed SATA/SAS controller, etc do happen and it will wipe out your data on a permanent basis regardless of your choice of file system. Recovery from these can be quite expensive. Backups are super cheap (literally a steal) when you have to do this kind of recovery.
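As a toy illustration of point 1 (this is not ZFS's actual on-disk layout, just the principle), a per-block checksum detects a silently flipped byte, and a single XOR parity block is enough to rebuild it:

```python
import hashlib
from functools import reduce

def xor_parity(blocks):
    """XOR equal-length data blocks together, as single-parity RAID does."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def checksum(block):
    return hashlib.sha256(block).hexdigest()

# Write path: store data blocks, their checksums, and a parity block.
blocks = [b"important ", b"archived  ", b"documents "]
sums = [checksum(b) for b in blocks]
parity = xor_parity(blocks)

# A byte flips silently on disk in block 1.
blocks[1] = b"archivXd  "

# Read path: the checksum catches the corruption...
bad = [i for i, b in enumerate(blocks) if checksum(b) != sums[i]]
assert bad == [1]

# ...and the damaged block is rebuilt from parity plus the surviving blocks.
i = bad[0]
survivors = [b for j, b in enumerate(blocks) if j != i]
blocks[i] = xor_parity(survivors + [parity])
assert checksum(blocks[i]) == sums[i]  # healed
```

ZFS does this at scale with its own checksums and RAIDZ parity; the point is that detection plus redundancy enables repair, which NTFS and EXT(x) simply don't have for file data.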

All that being said, I'm using ZFS for data that I have no backup of. I ditched NTFS after having small amounts of corruption that led to large quantities of data loss. If you want to go this route with ZFS and without a backup, I'd definitely do RAIDZ3 and follow all of our recommendations to the letter to ensure the highest chance of having a pool that lasts for many years. But remember... RAIDZ(x) is not a substitute for backups, and you are still taking some risks.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ZFS is designed to be more resilient than traditional filesystems.

A traditional filesystem that reads and writes data through a RAID controller with faulty RAM will quickly shred your data and not even notice.

ZFS, being your RAID controller, is smart enough to notice. However, it is also designed with an underlying assumption that the computing platform is trustworthy. Omitting things such as ECC memory compromises the integrity of the host system. For systems like Windows, this merely means a blue screen of death, and with a reboot you might be OK or you might not, depending. With ZFS, though, it could potentially mean that bad data is written to the pool, which is bad, because ZFS is complicated enough that there aren't many recovery tools. If you damage the pool, it could be that the only option is to copy the data off the pool, destroy and recreate the pool, and reload the data.

For a properly designed ZFS system, it is incredibly unusual for this to be an issue. However, many home users come in here and try to shoehorn ZFS onto some random collection of crap they already have, often including non-ECC systems and/or hardware RAID controllers, and that's bad.

A properly designed and configured ZFS system is more likely to be resistant to damage than an NTFS or ext3/4 system. A poorly designed ZFS system is at least as risky as a poorly designed NTFS or ext3/4 system, probably moreso, and the "moreso" is because the person deploying a poorly designed ZFS system assumed it was some sort of computer magic. It isn't. It's computer science, applied.

That said, there's still risk. Bugs: it's software, and a bad line of code is a hazard. Unexpected and unhandled conditions are hazards. A healthy skepticism is just that ... healthy. And hardware can fail spectacularly and toast it all. Think of a bad power supply unit.

Lots of people are using ZFS so the likelihood of a bug killing your pool on a well-designed system is relatively low. We don't see that happen in practice. We do see poor design kill pools. Use a RAID controller which hides drive failures from the FreeNAS host: Fail. Don't bother setting up SMART monitoring of your hard disks to get proactive notification of problems: Fail. Don't use ECC and probably also fail to burn in your system to detect bad RAM: Fail.
 

Toadlips

Dabbler
Joined
Jan 29, 2015
Messages
20
Thank you both for your insights. I'm probably overthinking this and focusing on details that really don't matter in the grand scheme of things. You're both right -- there are other failure modes that are just as likely to happen, such as a surge or a bad component that takes out the entire array regardless of the file system. Technically speaking, I should have a backup of all the important files on other computers; they're in the same house, though, which is another problem. ZFS seems to fit the bill with its guarantee of archive integrity, and that's the most important thing to me.

My main concern now is "how many drives" and what configuration. I don't want to get too crazy, as I don't have an unlimited budget. I've been looking over the guides and now I'm going to put together a list of hardware and compare total cost per GB and try to figure it out. 3TB WD Red drives seem to be hitting the sweet spot at $38/TB, with the 4TB a close second at $41/TB. I'll probably harass you guys a little more once I come up with some hardware and configuration options.

Thanks again for your help!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, the 3TB drives are usually nonsensical. Recompute your costs adding in the cost of the entire NAS platform, i.e. the cost per usable TB delivered.

See for example the discussion thread at https://forums.freenas.org/index.ph...tb-in-raidz2-considerations.17918/#post-97210

Your base system without drives will probably end up costing $500-$800. That ends up swamping minor price differentials between the drives. Let's lowball it at $500 and use RAIDZ2:

4 x 3TB @ $110 each is $440, total system cost is $940 for a 6TB system, or $156/TB.

4 x 4TB @ $160 each is $640, total system cost is $1140 for an 8TB system, or $142/TB. <<== Low price per TB winner
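The arithmetic above reduces to a tiny helper. The $500 base cost and RAIDZ2's two parity drives come straight from the post; the rest is plain division:

```python
def cost_per_usable_tb(base_cost, n_drives, tb_per_drive, price_per_drive, parity=2):
    """Total system cost and cost per usable TB for a RAIDZ(parity) vdev."""
    total = base_cost + n_drives * price_per_drive
    usable_tb = (n_drives - parity) * tb_per_drive
    return total, usable_tb, total / usable_tb

# 4 x 3TB @ $110 vs 4 x 4TB @ $160, on a $500 base, in RAIDZ2:
print(cost_per_usable_tb(500, 4, 3, 110))  # ($940 total, 6TB usable, ~$156.67/TB)
print(cost_per_usable_tb(500, 4, 4, 160))  # ($1140 total, 8TB usable, $142.50/TB)
```

These match the $156/TB and $142/TB figures above, give or take rounding, and make it easy to plug in other drive counts and prices.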
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
Best price per TB that I've seen now is the 6TB disks, which work out to $33.33/TB.


There are also the 8TB drives for $250-260, or about $32/TB, but those are the new shingled disks, and their write performance will not be equal to traditional perpendicular-recording disks, so you have to consider whether that is OK for your usage.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
While you are creating your budget, DO NOT forget a UPS. A power failure is simply an improper shutdown, and improper shutdowns can and do lead to data corruption.
 

Toadlips

Dabbler
Joined
Jan 29, 2015
Messages
20
Thanks for all of the tips & suggestions, guys. I already have a decent case and a good UPS, so I should be OK there! Here's my proposed system:
Intel Core i3-4160 @ 3.6GHz (104.94)
SuperMicro X10SLL-F (183)
EVGA 600B 600 Watt ATX Power Supply (44.99 after rebate)
Crucial CT2KIT102472BD160B 1600MT/s (PC3-12800) DR x8 ECC UDIMM (169.99) (Crucial's website says it's compatible with the X10SLL-F)

That's $503 without hard drives.

The best price I can find for WD Red drives is $114/3TB and $160/4TB. I plan on using RAIDZ2, so...
If I do 5x3TB drives=9TB, total system cost is $1068.35, or $118.71/TB
4x4TB drives=8TB, total system cost is $1138.31, or $142.29/TB

So at 5+ drives, the 3TB drives are my better $/TB value. Are there any benefits to using fewer disks that I may not have thought of? I was considering going for broke and getting 6x3TB=12TB, total cost is $1182.35, or $98.53/TB...

At 6 drives, I wonder how much that would increase the potential failure during rebuild, etc.

I'm about to start buying some hardware! Then, I need to read some manuals! ;)
 
enemy85

Guru
Joined
Jun 10, 2011
Messages
757
That's $503 without hard drives.

The best price I can find for WD Red drives is $114/3TB and $160/4TB. I plan on using RAIDZ2, so...
If I do 5x3TB drives=9TB, total system cost is $1068.35, or $118.71/TB
4x4TB drives=8TB, total system cost is $1138.31, or $142.29/TB

but if you do 5x4TB drives=12TB, total system cost is $1298.31, or $108.19/TB

so it also depends on how much disk space you want ;)
 
Joined
Mar 6, 2014
Messages
686
5 drives for RAIDZ2 is not recommended. Read this.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
5 drives for RAIDZ2 is not recommended. Read this.

That info is out of date. If you are using compression (which defaults to on, using lz4, and lz4 is basically a free lunch), that rule of thumb goes out the window.
 
Joined
Mar 6, 2014
Messages
686
That info is out of date. If you are using compression (which defaults to on, using lz4, and lz4 is basically a free lunch), that rule of thumb goes out the window.
Wow... I really did not know that. I've seen those recommendations quite a few times. Thanks, cyberjock. Is there somewhere I can go for more info on this?

Or, more precisely, it kills you less. Some of the interactions between things are nonobvious.
Like...?
 


Toadlips

Dabbler
Joined
Jan 29, 2015
Messages
20
Thanks for the info, guys! Yes, I decided to go for (6) 3TB drives for a total of 12TB in RAIDZ2. It's more space than I originally anticipated, but this should allow me to consolidate everything I have.

Tonight is FreeNAS Eve, which is the night before my hardware arrives! Thanks, Rilo, I will definitely follow the burn in guide. It will buy me some time to get up to speed with everything and to really figure out exactly what I want to do with this thing. There are a lot of options for system configuration, and I don't want to configure myself into a corner.

The only thing lurking in the back of my mind is that the i3-4160 may not be compatible with the X10SLL-F without a firmware upgrade. An answered question on Amazon said that they were shipping v2.0 of the firmware, though, and that appears to be the latest...so I should be OK! We shall see...
 
Joined
Mar 6, 2014
Messages
686
The only thing lurking in the back of my mind is that the i3-4160 may not be compatible with the X10SLL-F without a firmware upgrade. An answered question on Amazon said that they were shipping v2.0 of the firmware, though, and that appears to be the latest...so I should be OK! We shall see...
If not, you can upgrade it yourself quite easily.

Have fun reading and learning: read ALL of the build, test & burn-in guide and follow ALL the tips given there.
Another tip: when doing your testing, RTFM :eek: front to back, making notes on the subjects you might use, and search the forum & Google (and ask) about everything that is not clear. You will benefit from that later.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I bought my X10SL7-F on eBay about two months ago and it came with the 2.0 BIOS; no problem with an i3-4360 ;)
 

Toadlips

Dabbler
Joined
Jan 29, 2015
Messages
20
Yes, indeed! The X10SLL-F arrived from Amazon with the 2.0 BIOS, so I didn't have any trouble with the i3-4160! I'm not opposed to firmware updates, but I didn't have an older socket 1150 CPU available if the 4160 didn't work. Some odd things happened during the installation of the hardware and the software that kind of concern me, but I'm not quite sure what to think.

When I booted up the machine for the very first time, one of the drives was missing in the BIOS. So I shut everything down and checked the connections. There were no issues with the connections that I could find, but I "firmed them up" anyway. When I started the machine up again, all of the drives were there. It's just a little disconcerting, because I couldn't find any obvious issue with the connection and then it just worked after that. Is there any logical reason why one drive (SATA0, coincidentally) would not be detected on the first boot, but would be detected on subsequent boots? I haven't tested the drives yet.

Also, the first time I tried to install FreeNAS, the installation menu froze right where it gives the 4 options: 1 Install/Upgrade, 2 Shell, etc. I guess it's possible that I didn't wait long enough or something, but I get the impression that when that menu appears I should be able to select an option immediately. I had read that FreeBSD is "unstable" with USB 3.0, and both of the thumb drives (source & destination) were on USB 3.0 ports, so I moved the source thumb drive to a USB 2.0 port, wondering if that would help. It did, and FreeNAS installed without a hitch, but then I was curious... so I put that thumb drive back in the USB 3.0 port and tried it again. Sure enough, it worked. Very strange. I don't like non-repeatable failures.

At this point, FreeNAS appears to be up & running. I ran memtest86 for several hours last night and it passed, so I'm still a little curious about my installation anomalies. Next step is to exercise the hard drives!
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
To me the missing drive was just a bad connection, and you solved it by reseating the connectors (even a thin layer of oxidation can lead to this kind of thing). BTW, if you don't already have them, I strongly recommend the latching type of SATA cable connectors; it's like night and day ;)

If you've disabled USB3 support in the BIOS, the thumb drive should work on the USB3 port; if not, that's normal. Personally, with USB3 enabled, I can't even get to the install menu (yeah, I've tried, just to see what happens :D)
 