BUILD Synology Refugee: Critique my Z-3 Build!

Status
Not open for further replies.

corfe83

Dabbler
Joined
Feb 27, 2015
Messages
11
Background (tl;dr: RAID-1 sucks, ZFS FTW!)
I have nearly 1TB (and growing) of family data currently shared from a Synology DS209 box (pictures, home videos, music & other media, etc.). The box is 2-bay with mirrored 1TB drives, plus 1 additional 2TB USB drive that I back up to manually, occasionally. None of these drives has died, but I recently noticed a corrupted folder on the main mirror array (of family pictures I really can't stand to lose!). The corruption also exists on my backup drive, because I didn't manually inspect every directory and file in my 1TB collection before each backup ;), and I wrongly assumed hard drive read errors, SMART status, and RAID mirroring would generally take care of this problem :mad:.

After researching, I realize a RAID mirror is not nearly as safe as I once thought, and after reading about ZFS, it sounds like exactly the kind of design I want handling my family's storage. I'm a longtime Linux junkie and am comfortable with the command line, and though I don't have much experience with FreeBSD yet, I'm willing to build my own box and learn. As a top priority, I want the best data security for my family, with fast access speeds within the home (which is wired for gigabit Ethernet), without paying some cloud host an arm and a leg every month.

I have already read through cyberjock's guide, and am excited to build a server I can trust with my data. I intend to set up a weekly scrub.

Typical / Max Usage:
This box will only be used within the home, mostly by Ethernet-wired Windows PCs, occasionally a laptop over wireless. The most strenuous scenario I can imagine is 3 different people streaming HD movies off the box while a 4th user uploads files to it.

Hardware:
Hard drives: 8 x WD Green 3TB. I'm planning RAID-Z3 (4 data + 3 parity), with the 8th drive kept as an unplugged spare in case a drive dies; I will also run the head-parking tool to disable head parking on all of these drives, including the spare. I already know RAID-Z3 might be overkill, but I am comfortable with the cost and resulting volume size, and I want maximum safety.
Case: Fractal Node 804 (and buying 1 additional 140mm fan, so there will be 2 fans in front and 2 in back)
CPU/Motherboard: ASRock C2750D4I (mini-ITX, Intel Atom octa-core) - possibly overkill?
RAM: 2 x Crucial 16GB kits of ECC unbuffered DDR3 (model CT2KIT102472BD160B) for 32GB total - also possibly overkill?
Power Supply: Antec EarthWatts Platinum EA-550. I feel a 550W power supply is overkill for this setup, but I wanted a Platinum unit for energy efficiency (lower electric bill and lower case temperature). Still considering backing down to Gold, Silver or Bronze.
UPS: Cyberpower EC550G (this is for home use, intended purely for staying alive if power flickers, or safe shutdown if power stays down)
Boot: 2 x 16GB Cruzer Fit (forgot this in initial post)
Location: I'm going to put the system in my basement (where it is always cool), a few inches off the ground (in case there is ever water on the floor).

Local Backup Plan:
With only 1TB of data to store right now (but growing), I intend to back up manually, sometimes, to a dedicated 2TB USB drive, stored in a different room in 3 nested EMP-resistant bags when not in use, in case of a solar flare. I will upgrade this external drive when 2TB is no longer sufficient.

I will additionally back up my data within the main volume, since I have far more space right now than I need. In some future year when I eventually run out of space for this, I will upgrade the size of all drives in the pool to keep it possible.

Still researching what tool to use for these; maybe rsync, or maybe just a plain file copy (preserving timestamps).
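If I end up scripting it myself instead of using rsync, here's a minimal sketch of what I mean in Python (the paths are placeholders; it preserves timestamps via copy2 and re-hashes each copy so silent corruption is caught at backup time, not at restore time):

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks so big home videos don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src_root: Path, dst_root: Path) -> list:
    """Mirror every file under src_root into dst_root, preserving
    timestamps (copy2), then re-hash the destination to verify the copy."""
    mismatched = []
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy2 keeps the original mtime
        if sha256(src) != sha256(dst):
            mismatched.append(src)
    return mismatched

# e.g. verified_copy(Path("/mnt/tank/family"), Path("/mnt/usb_backup/family"))
```

rsync -a gets you the timestamp-preserving copy in one line; the script above just adds an explicit end-to-end hash check on top.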

Offsite Backup Plan:
Still figuring this out. Maybe Backblaze (either iSCSI + my own Windows desktop client, or possibly running Windows inside a VM on my NAS to do it, if that is even feasible).

I've run some numbers, and Amazon Glacier / Google Drive / MS OneDrive all seem too expensive for the amount of data I want to store. I think I'd be willing to pay up to about $10/month, which I might do to back up just 1 TB of data (which will soon be insufficient, in which case I will back up just the most important subset of my data, such as family photos).

I'd also be willing to spend the time and money to burn some Blu-ray media for an emergency type of offsite backup, but I haven't researched this yet.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
There have been users reporting problems with the onboard SATA ports connected to the Marvell controller on that motherboard. Not sure if that's been worked out, but it's something to consider before you pull the trigger on parts.

If you're going to have 8 drives why not just run all 8 drives? What happens when you go to hook up the 8th drive that's been sitting forever as a spare and is now out of warranty and it's DOA?
 

corfe83

Dabbler
Joined
Feb 27, 2015
Messages
11
There have been users reporting problems with the onboard SATA ports connected to the Marvell controller on that motherboard. Not sure if that's been worked out, but it's something to consider before you pull the trigger on parts.

If you're going to have 8 drives why not just run all 8 drives? What happens when you go to hook up the 8th drive that's been sitting forever as a spare and is now out of warranty and it's DOA?
Thanks for the reply. Maybe you're right. I did plan to plug the 8th drive in briefly when it arrives, to make sure it's alive and to disable the head-parking bit (so I don't forget later). I originally wanted to improve the drive's lifespan and save energy by keeping it unplugged after that, but that wouldn't catch the case where the drive fails a few weeks or months later. Maybe I will keep it plugged in as an available spare, then. If I'm not doing much I/O with the drive, wear and tear is probably low anyway.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Or just use it so you have that space available and not just sitting there doing nothing. This is a home build not something mission critical. Your pool can still survive a single drive failure while you wait for a replacement to be shipped.
 

corfe83

Dabbler
Joined
Feb 27, 2015
Messages
11
Or just use it so you have that space available and not just sitting there doing nothing. This is a home build not something mission critical. Your pool can still survive a single drive failure while you wait for a replacement to be shipped.
Cyberjock's guide indicates it's best to go with "power of 2 + number of parity disks", in my case 4 + 3 (for Z3), so 7 active disks, so that striping falls on the 4K boundary for newer disks. Upgrading my pool to RAID-Z4 felt a little silly, even to my paranoid mind, so that's why I went with 7 active disks. Or I might just kill off the spare idea.
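To convince myself the layout math works, here's a quick sanity-check script (rough numbers only: real pools lose a bit more to metadata and padding, and marketing terabytes are smaller than TiB):

```python
def raidz_usable_tb(disks: int, parity: int, disk_tb: float) -> float:
    """Naive usable capacity of a RAID-Z vdev: data disks times disk size."""
    return (disks - parity) * disk_tb

def data_disks_aligned(disks: int, parity: int) -> bool:
    """The 'power of 2 + parity' rule of thumb: the data disk count should
    be a power of 2 so records split evenly across 4K-sector disks."""
    data = disks - parity
    return data > 0 and (data & (data - 1)) == 0

# 7-disk RAID-Z3 with 3 TB drives: 4 data disks
print(raidz_usable_tb(7, 3, 3.0))   # -> 12.0 (raw usable TB)
print(data_disks_aligned(7, 3))     # -> True  (4 is a power of 2)
print(data_disks_aligned(8, 3))     # -> False (5 data disks)
```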

BTW, thanks for pointing out the ASRock SATA issue. I'm still looking into it to see if it will work.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
This is outdated since compression is enabled by default ;)

There is no RAID-Z4...

A hot spare is stupid for home usage; you'd be better off using the drive in the array than having it sit there doing nothing. With 8 drives in RAID-Z3 I think you can even skip the cold spare, if you know you can order and receive a drive within a few days (that's what I chose to do) :)
 

corfe83

Dabbler
Joined
Feb 27, 2015
Messages
11
Thanks for the replies; I'll not order the spare and just leave it at 4 + 3 drives. Any comments on the CPU (is it overkill?), RAM, or power supply?

I also appreciate feedback on the offsite backup. I'm thinking I might Backblaze my external USB backup (from my Windows desktop).
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
CPU: Samba is single-threaded, so a higher-clocked dual core is better than a slower octa-core. Have you checked that it supports ECC?

PSU: generally you pay far more to go from Gold to Platinum than you gain in electricity. Don't worry about case temperature, because the PSU always rejects its hot air outside the case (besides, 10% of 50W is only 5W, for example, which is ridiculously low, so even if the PSU's heat were dumped into the case it wouldn't be significant) ;)

I'm curious: what exactly are you talking about when you say "EMP resistant bags"?
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Thanks for the replies; I'll not order the spare and just leave it at 4 + 3 drives. Any comments on the CPU (is it overkill?), RAM, or power supply?

I also appreciate feedback on the offsite backup. I'm thinking I might Backblaze my external USB backup (from my Windows desktop).

Everything you've picked out should serve you well for the usage scenario you have described.

Here's a review that Cyberjock wrote a while back on the FreeNAS Mini. It uses the same motherboard you're looking at.
https://cyberj0ck.wordpress.com/2014/05/05/my-review-of-the-freenas-mini-part-1/
 

corfe83

Dabbler
Joined
Feb 27, 2015
Messages
11
CPU: Samba is single-threaded, so a higher-clocked dual core is better than a slower octa-core. Have you checked that it supports ECC?

PSU: generally you pay far more to go from Gold to Platinum than you gain in electricity. Don't worry about case temperature, because the PSU always rejects its hot air outside the case (besides, 10% of 50W is only 5W, for example, which is ridiculously low, so even if the PSU's heat were dumped into the case it wouldn't be significant) ;)

I'm curious: what exactly are you talking about when you say "EMP resistant bags"?
That's a good point, maybe sticking to Platinum doesn't matter as much as I think.

Actually a bag may not be the best way (I haven't researched it yet), but what I want is some kind of Faraday cage, useful in the event of a really large solar flare (or an EMP attack). While this is quite unlikely, I'm going for maximum safety, and it seems a relatively easy and inexpensive thing to protect against compared to my Z3 setup above, as long as you are aware of the possibility and know what to do.

A quick search shows this as an example (not my blog): https://levels.io/backups-solar-flares-cookie-jar-faraday/
 

corfe83

Dabbler
Joined
Feb 27, 2015
Messages
11
Everything you've picked out should serve you well for the usage scenario you have described.

Here's a review that Cyberjock wrote a while back on the FreeNAS Mini. It uses the same motherboard you're looking at.
https://cyberj0ck.wordpress.com/2014/05/05/my-review-of-the-freenas-mini-part-1/
Thanks, actually that gives me a bit more confidence that I won't be screwed if I buy ASRock.

I'd generally prefer Supermicro as a brand, but Supermicro's boards only have 6 SATA ports, and unfortunately their C2750 does NOT have an Intel-brand Ethernet controller (it's a Marvell Alaska 88E1543). I'd probably pull the trigger on it if it just had 8 or more SATA ports.

Thanks for replies everyone, I'll probably order on Monday after I stew about it a little longer.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
That's a good point, maybe sticking to Platinum doesn't matter as much as I think.

Do the math and you'll know ;)
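For example, a back-of-the-envelope version (all the numbers are made up; plug in your own draw, efficiencies, and electricity rate):

```python
def annual_cost_usd(load_w, efficiency, usd_per_kwh, hours=24 * 365):
    """Yearly cost of the wall power needed to supply a given DC load."""
    wall_w = load_w / efficiency
    return wall_w / 1000 * hours * usd_per_kwh

# Hypothetical always-on NAS drawing 50 W, at $0.12/kWh:
gold = annual_cost_usd(50, 0.90, 0.12)      # assume ~90% efficient Gold
platinum = annual_cost_usd(50, 0.92, 0.12)  # assume ~92% efficient Platinum
savings = gold - platinum                   # only a dollar or two per year
print(round(gold, 2), round(platinum, 2), round(savings, 2))
```

At savings that small, the Platinum price premium takes many years to pay back.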

Actually a bag may not be the best way (I haven't researched it yet), but what I want is some kind of Faraday cage, useful in the event of a really large solar flare (or an EMP attack). While this is quite unlikely, I'm going for maximum safety, and it seems a relatively easy and inexpensive thing to protect against compared to my Z3 setup above, as long as you are aware of the possibility and know what to do.

I just want to be sure you're not talking about antistatic bags, because they won't protect the drives from solar flares.

A quick search shows this as an example (not my blog): https://levels.io/backups-solar-flares-cookie-jar-faraday/

So this guy protects his drive with a ferrous Faraday cage which is far more resistive than the drive's aluminum case? This is pretty ridiculous... Edit: and the lid doesn't make any contact (let alone a good low-impedance contact) with the jar because of the paint. His method is definitely useless :rolleyes:
 

corfe83

Dabbler
Joined
Feb 27, 2015
Messages
11
I just want to be sure you're not talking about antistatic bags, because they won't protect the drives from solar flares.

So this guy protects his drive with a ferrous Faraday cage which is far more resistive than the drive's aluminum case? This is pretty ridiculous... Edit: and the lid doesn't make any contact (let alone a good low-impedance contact) with the jar because of the paint. His method is definitely useless :rolleyes:
I have little experience in the matter; I have two data-safety things to look into after I buy my NAS: solar flare protection and offsite backup. I will point out that Faraday cages do not have to be sealed (they can be a mesh), so the paint may not be a problem (though I am not an expert on the subject). http://en.wikipedia.org/wiki/Faraday_cage#mediaviewer/File:Cage_de_Faraday.jpg

When I originally said "EMP bags" I was talking about something like this: , which claims it is not the same as an ordinary anti-static bag. I haven't yet taken the time to form an opinion on whether that claim is credible, but the manufacturer is defending the product: http://www.amazon.com/review/R1WHNV...2PTQ8X5OEQA50&store=industrial#Mx6YXGWUFEH9NA

What would you recommend for solar flare protection? I'm truly at the start of my search and open to ideas.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Consider the Supermicro X10SL7-F instead of the ASRock board--the LSI SAS controller is known to work well with FreeNAS, and it gives you more flexibility with CPUs. It may even work out to be less expensive than one of the Avoton boards.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
The Supermicro Avoton boards do have Intel GbE controllers; the Marvell chip is just the PHY connecting the chipset's integrated Ethernet ports to the RJ45 sockets.

I'd recommend a 6-disk RAID-Z2. For the savings, grab some WD Red, HGST NAS, and Seagate NAS HDDs instead of 8 HDDs of the same model. If 8 Greens go bad and die, you don't have anything left. If two Reds die, the 2 HGST and 2 Seagate drives still keep all your data perfectly fine. It's unlikely that 3 different models die at the same time.

And to anyone who still dislikes Seagate: one or two of their models were bad, and those weren't the best disks for Backblaze's use case anyway. WD Reds had their firmware issue far more recently, and HGST 2.5" drives are dropping dead as well.

My preference would be:
Fractal Node 304
Seasonic G-360 PSU
ASRock MT-C224 (the written specs are off; the manual and pictures show it's a DTX board, i.e. ITX with an extra PCIe slot, which is the standard size of many "ITX" cases)
Pentium G3220
2x8GB ECC 1.35V DIMMs from Samsung
2x HGST NAS $size
2x WD Red $size
2x Seagate NAS $size
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
...and the likelihood of 8 burned-in and tested (and wdidle'd, in the case of WD Green) hard drives failing simultaneously would be...? Or even, in OP's proposed configuration, five of them? Do you really think the benefit from mixing up the drives would outweigh cutting the redundancy in half?
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
It's not cutting redundancy "in half", as Z3 uses 3 redundancy drives and Z2 uses 2 redundancy drives.

Each HDD model is built the same way: same heads, same platters, same controllers, same everything. Now suppose there's an issue: the arm breaks after 20k hours. Boom, all 8 disks affected, everything gone.

Now suppose the issue hits one of 3 different models: only 2 HDDs would likely be affected at the same time. RAID-Z2 can handle 2 failures without problems, so all the data is still there. And how likely is that? Extremely likely. The typical "RAID1 built from same-model drives of the same age died all at once" happens more often than one would think.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It's not cutting redundancy "in half", as Z3 uses 3 redundancy drives and Z2 uses 2 redundancy drives.
Yes, but OP's proposed configuration was a 7-disk RAIDZ3 plus a cold spare. True, if four or more disks failed within 24 hours, he'd be hosed--but with any time at all to resilver, he effectively has a fourth disk of redundancy. It's also worth noting that, as of the latest update, FreeNAS 9.3 appears to support hot spares, so the reaction time for OP to recognize a disk failure, swap in the replacement, and initiate the resilver can be removed from the equation (at the expense of having that disk powered on the whole time).

I'd think a better solution than either of these, though, would be to just set up a six-disk RAIDZ2 with whatever flavor of disk is preferred, and use the savings for a CrashPlan account (or Tarsnap, for the truly paranoid, or another offsite backup). All the RAID redundancy in the world won't help you if your power supply fries your disks, or the server is destroyed in a fire, or you inadvertently run rm -rf /. Going from RAIDZ2 to RAIDZ3 is already pretty deep into diminishing-returns territory.
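To put rough numbers on "diminishing returns", here's a toy model: assume each disk independently has some probability p of dying within a single rebuild window (real failures are correlated, so treat the absolute values as optimistic; the comparison is the point):

```python
from math import comb

def pool_loss_prob(disks, parity, p):
    """Chance that more than `parity` of `disks` fail together, with
    independent per-disk failure probability p (a simplistic model)."""
    return sum(comb(disks, k) * p**k * (1 - p)**(disks - k)
               for k in range(parity + 1, disks + 1))

# With a generous 2% chance of any one disk dying during a rebuild:
for parity, disks in [(2, 6), (3, 7)]:
    print(f"RAID-Z{parity} ({disks} disks): {pool_loss_prob(disks, parity, 0.02):.1e}")
```

Both numbers come out tiny, which is why the extra parity disk buys far less real-world safety than an offsite copy does.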
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Thanks, actually that gives me a bit more confidence that I won't be screwed if I buy ASRock.

I'd generally prefer Supermicro as a brand, but Supermicro's boards only have 6 SATA ports, and unfortunately their C2750 does NOT have an Intel-brand Ethernet controller (it's a Marvell Alaska 88E1543). I'd probably pull the trigger on it if it just had 8 or more SATA ports.

Thanks for replies everyone, I'll probably order on Monday after I stew about it a little longer.

It's just a PHY for the Avoton's integrated quad GbE controller. It'll work fine.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Yes, but OP's proposed configuration was a 7-disk RAIDZ3 plus a cold spare. True, if four or more disks failed within 24 hours, he'd be hosed--but with any time at all to resilver, he effectively has a fourth disk of redundancy. It's also worth noting that, as of the latest update, FreeNAS 9.3 appears to support hot spares, so the reaction time for OP to recognize a disk failure, swap in the replacement, and initiate the resilver can be removed from the equation (at the expense of having that disk powered on the whole time).
It's not stated in the documentation, so I doubt it's supported. Still, different batches/models have HIGHER resilience against ALL disks dying at the same time. What if:
in 3.5 years a firmware bug shows up in $model, of which he is using 8 in a precious RAID-Z3 plus warm spare? BOOM, all data gone. Horrible. Nobody wants that when he's planning for an EMP-safe chassis.
in 3.5 years a firmware bug shows up in $model, of which he is using 2 in a boring RAID-Z2 without any spares, on a far cheaper system (no additional controller needed)? BOOM, and all data is still there, since the other 4 disks held up fine.

I'd think a better solution than either of these, though, would be to just set up a six-disk RAIDZ2 with whatever flavor of disk is preferred, and use the savings for a CrashPlan account (or Tarsnap, for the truly paranoid, or another offsite backup). All the RAID redundancy in the world won't help you if your power supply fries your disks, or the server is destroyed in a fire, or you inadvertently run rm -rf /. Going from RAIDZ2 to RAIDZ3 is already pretty deep into diminishing-returns territory.
Basically my point. Don't invest in building tons of redundancy into a single chassis; spread it across multiple chassis and locations.
 