Advice needed for Atom C3558 build

Antioch18

Explorer
Joined
Jun 29, 2012
Messages
55
Building a new NAS and want to run things by folks with more ZFS experience just to make sure there aren't any oversights.

Scenario:
Replacing a 6-year-old Atom D525 build: 2x4GB ECC RAM, raidz1 3x3TB WD Red, 16GB USB thumbdrive for boot. Used primarily as storage for the family photo/video albums and the music and video collection. Due to constrained space in our home I want to continue to keep the server small and low-power. The new build will be used for the same purpose, but I'll also toss on a few small services (Duplicati, ZNC, an image-viewing webapp [maybe simply use NextCloud?]); no VM or transcoding duties required.

New build:
(Note that several of these component choices will hinge on the answers to my questions below, but I'll add the options here for consideration)
Open questions:
  • Have seen on the forum that some folks are using a single stick of RAM instead of two sticks in dual channel - why is this? Purely for cost reasons?
  • Given that this is a media server, is it worth investing in a L2ARC and SLOG?
    • I think no, due to large sequential reads
  • Suggestions for raid makeup?
    • I know "when it rains, it pours," so 3x8TB raid-z1 may not be recommended, but I've seen several others on the forum currently doing it, as I am in my current build
    • The safer route is 4x8TB raid-z2, but I wonder if I could be doing something else, like a 2x8TB mirror? Also, is raid-z2 in a 4-drive array even "ok" to do?
There are likely some considerations that I've overlooked, so do please ask/mention them!

Edit: Will be reusing the same case and PSU (350W) and selling the old drives.
 

Antioch18

Explorer
Joined
Jun 29, 2012
Messages
55
After doing some reading I have reached the conclusion that for large sequential reads, L2ARC and SLOG aren't beneficial, so I've removed that from my build and questions list.

I would appreciate some feedback regarding:
  1. RAIDZ configuration: z1, z2, mirrors?
  2. Number of RAM DIMMs
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
After doing some reading I have reached the conclusion that for large sequential reads, L2ARC and SLOG aren't beneficial, so I've removed that from my build and questions list.

I would appreciate some feedback regarding:
  1. RAIDZ configuration: z1, z2, mirrors?
    Tricky question, this. With the larger drives these days, a rebuild can take a while (depending on how much data is on the drive and its speed), so if you do have to replace a drive in a z1 configuration, you have to hope none of the other drives die in the process. Note that during a rebuild all the other drives get hammered, so there's a higher likelihood of another failure happening (see the rough sketch after this post). But then, I have never experienced this myself.

    However I did make the mistake of replacing the wrong drive once, so the z2 configuration saved me there.

    If you have reliable backups, and you don't mind some downtime whilst you get the main system back up, the array rebuilt, and the data restored should the worst happen, then z1 is an option.

    I personally do z2, but then I'm running 12 drives in 2x 6-drive z2 for both the main and backup.

  2. Number of RAM DIMMs
    I would run dual channel. If the motherboard supports single-channel operation but recommends dual channel for performance reasons, then it's a no-brainer for me: you have 4 slots, so you'll still have 2 spare in case you need to add additional RAM later on.
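
Regarding the resilver risk under point 1, here's the rough back-of-envelope sketch mentioned above. The failure rate and resilver window are purely illustrative assumptions, not measurements, and resilvers stress drives harder than normal operation, so treat the result as optimistic:

```python
# Rough odds of a second drive failing while a RAIDZ1 vdev resilvers.
# AFR and resilver time below are illustrative assumptions only.
AFR = 0.03            # assumed annualized failure rate per drive (3%)
RESILVER_HOURS = 20   # assumed resilver window for a full large drive
SURVIVORS = 2         # drives left in a 3-disk RAIDZ1 after one failure

HOURS_PER_YEAR = 24 * 365

# Chance a single drive fails within the resilver window, assuming
# independent failures spread uniformly over the year.
p_one = 1 - (1 - AFR) ** (RESILVER_HOURS / HOURS_PER_YEAR)

# Chance at least one of the surviving drives fails in that window.
p_any = 1 - (1 - p_one) ** SURVIVORS

print(f"~{p_any:.4%} chance of a second failure during the resilver")
```

Small on paper, but the extra stress during a resilver and correlated failures (same batch, same age) are exactly why it bites in practice.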
 

Antioch18

Explorer
Joined
Jun 29, 2012
Messages
55
Thank you for your reply.

I wonder if it is "odd" to run raid-z2 on a 4-disk array?
 

Stubb

Dabbler
Joined
Apr 11, 2015
Messages
27
Are all Atom motherboards subject to the clock-timer bug that causes the mobo to brick after ~18 months, like the ASRock C2750D4I (thread)? If so, stay away!
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
Are all Atom motherboards subject to the clock-timer bug that causes the mobo to brick after ~18 months, like the ASRock C2750D4I (thread)? If so, stay away!

As far as I'm aware, that bug was with the C2000 series of Atoms; the board listed above is the C3000 series. My internet search only found results for the C2000 series...
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
Thank you for your reply.

I wonder if it is "odd" to run raid-z2 on a 4-disk array?

Maybe a little "odd", but I'm not here to judge ;)
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Thank you for your reply.

I wonder if it is "odd" to run raid-z2 on a 4-disk array?
Not odd at all. A bit inefficient in terms of space, though.
If you go with 2x 8TB mirrors (striped), the risk of losing the pool is a bit higher than with the 4-disk RAIDZ2, because the worst-case scenario would be losing both drives in a single mirror. On the other hand, you can still afford to lose one drive in each mirror and be fine; it's only when a mirror loses its second drive that the pool is gone.

With RAIDZ2, you can lose any 2 disks and you will be fine until you lose a 3rd one.
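
To make that concrete, here's a quick sketch that enumerates every possible 2-disk failure for a hypothetical 4-drive layout; 2 of the 6 combinations kill a striped-mirror pool, while RAIDZ2 survives all of them:

```python
# Enumerate which 2-disk failures destroy a 4-drive pool per layout.
from itertools import combinations

drives = ["d0", "d1", "d2", "d3"]
# Hypothetical 2x2 striped-mirror layout: two mirror vdevs.
mirrors = [{"d0", "d1"}, {"d2", "d3"}]

pairs = list(combinations(drives, 2))  # all 6 possible 2-disk failures
# A striped-mirror pool dies only if both drives of one mirror fail.
fatal = sum(1 for p in pairs if set(p) in mirrors)

print(f"striped mirrors: {fatal}/{len(pairs)} two-disk failures are fatal")  # 2/6
print(f"4-disk RAIDZ2:   0/{len(pairs)} (any two failures are survivable)")
```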
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
You would be best served by buying (2) more drives and going 6x8TB RaidZ2. More painful upfront, but it will save you much more time and hassle than the upfront cost. I fully understand constrained budgets; however, I cannot stress enough the savings in the long term.

  • Less space wasted on redundancy.
  • More time (years?) before an upgrade is needed.
  • No need to mess around with either in-place upgrades or destroying and rebuilding your pool when you do want to upgrade.
  • Fire and forget (one of my main goals): set it up and not have to touch it again for years.

Start with your current storage need, add in your expected additional storage needs per year over the life span of the drives (usually the drive warranty length), apply the 80% maximum pool usage cap (to avoid significant performance loss), then double it.

Example:
2TB current, add 1TB per year over the 5-year warranty, 80% max usage, double that: ((2 + 5) / 0.8) × 2 = 17.5TB
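
A minimal sketch of that sizing rule as a function, assuming the 80% cap is applied as a divisor (the pool should stay no more than 80% full, so the raw need is scaled up accordingly):

```python
def size_pool_tb(current_tb, growth_tb_per_year, years,
                 max_usage=0.8, headroom=2):
    """Project storage need, scale for the 80% usage cap, then double."""
    needed = current_tb + growth_tb_per_year * years
    return needed / max_usage * headroom

print(size_pool_tb(2, 1, 5))  # 17.5 (TB of usable capacity to aim for)
```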

I'm running a 6x4TB Z2 on a 10Gb network; I get 700MB/s writes until I fill the transaction group (txg), then it drops to ~350MB/s. Reads depend on whether the data is in ARC or not, but usually 500MB/s+.

Couple of hints I've gleaned from others:
  • DON'T be in a hurry
  • Do drive burn-in (badblocks etc.)
  • Do platform burn-in (CPU burn, memtest, etc.)
  • Best to test/burn-in new hardware with empty (new) pools
  • When copying data to the new pool for the first time, use MAXIMUM compression; when done, change back to LZ4 for new data.
  • Don't upgrade to significant new FreeNAS versions for 3-6 months, or at least until they release the TrueNAS version.
  • Set up SMART tests, scrubs, and email alerts (a rough sketch follows this list).
  • Run a UPS with auto shutdown of FreeNAS.
In the end, the only reason to run FreeNAS over other platforms is to protect your data, so do it right with the minimum long-term effort on your part.
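
On the SMART point, a rough sketch of what a manual spot-check could look like. FreeNAS schedules SMART tests and email alerts natively in the GUI, so this is only an illustration; smartctl ships with smartmontools, and the device names are assumptions for a FreeBSD-style system:

```python
import subprocess

# Assumed device names; adjust for your system.
DRIVES = ["/dev/ada0", "/dev/ada1", "/dev/ada2", "/dev/ada3"]

for dev in DRIVES:
    # Kick off a SMART short self-test on each drive.
    subprocess.run(["smartctl", "-t", "short", dev], check=False)
    # Print the drive's overall health assessment.
    health = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True, check=False)
    out = health.stdout.strip()
    print(dev, out.splitlines()[-1] if out else "(no output)")
```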
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
As far as Im aware, that bug was with the C2000 series of the atom, the board listed above is the C3000 series, my internet search only found results for the C2000 series....
That's my understanding also. IIRC, one of the reasons it took so long for the C3xxx series to hit actual store shelves (not just the reviewers over at servethehome and other web sites) is that Intel had to devote significant resources to dealing with the C2xxx bug, i.e. fab tons of replacement C2xxx CPUs instead of the C3xxx series, and so on. In turn, the OEMs that used those chips had to repair or manufacture replacement boards for their customers, a big headache and yet another reason that OEMs were showing off C3xxx series motherboards in 2017 but C3xxx online retail presence was sparse at best last year.

Intel is morally, and likely at least partially financially, responsible for making the OEMs it supplies whole (i.e. Supermicro, ASUS, etc.). Significant costs are going to be associated with replacing/repairing motherboards, though repair seems somewhat unlikely on account of the potential reliability issues. Besides, the bulk of the board cost seems to be in the CPU.
 

Antioch18

Explorer
Joined
Jun 29, 2012
Messages
55
You would be best served by buying (2) more drives and going 6x8TB RaidZ2. More painful upfront, but it will save you much more time and hassle than the upfront cost. I fully understand constrained budgets; however, I cannot stress enough the savings in the long term.
...
Start with your current storage need, add in your expected additional storage needs per year over the life span of the drives (usually the drive warranty length), apply the 80% maximum pool usage cap (to avoid significant performance loss), then double it.
  • When copying data to the new pool for the first time, use MAXIMUM compression; when done, change back to LZ4 for new data.

Thanks for your thoughtful reply. Indeed this is something to consider, and I've been trying to figure out which way to go for nearly 2 weeks now. I have a 3x3TB Z1 pool that's been chugging along for 6 years, but it's finally full and time to upgrade (the whole system - the D525 was anemic when it first hit the market anyway, heh). I'm thinking that 16TB should be more than enough to last another 5 years, and I was originally considering 4x6TB in Z1, but 8TB disks can be had for $25 more in my market, so I figured I'd jump to 4x8TB drives. However, I'm a bit nervous about putting drives that large into a Z1 (and I believe others on the forum may caution against it).

As I mentioned, I'd really like to keep the drive count down in order to maintain a smaller footprint (space and heat-wise) -- space in our flat is constrained (but hopefully in 5 years it won't be!). I will consider a 5th drive, however.

At this point my choices seem to be:
  1. 3x8TB RAID-Z1
  2. 4x8TB RAID-Z2
  3. 4x8TB striped mirrors
Perf between options 2 and 3 ought to be negligible in my case (the Denverton cores ought to be able to handle the parity calculations?), but somehow I feel that option #3 is the more dangerous of the two and not much different from option #1, given that in both cases, if I lose 2 drives from the same vdev, all data is lost.
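
For reference, a quick sketch of the usable space each option yields (ignoring ZFS metadata overhead and TB/TiB conversion):

```python
DRIVE_TB = 8  # drive size under consideration

# Usable capacity per layout: RAIDZ1 loses 1 drive to parity,
# RAIDZ2 loses 2, and mirrors lose half the drives outright.
options = {
    "1. 3x8TB RAIDZ1": (3 - 1) * DRIVE_TB,
    "2. 4x8TB RAIDZ2": (4 - 2) * DRIVE_TB,
    "3. 4x8TB striped mirrors": (4 // 2) * DRIVE_TB,
}
for name, tb in options.items():
    print(f"{name}: ~{tb} TB usable")  # all three land at ~16 TB
```

Amusingly, all three options land at the same ~16TB usable, so the choice really comes down to redundancy and drive count, not capacity.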

Additional thoughts and feedback are much appreciated.

Cheers.
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
In this home project, integrity is what you need; not losing your data is more important than space.
Try to avoid z1.
z2 would be the safest; I would go for that in your case with a 4-drive setup.
What I personally avoid is custom-built systems; I'd rather go for a branded entry-level server off the used market.
You can find some HP MicroServer Gen8 units for a good price on eBay or elsewhere. If efficiency is what you want, you can go for the Celeron CPU and 16GB RAM.
The whole setup with 4 drives will consume somewhere between 50 and 65 watts, and they are really silent.
They also have 1 internal USB port, 1 internal microSD slot, 1 extra SATA port, 1 PCIe x16 slot, iLO 4, and 6 USB ports.
And they look cool
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
because the worst-case scenario would be losing both drives in a single mirror.
Just so you know, and anyone else that is curious, we do see it more often than you might think. Read here:
https://forums.freenas.org/index.ph...fter-failure-of-2-hard-drives-out-of-4.72500/

For drives larger than 1TB, RAIDz2 is suggested as a precaution against losing your data. Absolutely, mirrors are more convenient in some ways, and depending on the circumstances, mirror vdevs can be faster given the same number of drives, but you lose redundancy, especially if your drives are all the same age and from the same batch. Just last year I had two drives in the same RAIDz2 vdev fail within seconds of each other. Because I was using RAIDz2, I suffered no data loss, but if I had had those two drives in the same mirror vdev, I could have.

Food for thought.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am curious, what case is that?

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I'd prefer the Z2 approach, but like Jessep, I'd opt for a 6-drive Z2 array. If necessary, go with "smaller" drives now (to get the same TB capacity), but a 6-drive Z2 vdev is a standard item for a reason: it's fairly efficient re: parity, quick, and if you later want to upgrade the drives, there are many options, ranging from adding another 6-drive vdev to the pool (i.e. creating a 12-drive array) to replacing and resilvering the drives one by one with higher-capacity drives.

If energy efficiency is of concern, review farmerpling's amazing spreadsheet. I chose used helium-filled HGST He10 drives as a result of his information. The drives come with 3-year warranties and have low power consumption. They also run much cooler than the 3TB HGST 7K4000 series I had before (31°C vs. 36°C and up). So, if you are based in the US, don't discount going used as long as the retailer is reputable. I have had to replace one of the He10 drives so far (SMART errors), and I got a pre-paid shipping label, etc., which I don't even get from OEMs when their stuff fails under warranty.

I understand that many professionals here wouldn't touch used drives but it's one way to ensure that none of the drives come from the same batch and as such should be expected to fail in a more random fashion. :)

Plus, I run a Z3 array and have backups as well. FWIW, the backups are based on shucked WD easystore and similar drives (i.e. "white-label" gear) with OEM warranties ranging from 2-3 years. Shucking external enclosures is another way to obtain hard drives that approach NAS quality at much lower cost. Just be careful to research which drives to expect inside an external enclosure. The "good stuff" at WD starts at 8TB and up, IIRC.

I continue to be baffled that one can buy an external drive for less money than a similar internal drive, but I presume WD and its competitors have done their homework. For me, it just means another box in the basement with all the external cases so I can return a drive in its original state if I have to get a warranty replacement. Just be patient as you shuck (there are great online tutorials) so you don't destroy the tabs that hold the enclosures together. An old credit card or guitar picks do a great job with the WD enclosures.
 

Antioch18

Explorer
Joined
Jun 29, 2012
Messages
55
I am curious, what case is that?
A homemade case I built at a friend's machine shop, similar in dimensions to what you can get from Synology et al. in a 4-drive system, except mine is a 4.5-drive system of sorts (it fits 2 extra 2.5" drives) and has an internal PSU (a medical-grade AC/DC converter with a DC/DC HDPlex converter).

Building the case was fun, but I'd certainly love to have a larger system someday.

I'd prefer the Z2 approach, but like Jessep, I'd opt for a 6-drive Z2 array. If necessary, go with "smaller" drives now (to get the same TB capacity), but a 6-drive Z2 vdev is a standard item for a reason: it's fairly efficient re: parity, quick, and if you later want to upgrade the drives, there are many options, ranging from adding another 6-drive vdev to the pool (i.e. creating a 12-drive array) to replacing and resilvering the drives one by one with higher-capacity drives.

If energy efficiency is of concern, review farmerpling's amazing spreadsheet. I chose used helium-filled HGST He10 drives as a result of his information. The drives come with 3-year warranties and have low power consumption. They also run much cooler than the 3TB HGST 7K4000 series I had before (31°C vs. 36°C and up). So, if you are based in the US, don't discount going used as long as the retailer is reputable. I have had to replace one of the He10 drives so far (SMART errors), and I got a pre-paid shipping label, etc., which I don't even get from OEMs when their stuff fails under warranty.

Very interesting spreadsheet - thanks for sharing! And thanks for your feedback, it's useful for consideration. Unfortunately, the external drives from WD are the same price as off-the-shelf Reds in my market. I'm quite sad about this. :(

Sadly, I'm stuck with analysis paralysis. Too many options to consider, and no "right" answer.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I know how you feel. Looking back, I likely would have been better off with used 8TB He8 drives simply because they're available used with 5-year warranties. On a $/TB basis the price is very similar to the 10TB series but the warranty is 2 years longer. Best of luck.
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Just a sanity chime-in: yesterday I came across some of my old notes and found that the BX500 is on my no-go list... Unfortunately I can't remember why. You might consider doing some STFW...

Sent from my phone
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I know how you feel. Looking back, I likely would have been better off with used 8TB He8 drives simply because they're available used with 5-year warranties. On a $/TB basis the price is very similar to the 10TB series but the warranty is 2 years longer. Best of luck.
Where do you find them used with a 5 year warranty?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Where do you find them used with a 5 year warranty?
Here, over at gohardrive.com. Over the holidays, this reseller was active on Amazon, selling them at $186 ea, IIRC. The current price is much higher at $210 ea. However, I'd simply watch for when the Amazon listing re-activates at the lower price point. 5 years is longer than the OEM warranty, right?
 