Getting ready to place my new build order...

joe_ledger

Cadet
Joined
Mar 21, 2019
Messages
8
So I'm getting ready to place the order for my new FreeNAS build and was hoping for some last-minute validation from other users on any potential issues I may have here:

CPU: Intel - Xeon E3-1240 V6 3.7 GHz Quad-Core Processor
Motherboard: Supermicro - MBD-X11SSH-F Micro ATX LGA1151 Motherboard
Case: Fractal Design - Node 804 MicroATX Mid Tower Case
Power Supply: SeaSonic - FOCUS Plus Gold 650 W 80+ Gold Certified Fully-Modular ATX Power Supply
Other: Supermicro MEM-DR416L-CV02-EU26 32GB (2x16GB) DDR4 2666 (PC4-21300) ECC Unbuffered Memory RAM

Haven't quite decided on my drive setup yet; that's why it's not included in the listing above.

Anything I should be concerned with?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Looks good. What will you be using this for? If Plex is in the mix, consider an E3-1225 or E3-1245 - unlike the 1240, those have an iGPU, and hardware transcode will surely be a thing, maybe even as soon as a month from now :).

On the X11SSH you can boot from a PCIe x2 M.2 drive if so desired. Or you can use that slot for an L2ARC if your use case warrants it.
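If you do end up wanting an L2ARC later, attaching one is a one-liner, and it can be removed just as easily. A minimal sketch, assuming a pool named tank and the M.2 device showing up as nvd0 (both placeholders, substitute your own):

Code:
# Attach the M.2 device to the pool as a read cache (L2ARC).
zpool add tank cache nvd0
# Confirm it appears under the "cache" heading.
zpool status tank
# Changed your mind? Cache devices can be removed without harm.
zpool remove tank nvd0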
 

joe_ledger

Cadet
Joined
Mar 21, 2019
Messages
8
Yah, Plex is in the mix. I don't have to do a lot of transcoding - most downstream devices can direct stream (a couple of Nvidia Shields, a Roku, and I think a Chromecast - plus your usual browser session stuff). I also make 1080p and 4K versions of the same piece of media available and let the downstream device pick the best source (the Shields are amazingly good at this). But I will take a look at the 1245.

Most likely, I'm just going to use mirrored pairs, for easy expansion in the future and to avoid dealing with long resilver times (I'm talking pairs of 10TB/12TB drives), so an L2ARC in this situation seems like overkill (but it would definitely fit the theme of this build).
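For reference, the mirrored-pair route is about as simple as ZFS layouts get at the command line. A rough sketch, with the pool name media and the da* disk names as placeholders:

Code:
# Create a pool from one mirrored pair.
zpool create media mirror da0 da1
# Expansion later: stripe a second mirrored pair into the same pool.
zpool add media mirror da2 da3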

Since you brought up the subject of boot drives, I'm curious what experience you have with using a PCIe SATA card to add a couple of additional SATA ports specifically for boot? I'd like to reserve the mobo's onboard ports for the actual HDD arrays that will be installed.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I haven’t used a PCIe SATA card. The general consensus around these parts is that many of them aren't very reliable, and that you should use an inexpensive HBA in IT mode instead. Boot off the motherboard SATA ports.

I am booting from an M.2 PCIe drive, and using the onboard SATA ports for drives. That works very well.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Shouldn’t be that much. My M.2 was 45 bucks. Just make sure it’s a PCIe x2 M.2 drive, not a SATA one. There are some BIOS settings after that to make the boot work.

SAS HBA in IT mode, likewise, is around 40 bucks on eBay.

I like things “clean”, meaning fewer boards and cables is better; that’s why I am booting from that M.2 and using on-board SATA for drives.

You are going for IOPS with that drive layout. Unless you are serving storage to an ESXi host, you won’t need them. You could consider raidz2 on 8 drives, or raidz3 if you’re feeling especially paranoid. The 8TB drives are the $-per-TB kings right now.

Edit: Consider 5400 rpm drives as well. 100MB/s will saturate a Gbit link, and less vibration and heat matter in a tiny case like this. The 120MB/s of a 7200 rpm drive is of questionable utility here. I used “shucked” drives to get HGST He8s spinning at 5400 rpm. So far so good.
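To put numbers on the Gbit point: 1 Gbit/s is 1000 Mbit/s ÷ 8 = 125 MB/s on the wire, and after TCP/IP and SMB overhead you'll typically see somewhere around 110 MB/s in practice. A single ~100MB/s 5400 rpm drive is already close to that, and any mirrored or striped layout will read well past it.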
 

joe_ledger

Cadet
Joined
Mar 21, 2019
Messages
8
Ahh, I was using Supermicro's tested-compatibility list of M.2 drives to pick which drive to add to my list. I'll have to do some more digging to see what I can find.

I had a really bad experience once upon a time with my last FreeNAS build (granted, this was pushing more than a decade ago at this point) that left a really bad taste in my mouth when it comes to higher-count arrays (I had 2 drives fail at once and lost an entire 8-drive array) - so I'm trying to stay away from anything larger than 2-4 drives per array. The mirrored-pair setup feels like the best bang for my buck when factoring in size and performance for my needs.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I'm booting off a Patriot Scorch on that board. Haven't personally tried a Corsair but expect it would work just as well. https://www.ixsystems.com/community...boot-drive-it-will-freenas.72816/#post-504664

I hear you on shorter resilvers for mirrors. What works for you, does. About "2 drives fail at once" - I think that's why the general consensus is "raidz1/RAID5 is dead". raidz2 - lose 2 drives and still be up and running - for everyday stuff that's backed up; raidz3 if you need more.

How you'll fare depends a bit on how you set those mirrors up. If they're all in one pool, you can safely lose 1 drive. Lose a second and you're dead, or as good as - unless you get lucky and the second failure lands in another mirror. You do get a lot of lovely IOPS.

What I'm reading on here for stuff up to 12 drives, and what makes sense to me, is:

Mirrors combined in a pool if you need IOPS and have a solid backup you can recover from. The risk of a 2-disk failure is acceptable; poor app performance because of a lack of IOPS is not.

Raidz2 (with 12 drives, that would be two 6-drive raidz2 vdevs) for "human-level" caution. You have a backup, but you really need your data to stay available.

Raidz3 when going down just isn't an option. You have a backup, but needing to use it is a potentially career-ending move. You won't want to stick more than 8-11 drives in a single raidz3.

Your "max 8 drives" setup seems to say raidz2. Your resilience in raidz2 is strictly better than 2-mirror x 4. Unless you want to do 3-mirror x 2, 6 drives. You get the capacity of two of those drives, good resilience, and better IOPS than a single raidz2.

All of which is just meant to be food for thought. I don't think there's a "wrong answer" for drive layout; there's just IOPS, risk, and risk mitigation.
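To make those layouts concrete, here's roughly what each looks like at pool-creation time. Sketches only - the pool name tank and the da* device names are placeholders:

Code:
# Four 2-way mirrors striped into one pool: best IOPS, survives 1 failure per mirror.
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# Two 6-drive raidz2 vdevs (the "raidz2x6 times 2" layout): any 2 drives per vdev can fail.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11

# A single 8-drive raidz3: any 3 drives can fail.
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7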
 

joe_ledger

Cadet
Joined
Mar 21, 2019
Messages
8
I actually added that same drive you're using to my list, so that'll save me some pennies (until I go and buy Noctua fans to try to silence things as much as possible).

Gotcha. That is some food for thought. What I had planned is actually several isolated mirrored pairs - think 2x 2x10TB as a start. Each pair will hold a different set of data (think TV on pair 1, movies on pair 2).

This does not include a couple of one-off SSDs that will be installed as well (I plan on 1x500GB for jails and 1x1TB for temp storage/processing), which will ultimately be backed up to another pair of mirrored drives.

All together I imagine my setup to be:
1x2x8+TB HDD
1x2x8+TB HDD
1x2x4TB HDD (this is ultimately going to be my network wide backup location)
1x500GB SSD
1x1TB SSD

That will use up all 8 SATA ports my mobo comes with. IF (let's be real here, WHEN) I run out of space, the mirrored pairs will be easy to grow: swap in a larger drive > resilver > swap the other drive > repeat. The same goes for drive failures (and as long as I mix and match drives so I don't use two from the same manufacturing run, the likelihood of a dual drive failure should drop significantly).
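That swap-and-grow procedure maps directly onto zpool replace plus the autoexpand property. A minimal sketch, assuming a pool named media built from da0/da1 and new larger drives da8/da9 (all placeholders):

Code:
# Let the pool grow automatically once every drive in the vdev is upsized.
zpool set autoexpand=on media
# Swap in the first larger drive and let it resilver.
zpool replace media da0 da8
zpool status media   # wait for the resilver to complete
# Then swap the second; capacity expands once both are done.
zpool replace media da1 da9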

Yes, I could always expand pools with ZFS - but between the minimum drive requirements and resilvering times, I don't think it's an effective use of time/resources. I do have plans to make use of long-term cloud storage as well, but that's well after I get my hardware working the way I want.
 

Bozon

Contributor
Joined
Dec 5, 2018
Messages
154
Our datacenter elves would always make sure that the drives in a given machine came from different batches. It might have been pure superstition, but once you lose a couple of drives from the same batch, you become very superstitious about these things.
 

joe_ledger

Cadet
Joined
Mar 21, 2019
Messages
8
Our datacenter elves would always make sure that the drives in a given machine came from different batches. It might have been pure superstition, but once you lose a couple of drives from the same batch, you become very superstitious about these things.

Yah, in my post above about losing 2 drives at once - a young, inexperienced joe_ledger made the mistake of ordering 8 drives at once from the same batch. I had 2 drives fail mechanically at the same time during a power outage. Lost my entire media library in the process.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
"I plan to have 1x500GB for jails "

Quite possibly not necessary either. I have yet to see hard data showing a performance advantage of doing that vs. just running the jails off a pool.
 

joe_ledger

Cadet
Joined
Mar 21, 2019
Messages
8
"I plan to have 1x500GB for jails "

Quite possibly not necessary either. I have yet to see hard data showing a performance advantage of doing that vs. just running the jails off a pool.

To be fair, the SSDs are repurposed drives from gaming builds over the years. I just updated my current system to a 1TB NVMe M.2 drive, so the 1TB SSD I was using got pulled. The 500GB drive is from a previous HTPC, from before I discovered how awesome the Nvidia Shields are.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I just updated my current system to a 1TB NVMe M.2 drive

I'm assuming you're doing more than gaming on there, then. If you look at what games do during load - have Task Manager running and watch disk transfer - you'll see they don't go above 230-270MB/s, well below what a SATA SSD can deliver. That's why NVMe doesn't show a performance advantage for games: they can't consume data as fast as a SATA SSD delivers it, never mind an NVMe drive.

That said, I do get the urge. I was about to put in an NVMe drive until I saw those data, then decided to go for 2TB SATA M.2 instead. "Right tool for the purpose" and all that.
 