Expandable Supermicro build starting "small" | Will it FreeNAS?

LimeCrusher

Explorer
Joined
Nov 25, 2018
Messages
87
Dear FreeNAS community,

After recently submitting an Intel Atom C3XXX build to the community and getting really valuable feedback, I thought a bit more about the kind of build I needed. I went back to the forum resources and would like to submit a build that answers the following key points:
  • It will be used as a file server for our home, backing up a couple of laptops, storing raw photo files (25MB each), and serving them for editing on a remote machine.
  • It will in addition host the following services:
    • a Nextcloud server serving at most two remote clients,
    • a Plex server serving a local client (4K TV set, mostly with 1080p content), with no intention of remote streaming yet,
    • a BitTorrent client.
  • The build will host a four-disk array to start with, and is designed to accept an additional four-disk array in the future if needed.
The build I am thinking about is the following:

A few thoughts:
  • The build is meant to be expandable. What does that mean? It means I am starting with a "small" setup (Pentium G, 16GB of RAM, four disks) that I can easily grow over time:
    • The motherboard has 8 SATA connectors. I am starting with only four disks, but I could totally add another four in the future if I feel like it.
    • RAM can be expanded up to 64GB. Classic.
    • The CPU can be upgraded from the Intel Pentium G4560 to a more capable Xeon E3 if needed, because they are both compatible with the motherboard's LGA1151 socket.
    • The case can hold up to eight 3.5" disks.
  • This motherboard has an M.2 PCIe connector, so I'm using it for an NVMe SSD as the boot drive, which keeps all eight SATA ports free for data disks.
  • The Pentium G4560 can transcode a couple of 1080p video streams and even has Intel Quick Sync for hardware transcoding by Plex, as recently supported in FreeNAS 11.2.
  • The case can accommodate many fans for sufficient cooling. It seems people (like @Kevin Horton) have managed to build cool and quiet machines with it. Also, it doesn't look too bad and will sit somewhere in my living room.
  • I'm having a hard time picking a PSU though. Following the method used by @xdma (see here), I estimate that a 450W PSU would be totally sufficient for the peak consumption of a Pentium G and four disks; for eight disks, a 550W PSU would be required though. I guess it's no big deal to oversize a PSU by using a Gold 550W where a Gold 450W would do the job (see the sketch below). Any suggestion is welcome.
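For what it's worth, here is the back-of-the-envelope arithmetic as a small Python sketch. All the wattage figures are my own rough assumptions rather than @xdma's exact numbers, so treat it as a sanity check only:

```python
# Back-of-the-envelope PSU sizing (assumed figures; check the
# datasheets for your actual parts).
CPU_TDP_W = 54        # Pentium G4560 TDP
DISK_SPINUP_W = 25    # assumed worst-case per-drive draw at spin-up
BASE_W = 50           # assumed board + RAM + fans + boot SSD
HEADROOM = 1.3        # 30% margin to stay in the PSU's efficiency band

def psu_estimate_w(n_disks: int) -> float:
    """Peak draw estimate with headroom, in watts."""
    return (CPU_TDP_W + BASE_W + n_disks * DISK_SPINUP_W) * HEADROOM

for n in (4, 8):
    print(f"{n} disks: ~{psu_estimate_w(n):.0f} W peak with headroom")
# 4 disks: ~265 W peak with headroom -> a Gold 450W has ample margin
# 8 disks: ~395 W peak with headroom -> a Gold 550W keeps similar margin
```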

What do you think of this build? Do you think an expandable build like this one makes any sense? I recall that @Chris Moore said he frowns every time he reads people saying they'll "add disks later". Should I simply go with a more minimalist four-disk build with a cheaper Supermicro board such as the X11SSL-F, a simple SATA SSD, and a smaller case and PSU?
Please, let me know your thoughts. Much <3
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I recall that @Chris Moore said he frowns every time he reads people saying they'll "add disks later"
There's nothing inherently wrong with "adding disks later". For your use case, RAIDZ2 is probably a good choice. If you had a server case that held 24 disks, you could start with a six-disk RAIDZ2 vdev and add three additional six-disk RAIDZ2 vdevs.

Six disks in RAIDZ2 is one of the sweet spots for redundancy versus overhead. Your case only holds 8 drives, so you might want to consider going with an 8-drive RAIDZ2 pool; but then you'll have to buy all 8 drives up front ($$$). Starting with 4 drives in RAIDZ2, you'll lose one-half of your storage to parity, and when you add the next 4-disk vdev, you'll lose another 2 drives to parity. Starting with all 8 drives in a single RAIDZ2 vdev, you only lose two drives to parity.
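Here is that parity arithmetic as a quick Python sketch, assuming equal-size drives and ignoring ZFS metadata/slop overhead:

```python
# Usable vs. parity drives for the two upgrade paths discussed above
# (equal-size drives, ignoring ZFS metadata/slop overhead).
def data_and_parity(vdevs):
    """vdevs: list of (drives_in_vdev, parity_drives) tuples."""
    total = sum(n for n, _ in vdevs)
    parity = sum(p for _, p in vdevs)
    return total - parity, parity

layouts = {
    "4-drive RAIDZ2 now + 4-drive RAIDZ2 later": [(4, 2), (4, 2)],
    "8-drive RAIDZ2 up front":                   [(8, 2)],
}
for name, vdevs in layouts.items():
    data, parity = data_and_parity(vdevs)
    print(f"{name}: {data} drives of data, {parity} lost to parity")
# 4-drive RAIDZ2 now + 4-drive RAIDZ2 later: 4 drives of data, 4 lost to parity
# 8-drive RAIDZ2 up front: 6 drives of data, 2 lost to parity
```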

Down the road (a few years?), RAIDZ expansion will allow one to grow an existing RAIDZ vdev with additional drives. But it's not available yet.

 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Every use case is different. As such, the key question is whether the hardware can do what you expect it to for the time horizon of your choosing.

For example, I’m using the ASRock Avoton C2750, which uses little power, and the board can handle 12 SATA drives. For my home use, that is perfectly adequate.

If you are set on using a different board but want to add more disks later, consider getting a motherboard with 2+ PCIe slots. You can then use one for 10GbE (SFP+ or 10GBase-T) and the other for an HBA.

One addresses the bottleneck that a 1GbE interface imposes; the other deals with adding more disks later.
 

LimeCrusher

Explorer
Joined
Nov 25, 2018
Messages
87
There's nothing inherently wrong with "adding disks later". For your use case, RAIDZ2 is probably a good choice. If you had a server case that held 24 disks, you could start with a six-disk RAIDZ2 vdev and add three additional six-disk RAIDZ2 vdevs.

Six disks in RAIDZ2 is one of the sweet spots for redundancy versus overhead. Your case only holds 8 drives, so you might want to consider going with an 8-drive RAIDZ2 pool; but then you'll have to buy all 8 drives up front ($$$). Starting with 4 drives in RAIDZ2, you'll lose one-half of your storage to parity, and when you add the next 4-disk vdev, you'll lose another 2 drives to parity. Starting with all 8 drives in a single RAIDZ2 vdev, you only lose two drives to parity.
You make me realize that I totally forgot to talk about the pool layout I considered. To be honest, I haven't made up my mind yet. I was thinking about a four-disk RAIDZ1 vdev/pool, but the possibility of an unrecoverable sector read during a resilver freaks me out. As an alternative, I thought about a pool composed of two vdevs, each containing a pair of mirrored disks. One may argue that a four-disk RAIDZ2 vdev would be more resilient though :confused:
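To weigh mirrors against RAIDZ2, I threw together a toy Python sketch. It only counts which simultaneous two-disk failures each four-disk layout survives; it says nothing about resilver times or read-error rates, so take it as a sketch, not a reliability model:

```python
# Toy model: which simultaneous two-disk failures does each four-disk
# layout survive? (Ignores resilver windows and unrecoverable read
# errors, which are exactly what worry me about RAIDZ1.)
from itertools import combinations

DISKS = ["d0", "d1", "d2", "d3"]

def striped_mirrors_survive(failed):
    # Two mirror vdevs, (d0,d1) and (d2,d3): the pool is lost
    # as soon as any one vdev loses both of its members.
    return not ({"d0", "d1"} <= failed or {"d2", "d3"} <= failed)

def raidz2_survives(failed):
    # A single 4-disk RAIDZ2 vdev tolerates any two failures.
    return len(failed) <= 2

for name, survives in [("2x2 mirrors", striped_mirrors_survive),
                       ("4-disk RAIDZ2", raidz2_survives)]:
    ok = sum(survives(set(pair)) for pair in combinations(DISKS, 2))
    print(f"{name}: survives {ok} of 6 possible two-disk failures")
# 2x2 mirrors: survives 4 of 6 possible two-disk failures
# 4-disk RAIDZ2: survives 6 of 6 possible two-disk failures
```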

Down the road (a few years?), RAIDZ expansion will allow one to grow an existing RAIDZ vdev with additional drives. But it's not available yet.
Yes, I know there's an iXsystems/Delphix/FreeBSD dev working on that (forgot his name though). I'm sure he has a plan and will get it done, but I'm not counting on it to plan my build.


For example, I’m using the ASRock Avoton C2750, which uses little power, and the board can handle 12 SATA drives. For my home use, that is perfectly adequate.
Also known as "The On-board SATA Connectors Galore"! I am familiar with the Intel Atom C motherboards. They're great, and I was really interested in using the ASRock C3558 board to build a NAS. However, their weak CPU power (and also the embedded aspect) made me change my mind in favor of a Pentium + Supermicro motherboard build.

If you are set on using a different board but want to add more disks later, consider getting a motherboard with 2+ PCIe slots. You can then use one for 10GbE (SFP+ or 10GBase-T) and the other for an HBA.

One addresses the bottleneck that a 1GbE interface imposes; the other deals with adding more disks later.
Well, the Supermicro X11SSH-F has three PCIe slots: one PCIe 3.0 x8 (in x16), one PCIe 3.0 x8, and one PCIe 3.0 x4 (in x8). I'm not sure what is recommended for an HBA or a 10GbE card, but I would guess an x8 is enough, isn't it?
Nevertheless, none of my home equipment is compatible with 10GbE, so I kind of ruled out that option for now. I figured I would first saturate my 1GbE network and think about it later.
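Out of curiosity, I did the slot arithmetic anyway, as a rough Python sketch; the per-device rates are my own ballpark assumptions:

```python
# Rough bandwidth check: is an x8 (or even x4) PCIe 3.0 slot enough
# for a 10GbE NIC or an HBA? Per-device rates are ballpark assumptions.
PCIE3_LANE_MBPS = 985           # ~8 GT/s per lane after 128b/130b encoding

def slot_mbps(lanes: int) -> int:
    return lanes * PCIE3_LANE_MBPS

TEN_GBE_MBPS = 10_000 // 8      # one 10GbE port ~ 1250 MB/s
HDD_MBPS = 250                  # generous sequential rate per spinner

print(f"x4 slot: {slot_mbps(4)} MB/s, x8 slot: {slot_mbps(8)} MB/s")
print(f"dual-port 10GbE NIC: ~{2 * TEN_GBE_MBPS} MB/s")
print(f"8 HDDs behind an HBA: ~{8 * HDD_MBPS} MB/s")
# x4 slot: 3940 MB/s, x8 slot: 7880 MB/s
# dual-port 10GbE NIC: ~2500 MB/s  -> fits even in the x4 slot
# 8 HDDs behind an HBA: ~2000 MB/s -> x8 leaves plenty of headroom
```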
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yes, I know there's an iXsystems/Delphix/FreeBSD dev working on that (forgot his name though).
Matt Ahrens. You know, one of the guys who invented ZFS?

I'm not sure what is recommended for an HBA or a 10GbE card, but I would guess an x8 is enough, isn't it?
Few things besides graphics cards need an x16 connection.
 

LimeCrusher

Explorer
Joined
Nov 25, 2018
Messages
87
I'm wrapping up this thread for good, a month after I started it. I have read more, and my opinion about this build has changed. If you were thinking about a similar build, you may be interested in what I would change to achieve the goals mentioned above:
  • I would use a Supermicro X11SSM-F instead of the X11SSH-F. The main reason is versatility.
  • The Pentium G4560 and the 16GB of RAM are pretty fine to start with.
  • I would go with a cheap but reliable SATA SSD as the boot drive at first, because I would not be buying 8 disks from the start anyway.
  • The PSU seems pretty good. Here is an interesting review of it.
  • For the case, I would still go with a Fractal Design, probably a Define R5 rather than a Node 804, but that's a matter of what fits best in your home.
As for the pool layout, I would start with a couple of mirrored disks and expand with another mirrored pair. I would probably then switch to a four-disk RAIDZ2 vdev, or keep adding mirrored pairs before that; see the quick tally below.
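A quick tally of the usable space at each stage, as a Python sketch assuming equal-size drives (note that the four-disk RAIDZ2 gives the same usable space as two mirrored pairs, but tolerates any two disk failures):

```python
# Usable space (in units of one drive) at each stage of the plan,
# assuming all drives are the same size (my simplifying assumption).
# Each entry: list of (drives, redundancy_drives) per vdev.
stages = {
    "1 mirrored pair":            [(2, 1)],
    "2 mirrored pairs (striped)": [(2, 1), (2, 1)],
    "3 mirrored pairs (striped)": [(2, 1)] * 3,
    "4-disk RAIDZ2 instead":      [(4, 2)],
}
for name, vdevs in stages.items():
    usable = sum(n - p for n, p in vdevs)
    print(f"{name}: {usable} drive(s) worth of usable space")
# 1 mirrored pair: 1 drive(s) worth of usable space
# 2 mirrored pairs (striped): 2 drive(s) worth of usable space
# 3 mirrored pairs (striped): 3 drive(s) worth of usable space
# 4-disk RAIDZ2 instead: 2 drive(s) worth of usable space
```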
The evolution of this configuration can go in different directions: more RAM, a Xeon to replace the Pentium, a 10GbE PCIe network card, a PCIe SSD or an M.2-to-PCIe adapter, and finally an HBA. The PSU may need to be upgraded to accommodate all this equipment though.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Few things besides graphics cards need an x16 connection.

And even those are okay with x8; a Titan loses only a couple frames at x8.

What needs x16, and PCIe 4 and PCIe 5, are 100Gb and 400Gb NICs. That's not anything enthusiasts have to worry about. But just in case you wonder “why is PCI-SIG pushing so hard on bandwidth”: think web-scale data centers.

For home, 2x1Gb is plenty. If all you do is backup and streaming, 1x1Gb is plenty.

I like that Corsair SSD. Kinda makes me wish I’d seen it a week ago; it sounds preferable to the Patriot I got.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Thanks for making me aware of that RAIDZ expansion project. I thought I’d screwed up: RAIDZ2 with 5 disks, half full now that it’s populated, and I was thinking “what do I do 2-3 years down the line, if this fills up?”
Now I can chill. I’ll just expand it at the time, if I ever need to.

@LimeCrusher, for your data drives, consider the WD Elements 4TB. When you shuck those, what’s inside are white-label helium disks running at 5400rpm instead of 7200rpm (less vibration is better in a small case and for home use).
Warranty stories vary from “they just exchanged them” to “they balked”, but at the cost of these, one can fail after a few years and you can replace it out of pocket and still be ahead.
Of course, if you are a master at opening plastic cases carefully, you can always keep one case around for warranty purposes and RMA an Elements drive, not the bare drive inside.

They come in 4, 6, 8 and 10TB. The 10TB is too expensive to be worth it; 4 through 8TB scale linearly in price with capacity.

Capacity-wise, you no doubt know what you need. In my home, 9TB went to a medium-smallish DVD/Blu-ray collection, with the few 4K HDR titles I have taking 50GB or so each. Another 3TB went to backup; that will likely grow to 4 or 5TB over the next 2 months, as the incrementals pile up.

That leaves me with about 7TB free. Enough for more Blu-rays than I am likely to buy, given that streaming is a thing.

Spouse is a pack rat though, so you never know; I might end up staring at a semi-steady flow of disks being bought :).
 