New Build - Sanity Check

Status
Not open for further replies.

rossthompson89

Dabbler
Joined
Oct 7, 2018
Messages
21
I am currently putting together a NAS build running FreeNAS and I'm looking for a sanity check/thoughts on my build so far. This is going to serve as storage for photography data. My data collection is constantly growing, and I thought this would be a fun project to build out a server and no longer have to worry about filling up external drives. My main workflow is going to be accessing data stored on the server over NFS for Lightroom and Photoshop. I don't think this is going to demand any sort of crazy performance build, so I did my best to put together what I think is a solid machine for the money. My base right now is going to be 16TB, but the mobo I chose has 6 SATA ports, so it will be easily expandable to 24TB if necessary. I'm open to suggestions and thoughts about this build and would appreciate any help I could get. I also posted this on reddit's r/freenas forum, so sorry for double dipping, but I'm just looking to get any help possible :)



Mobo - https://goo.gl/32cPdU

ASUS PRIME B250M-C/CSM, Micro ATX, LGA 1151, 6x SATA 6Gb/s, USB 3.1



CPU - https://goo.gl/BmHNzn

I'm looking for something like this CPU. It doesn't have to be exact; I just grabbed an example from eBay, and I'm not sure when I will pull the trigger on this purchase. My thought was a 6th- or 7th-gen LGA 1151 quad-core i5 for <= $150.



Heatsink - https://goo.gl/GcgyGX

80mm heatsink... if I can fit it. That's what she said?



RAM - https://goo.gl/oNmTme

16GB (2 x 8GB) G.SKILL Aegis DDR4-2400



Drives - https://goo.gl/DDRU0f

12TB - 3x 4TB 7200 RPM 6Gb/s HGST Deskstar HDDs



PSU - https://goo.gl/Ab9gAr

SilverStone Technology 450W SFX Plus Bronze



Case - https://goo.gl/KXw88N

Silverstone Mini ATX 8x drive bays



Boot Drives - https://goo.gl/UL9TqM

SanDisk 16GB Ultra Fit USB drives, 130MB/s



RAID type - RaidZ2
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
Hello, and welcome to the forum :)

First off, I would like to point you towards some threads you should take a look at:
Hardware Requirements
http://www.freenas.org/hardware-requirements/
FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

There are also a few things that don't make sense to me:
1:
My base right now is going to be 16TB
First, you state that you need 16TB, but then you list:
Drives - https://goo.gl/DDRU0f
12TB - 3x 4TB 7200 RPM 6Gb/s HGST Deskstar HDDs
Do you have some drives not listed? If not, there is no way to get 16TB out of 3x4TB drives.

2:
RAID type - RaidZ2
Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
This will explain what you need to consider when using RaidZ2.

3:
Case - https://goo.gl/KXw88N
Silverstone Mini ATX 8x drive bays
If the link is to the correct case, I think you would find it hard to fit the drives in it. This case supports only 2.5" drives as far as I know.

Have you considered going the used server-grade gear route? You can get a hell of a system relatively cheap, if you so choose. Maybe @Chris Moore can point you in the right direction.
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
Also, is it a requirement to go for a small form factor build? Tiny cases often offer a high WAF and a small footprint, but at the cost of cooling and the possibility for expansion in the future.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
ASUS PRIME B250M-C/CSM, Micro ATX, LGA 1151, 6x SATA 6Gb/s, USB 3.1
That is not a server board. Unsatisfactory. Look at the guide and try again:

FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide Rev 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

While you are at it, take a look at the rest of this also:

Hardware Requirements
http://www.freenas.org/hardware-requirements/

Did you read the manual?
http://doc.freenas.org/11/freenas.html

Updated Forum Rules 4/11/17
https://forums.freenas.org/index.php?threads/updated-forum-rules-4-11-17.45124/

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Maybe @Chris Moore can point you in the right direction.
Since you say it so nicely, I will give suggestions...

Just a few weeks ago, I suggested something like this to someone else, but I couldn't find the post to point you at. Here is what I suggested for a tower build that is quiet and very inexpensive while still capable. If you like, you can use these as a guide:

CASE: Fractal Design Define R5 (FD-CA-DEF-R5-BK) Black Silent ATX Mid Tower Computer Case - US $89.24
https://www.ebay.com/itm/253026336681

POWER: NIB Corsair CS-M Series CS550M 550W 80 Plus Modular Power Supply - US $69.00
https://www.ebay.com/itm/163279426704

System Board: Super Micro X9SCM-F Motherboard w/ Heatsink/Fan & I/O Shield - US $75.00
https://www.ebay.com/itm/192561781616

CPU: Intel Xeon E3-1230V2 3.30GHz Quad-Core CPU Processor SR0P4 LGA1155 - C737 - US $85.00
https://www.ebay.com/itm/283158542659

Memory: 8GB RAM for Supermicro X9 Series - US $79.00 x 2 = $158
https://www.ebay.com/itm/163130855012

Drive Controller: LSI SAS 9211-8i 8-port 6Gb/s PCI-E Internal HBA, Both Brackets, IT Mode - US $59.99
https://www.ebay.com/itm/152937435505

Drive Cables: Mini SAS to 4x SATA SFF-8087 Multi-Lane Forward Breakout Internal Cable - US $12.99
https://www.ebay.com/itm/371681252206

Thermal Compound: Noctua NT-H1 Thermal Paste Grease Conductive Compound for CPU/GPU - US $6.95
https://www.ebay.com/itm/302624513215

I may have missed some accessories, and I didn't include drives, but this should get you all the key components.
PS. You need six drives if you are using 4 TB drives and you want 12 TB usable.
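For reference, the rough arithmetic behind that PS (assuming a RaidZ2 layout with the 4 TB drives from the original list): six drives minus two for parity leaves four data drives, 4 x 4 TB = 16 TB raw, and after the TB-to-TiB conversion and the commonly recommended ~80% pool fill limit that lands right around 12 TB of comfortably usable space.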
 

rossthompson89

Dabbler
Joined
Oct 7, 2018
Messages
21
There are also a few things that don't make sense to me:
1:

First, you state that you need 16TB, but then you list:

Do you have some drives not listed? If not, there is no way to get 16TB out of 3x4TB drives.
Whoops! That was a typo - it was supposed to say 12TB. The drives/quantity I listed are correct.

If the link is to the correct case, I think you would find it hard to fit the drives in it. This case supports only 2.5" drives as far as I know.
Nice catch! You are right - that is just for 2.5" drives.

Have you considered going the used server-grade gear route? You can get a hell of a system relatively cheap, if you so choose. Maybe @Chris Moore can point you in the right direction.
I haven't really considered that, but it's not a bad idea. Chris posted some hardware links; I'll check those out. Thanks for the help!
 

rossthompson89

Dabbler
Joined
Oct 7, 2018
Messages
21
That is not a server board. Unsatisfactory. Look at the guide and try again:
Does it need to be a server board? I went with my choice because I wasn't able to find 1155 boards that have all 6Gb/s SATA ports. I figured it would be a good idea to not limit myself to mostly 3Gb/s drive performance and to try to get something a little more recent. I'm no master here, though, so I'm curious to hear your thoughts.

PS. You need six drives if you are using 4 TB drives and you want 12 TB usable.
I don't necessarily want 12TB usable and I understand that due to overhead the usable capacity != total drive capacity.


I will read through the documentation provided so I can get a better understanding of this. Thanks so much for all of the info!
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
Does it need to be a server board? I went with my choice because I wasn't able to find 1155 boards that have all 6Gb/s SATA ports. I figured it would be a good idea to not limit myself to mostly 3Gb/s drive performance and to try to get something a little more recent. I'm no master here, though, so I'm curious to hear your thoughts.
Unless you go for pure SSDs, I don't think you will get anywhere near saturating that 3Gb/s link.
And if you later decide to switch to SSDs, you can always add an HBA :)
 

rossthompson89

Dabbler
Joined
Oct 7, 2018
Messages
21
Unless you go for pure SSDs, I don't think you will get anywhere near saturating that 3Gb/s link.
And if you later decide to switch to SSDs, you can always add an HBA :)
Interesting .. I don't know too much about consumer disk performance / requirements.

If that is the case.. I actually have some components from an old desktop that I could use.
Mobo - EVGA P55 FTW LGA 1156 / https://goo.gl/A2Lgh8
CPU - Intel Core i7-860 Lynnfield Quad-Core 2.8 GHz LGA 1156 / https://goo.gl/oibB2d

Thoughts on using this? Would save me a nice chunk of change!
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
If you want something that works and you care about the safety of your data, then go for server grade. There is a reason that the server grade gear is recommended.

It could possibly work, but I would only try it as a proof of concept. I would not trust it with my data.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
@rossthompson89 Take a close look at what @Chris Moore said. Those are good parts and will be far more reliable for a server. As for your storage requirements, you need to plan for at LEAST n+1 (RAIDz1). Forget about "overhead" (TB != TiB): if you need 4TB of space, you would need five 1TB drives. This allows any one disk to fail without you losing your data; your system stays running and all is right with the world. If you stripe four 1TB disks and any one disk has corruption, that data is just gone. If a disk fails, ALL data is gone and your system, for all intents and purposes, goes down. Most people here running disks larger than 3TB use RAIDz2 (n-2) or n/2 (striped mirrors, like RAID 10).
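To make those trade-offs concrete, here is a minimal sketch in Python (hypothetical helper names; it ignores ZFS metadata, padding, and TB-vs-TiB rounding) comparing approximate usable space and failure tolerance for the layouts mentioned above:

```python
# Rough comparison of pool layouts for N identical drives. Illustrative only:
# ignores ZFS metadata, padding, and the TB-vs-TiB difference.

def usable_tb(layout: str, n_drives: int, drive_tb: float) -> float:
    """Approximate usable capacity in TB for a pool of identical drives."""
    if layout == "stripe":        # RAID0-style stripe: no redundancy at all
        return n_drives * drive_tb
    if layout == "raidz1":        # n+1: one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if layout == "raidz2":        # n+2: two drives' worth of parity
        return (n_drives - 2) * drive_tb
    if layout == "mirrors":       # striped 2-way mirrors (RAID10-like)
        return (n_drives // 2) * drive_tb
    raise ValueError(f"unknown layout: {layout}")

# How many whole-drive failures each layout survives without data loss.
FAILURES_TOLERATED = {
    "stripe": "0",
    "raidz1": "any 1",
    "raidz2": "any 2",
    "mirrors": "1 per mirror pair",
}

if __name__ == "__main__":
    n, size = 6, 4.0  # e.g. six 4 TB drives
    for layout in ("stripe", "raidz1", "raidz2", "mirrors"):
        print(f"{layout:8s} ~{usable_tb(layout, n, size):4.1f} TB usable, "
              f"survives {FAILURES_TOLERATED[layout]} failed drive(s)")
```

For six 4 TB drives this prints roughly 24 / 20 / 16 / 12 TB for stripe / RAIDz1 / RAIDz2 / mirrors; the space you give up is what buys you the ability to lose drives without losing the pool.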
 

rossthompson89

Dabbler
Joined
Oct 7, 2018
Messages
21
If you want something that works and you care about the safety of your data, then go for server grade. There is a reason that the server grade gear is recommended.
It could possibly work, but I would only try it as a proof of concept. I would not trust it with my data.
Got it! I wouldn't want to put my important data at risk, so reliability is definitely the way to go. I'll scrap the idea of using desktop components and go with something more server oriented.

@rossthompson89 Take a close look at what @Chris Moore said. Those are good parts and will be far more reliable for a server. As for your storage requirements, you need to plan for at LEAST n+1 (RAIDz1). Forget about "overhead" (TB != TiB): if you need 4TB of space, you would need five 1TB drives. This allows any one disk to fail without you losing your data; your system stays running and all is right with the world. If you stripe four 1TB disks and any one disk has corruption, that data is just gone. If a disk fails, ALL data is gone and your system, for all intents and purposes, goes down. Most people here running disks larger than 3TB use RAIDz2 (n-2) or n/2 (striped mirrors, like RAID 10).
Agreed, @Chris Moore suggested some great parts. I'm reviewing all of this information and figuring out what is going to work best for me. Thanks again for that Chris!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Does it need to be a server board?
There have been many builds done with boards that would not be considered server grade and some of those builds have lasted for many years. I sometimes come off as elitist with regard to the issue of using server boards, but there is a reason for it, several actually. The first being that server boards, despite the hype around gaming boards, are usually made with components that are more durable. I have, at work, systems that have been running 24/7 for 12 years and have not been powered down for any reason other than facility maintenance that involved grid interconnect changes where the entire building was down. That is unusual for us because we have UPS systems and generator systems that would bridge any grid-down situation. We have hundreds of gallons of diesel fuel on premises at each building, and the place I work (a multi-building complex) even has a central fuel tank with thousands of gallons of fuel and a fuel truck that can take fuel around to resupply any of the buildings that power has not been restored to. After Hurricane Katrina, for example, power was out for weeks but our facility was never down, even though we were directly in the path of the storm.
Sorry, I digress. The point was, server components are designed to run continuously with no down time. Many commercial facilities that run quantities of servers replace them after a certain amount of time regardless of the equipment condition. In massive installations, they are looking to squeeze out the best performance for the least amount of cost and there is a point where the total cost of a new system is less than the 'theoretical' cost of keeping an old system.
This video gives a little rundown on how the math works for that, if you are interested:
https://www.youtube.com/watch?v=tWE0g4zQeFs
Sorry. I keep getting distracted. It has been about three hours that I have been working on this post between doing other things.
In the end, many of the enterprise server components that end up on the secondary market, like eBay, have years of service life left in them. You can build a very capable system that can easily last four or five years (or more) and give you server features that you will never see in a gaming / desktop system, such as IPMI remote management. This is the second big feature, besides build quality, that makes server gear (some of it) worth having over anything else. I have four servers running in my home network right now and I don't need a KVM (Keyboard, Video, Mouse) device because I am able to connect to those systems from an app on my desktop that shows me the server's display over the network. This has allowed me to configure the last six servers I have used at home without ever connecting anything other than a network cable and power to them. They live in the corner of my office, but I don't need to directly interact with them because they are on the network. It is a nice feature, and the default is that the IPMI network adapter (a separate adapter) will get its IP address from DHCP on your network. Sometimes, when you buy a used system board, the IPMI will be set to a static address and you will need to manually change it; you can see how in this video:
https://www.youtube.com/watch?v=gTRukUg1WLc
Here is another video that shows (very briefly) how to use the IPMI for remote management:
https://www.youtube.com/watch?v=1LpaIL-QSCo
Supermicro has been building that into certain models for a while, and it has improved a bit over time. The X9 series boards work well enough; I have used them at home, and at work I have used the newer X10 and X11 series server boards. IPMI also allows you to mount an ISO image from your computer to install the operating system from, so there is no need to create installation media when it comes time to install your OS.
I have to get going now, but I may share more later.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. The fastest mechanical drives I am aware of are not as fast as SATA 2. SATA 3 was introduced to support the speed of SSDs and is largely unnecessary for spinning disks, with the exception of the drive size problem that some older controllers suffer from.
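For rough context on those numbers: SATA 2 runs at 3 Gb/s, which works out to about 300 MB/s after encoding overhead, while even fast 7200 RPM desktop drives typically sustain somewhere around 150-250 MB/s sequentially, so the SATA 2 link is not the bottleneck for spinning disks.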

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Drive costs dwarf system costs in most cases, and drives don't generally last as long. You should be able to get at least two lifespans of drives on the same underpinning hardware if planned right.

Storage calculations should start with how much you are storing now, your expected growth rate during the life of the hardware, and as a general rule of thumb either double it or buy supporting hardware that will allow you to double it in that lifespan.

So as an example:
Currently stored: 4TB
Growth: 1TB per year
Expected Lifespan of hardware: 3 years (HDD warranty length of new drives)
Double: 14TB usable space needed

Now you look at your Raid calculations.
RaidZ2 is standard for lots of reasons; I think Chris gave you some links above that cover why.
RaidZ2 uses two drives' worth of space as parity (spread across the drives), allowing you to lose any two drives and not lose data. That works out to: RaidZ2 usable space = (n drives - 2) * drive size.
6 x 4TB = 16TB usable, less after formatting.

That doesn't take into account the FreeNAS guideline of not filling your pool over 80% for RaidZ2 (over 80% your pool takes a significant performance hit, over 90% it falls off a cliff, and at 100% it doesn't function at all).

16TB x 80% = 12.8TB usable, not quite the 14TB we calculated above, but close enough.
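As an illustration only, here is a small Python sketch of that sizing rule of thumb (hypothetical helper names, same simplifications as above) using the example numbers from this post:

```python
# Rule-of-thumb pool sizing sketch following the steps in this post.
# The inputs are the example numbers above; swap in your own figures.

def target_usable_tb(current_tb: float, growth_tb_per_year: float, lifespan_years: int) -> float:
    """Data expected at the end of the hardware's life, doubled as a safety margin."""
    return 2 * (current_tb + growth_tb_per_year * lifespan_years)

def raidz2_usable_tb(n_drives: int, drive_tb: float, fill_limit: float = 0.8) -> float:
    """RaidZ2 usable space: (n - 2) data drives, kept under the ~80% fill guideline."""
    return (n_drives - 2) * drive_tb * fill_limit

if __name__ == "__main__":
    target = target_usable_tb(current_tb=4, growth_tb_per_year=1, lifespan_years=3)
    print(f"Target usable space: {target:.1f} TB")  # -> 14.0 TB

    # Compare a few candidate RaidZ2 vdevs against the target.
    for n_drives, drive_tb in [(6, 4), (8, 4), (6, 6)]:
        usable = raidz2_usable_tb(n_drives, drive_tb)
        verdict = "meets target" if usable >= target else "a bit short"
        print(f"{n_drives} x {drive_tb} TB RaidZ2: ~{usable:.1f} TB at 80% fill ({verdict})")
```

The 6 x 4TB row lands on the same 12.8TB figure as above, a bit short of the 14TB target; whether that gap matters is exactly the kind of judgment call discussed in the rest of this post.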

The RaidZ2 calculation is straight math and the FreeNAS guideline is a fundamental property of the software, so no questions on those. What always gets people is the growth factor. People seldom, if ever, properly figure growth rates, mostly because data storage growth in this day and age isn't linear. Expanding your pool later, whether by changing your hardware to allow the addition of a second vdev or by replacing all your drives with larger ones, is not trivial; it takes lots of time and significant money. You don't want to be a network storage professional, you want to take pictures and know they are safe from loss or damage. Then you add in support for slipping replacement schedules, i.e. if you are busy or cash poor when it comes time to replace drives, you have space available to delay without impacting your system usage. I could go on for a long time about all the reasons you want to build this into your calculations; the only real debatable question is how much you add for the thumb factor, not whether you should have one.

The other way to get the doubling is hardware that supports adding a second vdev, so a 12 bay rack when you only need 6 drives by the calculations. Then you can start with a smaller set of drives knowing your hardware supports expansion.

Final set of calculations (your personal value calculations):
Importance of data vs. money
Importance of time spent messing with your system
Importance of stress free functioning

I'm willing to spend quite a bit of money getting the system set up and to invest all the time needed at the beginning on drive testing, system testing, and setting up SMART tests, scrubs, scripts, email alerts, etc., because I care more about data safety, a system that doesn't require babysitting, and never having to stress about it. The truly important data is also on a second set of drives and burned to archival Blu-ray.

Anyway, I apologise for the rambling, it's early and I've not had my coffee yet.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Anyway, I apologise for the rambling, it's early and I've not had my coffee yet.
This was a good ramble. I hope you will contribute more. It sounds like you have done your homework.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
This was a good ramble. I hope you will contribute more. It sounds like you have done your homework.
Thank you. I've been using FreeNAS/ESXi casually at home for... ugh, almost 10 years now? And like you, I do this stuff for a day job; once you are above Level 1/2 Tech, 90% of IT is projects and planning.
 

rossthompson89

Dabbler
Joined
Oct 7, 2018
Messages
21
There have been many builds done with boards that would not be considered server grade and some of those builds have lasted for many years. I sometimes come off as elitist with regard to the issue of using server boards, but there is a reason for it, several actually. The first being that server boards, despite the hype around gaming boards, are usually made with components that are more durable. .....
Chris, that makes total sense! I agree. I was trying to verify whether or not there was some sort of hard limitation requiring server-grade equipment. I haven't yet had enough time to sit down and really pick through those part suggestions you provided, but at a quick glance they look good. A question though: is the reason for a Xeon over an i5/i7 to get ECC support? I appreciate all the info.
 

rossthompson89

Dabbler
Joined
Oct 7, 2018
Messages
21
Drive costs dwarf system costs in most cases, and drives don't generally last as long. You should be able to get at least two lifespans of drives on the same underpinning hardware if planned right.

Storage calculations should start with how much you are storing now, your expected growth rate during the life of the hardware, and as a general rule of thumb either double it or buy supporting hardware that will allow you to double it in that lifespan......

This is solid, solid information! Thanks a lot for all of this. I definitely have to do a little bit more thinking on the capacity here..

Anyway, I apologise for the rambling, it's early and I've not had my coffee yet.

This is before coffee?! I'd love to hear that after coffee! haha
 