Decided on the Dark Side and I want to build a 120TB Monster and WOW, do I have questions.

DeltaOscarMike

Dabbler
Joined
Aug 3, 2017
Messages
11
My current setup is unRAID with 60TB of data and two drives for parity. It has started to have stability issues and I'm unsure how to diagnose them properly, since I have no way of retrieving logs when it locks up on me. Every few days I find myself restarting my Dockers and rebooting unRAID. So I decided to look around for alternatives. Rockstor seemed like a nice choice, but FreeNAS seems to come out on top every time.

My Goal: Build a standalone file server; I was going to use another machine as my workhorse for Plex & Dockers.

Wish List
Case: Norco RPC-4224 4U
Mobo: Need recommendation (ITX or Micro ATX)
RAM: Unsure how much is needed (I want to populate 120TB in this server)
HDD: Will be going with 10TB drives, either WD Reds or Seagate's IronWolf series (need recommendation).
HBA/Expander: Intel RES2SV240 24-port SAS expander card
Network: 10GbE
 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
120TB is a LOT of storage! How important is the data that will be stored on this FreeNAS? How will you back up this data?

How important is performance? How many users will access the file server at one time?

Have you considered how many volumes you will create?

Sorry for all the questions, but it is important to fully understand the use case. I have much less experience with FreeNAS, so hopefully others will weigh in.

Don't overlook the excellent documentation that other forum members have created on FreeNAS, especially hardware recommendations and volume guidance.
 

DeltaOscarMike

Dabbler
Joined
Aug 3, 2017
Messages
11
The data is mostly movies, which are backups of my personal collection, and the rest is backup data for the other PCs on the network. I was going to use cloud storage for the PC backups, or turn the old unRAID server into a backup solution. As for the movies, I'm not concerned about losing them since they are backups of the original Blu-rays that I own. If I lose the movies, replacing them is not a big deal.

Right now, on an average night I have about 4 people streaming from my Plex. I'm not worried about people streaming off the new server because I will have a separate machine with the horsepower to do the streaming.

As for my volumes, I was thinking about either creating straight mirror volumes with two disks per vdev, or having 6 disks in a vdev with RAIDZ3 (two parity drives).
 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
Understood. It helps to know more about your specific use of the FreeNAS server.

While I don't have personal experience with it, I know that the FreeNAS server can act as a Plex server. This does increase the hardware requirements somewhat over acting as a simple file server, so it is good that you mentioned this.

I would encourage you to review the documents on the Resources tab of the forums.freenas.org website. There are some very good documents about recommended hardware and proper FreeNAS configuration. FreeNAS has some unique recommendations and requirements. If you follow these you should end up with a very effective and reliable storage solution.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Norco chassis are OK, but Supermicro are better if you can handle the noise. With the Norco you will need 24 SATA cables; with the right Supermicro you get 1 SAS cable. Much nicer to have just one thing to go wrong, not 24.

For the motherboard, get a full-size ATX; no clue why you would ever bother with an ITX in a server build. X11 is the new hot platform and supports up to 64GB of memory. Speaking of memory, you're going to want a minimum of 64GB. You can also jump up to an Intel E5-1620 and have 128GB of memory if you really want good performance or run multiple services.

I don't care what HDD you use; just burn them in for 72 hours. It seems like you want 12 drives, right? That would give you 120TB raw, and if you used RAIDZ2 with 2 vdevs you would get ~72TB usable and ~60TB to stay under the 80% warning FreeNAS will give you. Are you OK with those numbers?
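
If you want to sanity-check those numbers, here's the back-of-the-envelope math in Python (assuming the usual convention of showing decimal-TB drives in TiB and leaving 20% free; real ZFS metadata overhead takes a little more):
[code]
TB = 1000**4    # drives are sold in decimal terabytes
TiB = 1024**4   # ZFS reports binary tebibytes

drives, drive_size = 12, 10 * TB
vdevs, parity_per_vdev = 2, 2                   # two 6-wide RAIDZ2 vdevs

data_drives = drives - vdevs * parity_per_vdev  # 8 data drives
usable = data_drives * drive_size / TiB         # ~72.8 -> the "~72TB usable"
healthy = usable * 0.8                          # ~58   -> roughly the "~60TB" figure

print(f"usable ~{usable:.0f} TiB, ~{healthy:.0f} TiB before the 80% warning")
[/code]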

I would suggest going with 16 drives in two RAIDZ2 vdevs and expanding to 3 vdevs if you need more storage.

What about the power supply? You're going to need 1000W+ if you plan on filling that 24-drive case.

For the 10-gig NIC, do you want SFP+ or RJ45? Basically Intel X540 vs. Chelsio.

Read all the links in my signature for more information.

Sent from my Nexus 5X using Tapatalk
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Norco you will need 24 SATA cables

Not quite true. The Norco uses 6 passive backplanes with an SFF-8087 connector per backplane.

So, six mini-SAS cables; i.e., one LSI HBA and the Intel expander, or two LSI HBAs plus two reverse-breakout cables from eight ports on the mobo (my current setup).

Here is my Norco-4224 build:
https://forums.freenas.org/index.ph...24-supermicro-x10-sri-f-xeon-e5-1650v4.46262/

I'd suggest 6-way RAIDZ2 vdevs if performance is important, or 8-way RAIDZ2 if storage efficiency is more important.
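
A quick way to see the space trade-off between those widths (parity overhead only; resilver time and IOPS aren't captured here):
[code]
# Space efficiency of the two suggested RAIDZ2 widths (two parity drives each).
for width in (6, 8):
    data = width - 2
    print(f"{width}-way RAIDZ2: {data}/{width} = {data / width:.0%} of raw capacity")
# 6-way gives 67% but smaller, faster-to-resilver vdevs; 8-way gives 75%.
[/code]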

For the motherboard, get a full-size ATX; no clue why you would ever bother with an ITX in a server build. X11 is the new hot platform and supports up to 64GB of memory. Speaking of memory, you're going to want a minimum of 64GB. You can also jump up to an Intel E5-1620 and have 128GB of memory if you really want good performance or run multiple services.

For a monster I'd suggest a Supermicro X10 LGA2011 board, with either an E5-1620 or E5-1650 v4.

Start with 32GB DIMMs and you can grow to 256GB.

I would probably recommend 64GB as a start.

What about the power supply? You're going to need 1000W+ if you plan on filling that 24-drive case.

The Corsair RM1000x is a great choice; it comes with the right number of peripheral connectors and a 10-year warranty.

PS: use the industrial 3000 RPM 120mm Noctuas.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,462
Why do you have two USB drives as your boot drives? How did you set that up?
The "why" is because USB flash drives fail quite frequently. The "how" is simply to install to both flash drives at once, or add the second one as a mirror through the web GUI.
 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
Some people use one or two SSD drives for the boot drives since the SSD drives tend to be a lot more reliable. Using SSD drives could be considered a bit of a waste though as the boot drives are solely used for the operating system and not very much storage space is used or required. If I understand correctly the boot drives have little if any impact on performance, unlike storage drives. If you use USB flash drives you should use 8 GB or larger drives. It is slightly easier to set up two boot drives from the start.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,462
If I understand correctly the boot drives have little if any impact on performance, unlike storage drives.
That's correct; most of the OS lives in RAM most of the time. Bootup will be a little faster off an SSD, and updates will be significantly faster. But the real difference is in reliability; USB flash drives are crap in that regard, while SSDs tend to do much better.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Pretty much.

It's working well for me at the moment. I have two SATA ports available and wanted to use those for L2ARC or SLOG, etc.

Or boot. But the USBs are working so well I never bothered.

I am planning to convert to an ESXi AIO config soonish and will use a boot SSD then.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Using SSD drives could be considered a bit of a waste though as the boot drives are solely used for the operating system and not very much storage space is used or required.
It is frequently perceived as a waste, but it's not. The extra space provides more unused blocks for wear leveling and extra boot environments. The added reliability alone is worth it.
 
Joined
Jul 18, 2017
Messages
9
You could go my SSD route: buy a cheap small mSATA SSD off eBay for 10 bucks and an adapter cable. I spent 20 bucks all in. It's only 24GB, but that's plenty for now...

Also, the SSD I received seemed dead on arrival. There's a trick that involves two cycles of plugging in only the power leg for 30 minutes and then disconnecting. After two cycles, my drive was "refreshed". It seems to be a fairly common issue with used drives and was easy to fix.
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
I booted off mirrored USB flash drives for several years, and averaged one failed drive per year. Annoying, but not a showstopper. I recently switched to an SSD when I decided to do a fresh install of FreeNAS 11.0u2. The SSD is definitely a more pleasant experience.

I considered buying an SSD off eBay as mentioned in the previous post, but at the end of the day I ended up purchasing an inexpensive 120GB SSD that was on sale. While my data was never at risk with the previous configuration, I just got tired of "messing around" with boot devices. I want to focus my time on other things and for the amount of time and money I have invested in my NAS, saving a few dollars on a boot device is not worth it to me.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
@DeltaOscarMike (aka. DOM)

First of all, for the system you are shooting for, get a single small SSD as the boot device. You will be amazed how happy it makes you, since you won't have to put up with USB flash drives failing at the worst possible moment (the way things always happen).

Second, I like the X11SSM-F Supermicro motherboard. It can hold 64GB of ECC RAM if you need it. For your use case I would recommend you start with 32GB of RAM (two 16GB DIMMs). There are other Supermicro motherboards which offer more SATA ports. Since speed is not what you are after, you could grab a full-size tower and a motherboard with all the SATA ports you need (14 ports or more) and you should be happy. No need for an extra RAID card. Just something to think about.

RAIDZ3 = 3 parity drives. Since you are looking at 10TB drives, I think it is prudent to use RAIDZ3. This would give you 65TB of usable storage if you went with twelve 10TB drives in a RAIDZ3 configuration, about half the raw capacity. Doesn't sound fair, does it? If you choose RAIDZ2 then you get 73TB of usable data. I'm subtracting 20% to keep the pool healthy, plus overhead for ZFS. @SweetAndLow already pointed out these numbers above; I'm just saying it again.

Other things to think about... If you have non-critical data such as movies, then you could build a RAIDZ1 vdev for your movies and a RAIDZ2 or mirror vdev for your important data. The downside with such large drives is the cost. If you only needed 10TB for backups and important data, then that is two drives in a mirror, or a 3-way mirror if you want more safety. This leaves 9 drives in a RAIDZ1 = 72TB - 20% = 58TB for movies.

If you really want a 120TB monster, then you are going to have to think bigger. Two vdevs of twelve 10TB drives in RAIDZ3 = 130TB of usable storage. The cost of 24 new WD Red hard drives is over $8,000. For me it's cheaper to insert the Blu-ray discs into the player, and if any of your friends want to watch a movie, have them buy their own. You are looking at spending over $10,000 on a system this big.
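
If you want to see how I'm getting these numbers, here's a quick sketch (assumed convention: convert the decimal-TB drives to TiB, drop the parity drives, then keep 20% free; actual ZFS metadata overhead trims a bit more):
[code]
TB, TiB = 1000**4, 1024**4

def usable(drives_per_vdev, parity, vdevs=1, drive_tb=10, keep_free=0.2):
    # Data capacity after parity, shown in TiB, with 20% held back for pool health.
    data = (drives_per_vdev - parity) * vdevs * drive_tb * TB / TiB
    return data * (1 - keep_free)

print(f"12 x 10TB in RAIDZ3:          ~{usable(12, 3):.0f} TiB")           # the ~65TB figure
print(f"12 x 10TB in RAIDZ2:          ~{usable(12, 2):.0f} TiB")           # the ~73TB figure
print(f"9 x 10TB in RAIDZ1 (movies):  ~{usable(9, 1):.0f} TiB")            # the ~58TB figure
print(f"2 x (12 x 10TB in RAIDZ3):    ~{usable(12, 3, vdevs=2):.0f} TiB")  # the ~130TB monster
[/code]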

So I hope I've pointed out some pitfalls for you with respect to storage.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
I agree with Joe. He has some very good points.

In regards to disk drives...

Look at the following (http://www.newegg.com):
WD Gold Enterprise 10TB (7200 RPM, 5-year warranty, etc.) @ $414.99
WD Red 10TB (5400 RPM, 3-year warranty, etc.) @ $378.99

+$36 gives you a faster drive, a longer warranty, and a much better build, IMHO. If you will have 10+ drives in a chassis, then the Gold is by far the better way to go.

I would say the same for Seagate drives. Seagate drives are even cheaper, and the Enterprise models are helium-filled.

If your vdev is full then, even in a perfect world, it will take a minimum of over 11 hours to rebuild it. During that time performance will suck. The same goes for a scrub.
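
Where that 11-hour floor comes from: a resilver has to rewrite the whole replacement drive, and a 10TB drive only sustains so much throughput (the 250 MB/s below is an assumed best case; real resilvers run slower under load):
[code]
drive_bytes = 10 * 1000**4          # one 10TB replacement drive
sustained = 250 * 1000**2           # assumed best-case sequential rate, bytes/s
print(f"best-case resilver: ~{drive_bytes / sustained / 3600:.1f} hours")   # ~11.1 hours
[/code]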
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I agree with Joe. He has some very good points.

In regards to disk drives...

Look at the following (http://www.newegg.com):
WD Gold Enterprise 10TB (7200 RPM, 5-year warranty, etc.) @ $414.99
WD Red 10TB (5400 RPM, 3-year warranty, etc.) @ $378.99

+$36 gives you a faster drive, a longer warranty, and a much better build, IMHO. If you will have 10+ drives in a chassis, then the Gold is by far the better way to go.

I would say the same for Seagate drives. Seagate drives are even cheaper, and the Enterprise models are helium-filled.

If your vdev is full then, even in a perfect world, it will take a minimum of over 11 hours to rebuild it. During that time performance will suck. The same goes for a scrub.
I would not go with 7200 RPM drives because of the increased power usage and heat. I have a hard time keeping 16x 5400 RPM drives cool in my Supermicro chassis.

Sent from my Nexus 5X using Tapatalk
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
I would not go with 7200 RPM drives because of the increased power usage and heat. I have a hard time keeping 16x 5400 RPM drives cool in my Supermicro chassis.

The power usage is actually not that much different.

The WD Red is 5.7/2.8 W and the WD Gold (Enterprise) is 7.1/5.0 W (active/idle).

The quality difference makes it worth it for me. Fewer failures, longer life and a better warranty are hard to beat when using large drives (>8TB).
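
Scaling those per-drive figures up to a 16-drive chassis puts the gap in perspective:
[code]
red_active, red_idle = 5.7, 2.8     # WD Red 10TB, watts (active/idle)
gold_active, gold_idle = 7.1, 5.0   # WD Gold 10TB, watts (active/idle)
drives = 16
print(f"extra for Gold: ~{(gold_active - red_active) * drives:.0f} W active, "
      f"~{(gold_idle - red_idle) * drives:.0f} W idle")
# roughly +22 W active / +35 W idle across the whole chassis
[/code]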
 

DeltaOscarMike

Dabbler
Joined
Aug 3, 2017
Messages
11
I appreciate the input for my new NAS. I need to know why it wouldn't be a good idea to set up 2-drive mirrored vdevs and use 12 of those vdevs to accomplish 120TB.
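
For reference, my math on that layout, using the same TiB conversion and 80% rule from earlier in the thread:
[code]
TB, TiB = 1000**4, 1024**4
mirror_vdevs, drive_tb = 12, 10
raw = mirror_vdevs * 2 * drive_tb                   # 240 TB of disks purchased
usable = mirror_vdevs * drive_tb * TB / TiB         # each mirror stores one drive's worth
print(f"raw {raw} TB of disks, usable ~{usable:.0f} TiB, "
      f"~{usable * 0.8:.0f} TiB under the 80% rule")
[/code]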
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
What happens if you lose 2 drives on the same vdev?

In a Z2 or Z3 vdev you can lose ANY two or three drives, respectively, and still be running...

Redundancy is the key...
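
To put a rough number on it (illustrative only, and it ignores the long resilver window on 10TB drives, which is when the exposure really bites):
[code]
# Twelve 2-way mirrors: once one drive fails, its partner is a single point of
# failure until the resilver finishes.
survivors = 24 - 1
print(f"mirrors: {1 / survivors:.1%} chance a random second failure kills the pool")

# RAIDZ2 / RAIDZ3: any two (or three) failures per vdev are survivable,
# so a double failure alone never costs you the pool.
print("RAIDZ2/Z3: 0% chance of pool loss from a double failure")
[/code]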
 