If you were to build a 50TB file-sharing NAS, what specs would you choose?

Joined: Jul 1, 2019 · Messages: 8
Hi guys, the school I am currently working at needs to build a custom FreeNAS server for SMB file sharing with 10TB of usable space.
This build should be scalable to 50TB in the future. Our budget for this build is $1000-$1300, including HDDs.
For the NIC we can EtherChannel; we have plenty of free ports on the collapsed-core switch: up to 4 x 4 x 10 Gigabit SFP+ and lots of 1-gig ports.
If you were to build this, what specs/hardware would you choose? And what RAID option would be a good choice?
Kindly help; this is going to be my first build.
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
Sounds to me like you should consider used server hardware to stretch that budget. I'd consult the various resource guides here. For example, Chris Moore put together an inexpensive system with lots of expansion capacity but a low entry price point. I'd marry that with a compatible server chassis from Supermicro, such as the CSE-846, also purchased used. You may need an HBA or two to connect all the drives. Speaking of which, I'd go for used server-grade HDDs as well.

You don't mention what you want to use this system for, etc., so my initial suggestion would be a 6-disk Z2 array with 4 drives for data and 2 for parity, giving about 12TB of usable space with 4TB drives and 9TB with 3TB drives (see the ZFS calculator; FreeNAS performs best when the pool is filled to less than 80%). The CSE-846 accommodates up to 24 drives, allowing up to 4 VDEVs of 6 HDDs each. With 4TB drives, you'd max out at 49TB of usable capacity (assuming 80% fill), while with 3TB drives you'd max out at about 37TB of usable capacity.
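If it helps, here's a rough sketch of that arithmetic in Python (assuming the 80% fill rule and treating drive sizes as decimal TB; it ignores ZFS metadata overhead and the TB/TiB conversion, which is why the figures above land a bit lower):

Code:
    def usable_tb(data_drives, drive_tb, fill=0.8):
        # Approximate usable space of one RAIDZ VDEV at the given fill level.
        return data_drives * drive_tb * fill

    print(usable_tb(4, 4))      # 6-disk Z2 (4 data + 2 parity), 4TB drives: ~12.8TB
    print(usable_tb(4, 3))      # same layout with 3TB drives: ~9.6TB
    print(4 * usable_tb(4, 4))  # CSE-846 maxed out with 4 such VDEVs: ~51.2TB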

While all the drives in a specific VDEV should have the same capacity, the VDEVs in a pool do not all have to match. So your initial pool could consist of a 6-drive Z2 'starter' VDEV with 3TB drives (giving you an initial 9TB of usable capacity), to which you can later add another 6-drive Z2 VDEV of 4TB or larger drives, bumping usable pool capacity accordingly. I'd keep each VDEV you add to a pool similar in structure to the ones already in the pool (i.e., in this example, 6-drive Z2) for simplicity and safety.

Having a lot of spinning drives does consume a lot of power, and there are tradeoffs re: number of VDEVs vs. IOPS, etc. See FarmerPling2's excellent drive resource... to help with cost/power/etc. comparisons. I have a single Z3 VDEV in my pool here because my server serves only a few users and power consumption was a factor in my decision-making.

Anyhow, GoHarddrive has some 4TB models from HGST for about $80, with a 1-year warranty. The 3TB drives I bought years ago are now available for $60 and feature a 3-year warranty. I'd stick to drives built for NAS use; I had no issues other than some infant mortality with my old HGST 3TB drives. The 3TB models I used ran pretty hot, however.

eBay offers the CSE-846 chassis for as little as $400-450 delivered; some of them look nearly new, with minimal damage. The first Z2 VDEV hard drive set (6x3TB drives) would be another $380, and the server motherboard and CPU that Chris mentioned would weigh in at another $500 once RAM, HBA, boot drives, cables, etc. are added (cables may or may not be included with the CSE-846 chassis). So barely in budget, all used equipment, but with the room you need to grow. I'd also always buy a spare HDD, stress-test it to make sure it's OK, and then set it aside until the day you need it.
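A rough tally of those numbers, using the midpoints quoted above (a sketch, not a quote):

Code:
    chassis = 425   # used CSE-846, $400-450 delivered
    drives  = 380   # first Z2 VDEV: 6 x 3TB HDDs
    rest    = 500   # motherboard, CPU, RAM, HBA, boot drives, cables
    print(chassis + drives + rest)  # ~$1305, right at the top of the $1000-1300 budget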

Other niceties to explore (as budgets allow and use patterns suggest) are:
  • Using two SSDs for a mirrored boot pool. The CSE-843Bxxxx series of chassis apparently can have a dual hot-swap 2.5" drive set retrofitted into them. Always refer to the Supermicro site once you have an exact chassis SKU from eBay to see which drive kit is compatible.
  • Using a 10GbE network card. FreeNAS has very good network drivers for 10GbE Chelsio-based cards (see the resource guides).
  • Two powerful UPSs, one for each power supply, to allow graceful shutdowns if a power failure hits.
  • SSDs for L2ARC and a SLOG. However, I'd leave that aside until you have some actual-use experience and can weigh the benefits and drawbacks of each based on your usage patterns and user feedback.
  • Upgrading the power supplies to more efficient models (if the ones you get are basic). Supermicro power supplies can be ludicrously cheap on eBay.
PS: My experience with goharddrive.com has been excellent: no questions asked if a drive fails a SMART test; a prepaid shipping label is sent to you electronically and a replacement drive is dispatched ASAP. This is a better warranty experience than I received from OEMs. That said, another community member swore off goharddrive.com after receiving several dud HDDs.

PPS: Regardless of how amazing FreeNAS is, how robust the hardware is, etc., never forget to include/implement a solid backup plan. Given your environment, online cloud services may be the best option.

PPPS: Don't forget to fill "empty" HDD caddy slots in the CSE-846 with styrofoam blocks that are approximately as wide and as high as an HDD, to ensure that every caddy gets the same airflow. That helps with cooling for the HDDs! :)
 

SweetAndLow (Sweet'NASty) · Joined: Nov 6, 2013 · Messages: 6,421
You're going to spend your entire budget on just drives. Four 10TB drives will be $1k and get you about 14TiB of usable space after you take into account the 80% rule and the TB/TiB conversion.
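Worked out, assuming those four drives go into a RAIDZ2 VDEV (2 data + 2 parity), which is the layout that produces the 14TiB figure:

Code:
    raw_tb  = 2 * 10                 # two data drives x 10TB (decimal TB)
    raw_tib = raw_tb * 1e12 / 2**40  # ~18.2TiB after the TB -> TiB conversion
    usable  = raw_tib * 0.8          # ~14.5TiB after the 80% fill rule
    print(round(usable, 1))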

You can pick up a used server for $300, but you will have to hunt for that, and you'll definitely be over budget after tax and shipping.
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
FWIW, I just stumbled across an eBay listing that SweetAndLow referenced elsewhere in this section. It's based on the CSE-846 chassis I mentioned above and includes the motherboard (a dual-socket X9DRi-F), 2 Xeon CPUs, the power supplies, backplane, HBA, HDD caddies, etc.; all for ~$450 delivered.

You'd have to add RAM, hard drives, screws for the hard drives, and boot drive(s), but everything else seems to be there. There are plenty of PCIe card slots for a 10GbE NIC, space for the dual-2.5" drive holder (if you want to spring for it), and enough budget left over to let you buy 4+TB hard drives from the outset.
 

SweetAndLow (Sweet'NASty) · Joined: Nov 6, 2013 · Messages: 6,421
Yeah, 6x4TB in RAIDZ2 would get you ~11TiB of usable space for $700. That might be a good way to go if you get a 24-bay chassis. It would allow for expansion.
 
Joined: Jul 1, 2019 · Messages: 8
Constantin said: (post quoted above)

Thank you so much for your input. I really appreciate your help and the tips, and sorry for the late response.
I have checked the eBay listing; however, I am not from the USA, and I need a seller that ships internationally. I have come across this listing, which ships to me:
https://www.ebay.com/itm/UXS-Server...501985?hash=item2141998961:g:9tcAAOSwCn5bJAhx

It costs $968 including shipping and comes with 18TB of HDDs.

Please let me know your thoughts. I really appreciate your help.

PS: The build will be used for AD file sharing only, where teachers will mainly store documents and videos.
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
If possible, I would look more locally (i.e., within your geographic region) to keep your shipping costs reasonable. For example, set up an eBay alert for the chassis in your area of the world.

At first glance, I am not a big fan of the CSE-826 for your use case. All the slots will be filled with 2TB drives, which gets you over the 10TB initial requirement but then forces you into a bit of a corner when the time comes to upgrade. Your best bet would be either to
- replace drives individually, letting the system resilver each time (each VDEV will reflect the new capacity once ALL drives in it have been replaced); or
- buy an additional external chassis that's just there to hold additional drives (more $$$) and use an external SAS connection between your 826 and the drives in that chassis.

The drive replacement process described above is well documented and supported; just follow the process described in the manual. Just be sure to have the time needed and to understand the risks/benefits.

One thing I don't like about the 826 series is that the PCIe slots are typically not full height. Some models allow you to mount a limited number of full-sized cards sideways. That limits you regarding HBAs, NICs, and so on. You either
- only get to use limited-height cards, or
- get to use a limited number of full-height cards.
I'd love for someone with actual CSE-826 experience to chime in on how big a deal that is, or whether it's a trivial issue. I just worry that it might add a premium to whatever add-on card you seek to use/buy later.

All that aside, it really comes down to the use case. If this is going to be a busy system (lots of users and/or lots of I/O happening), then having the 4 VDEVs that a CSE-846 enables might be very helpful (more IOPS). If this will be more of a backup server with largely static data, then the 2 VDEVs the 826 enables should be perfectly adequate.
 
Joined: Jul 1, 2019 · Messages: 8
Constantin said: (post quoted above)
You've got some solid points here. I will go with the CSE-846 you linked on eBay and the goharddrive 4TB drives, and I'll have myus.com forward the items to me in the Maldives.
I will also get 2 of these SSDs for a mirrored boot pool. With that, I guess I am left with the RAM and the 10G NIC. Can you help me choose 32GB of RAM and a NIC? As long as it ships within the US, I can get it through myus. If you have any other recommendations, let us know.
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
I'd start with the RAM spec that Supermicro lists and search for the approved OEM SKUs to the letter. Either the SM SKU or the OEM SKU, but it has to match whatever you're buying.

I had nasty POST issues with RAM sticks in my present system related to the "die generation" of the memory chips (the SKU was off by 1 digit). Now that the RAM SKU in my system exactly matches the approved-list SKU on Supermicro's motherboard page, all is well.

So have a look at the Supermicro site for your exact motherboard and go from there. I'd opt for 32GB or 64GB initially (per CPU), with a preference for larger-capacity sticks (16GB) if possible. That way, you'll have spare RAM slots left over to fill in the future, should you need more RAM. Only go for ECC RAM.

As far as the NIC goes, I'd look at those in the recommended hardware lists (see resources); I believe they list NICs as well.

From what I've read here, Chelsio is very well supported. For example, my private-labeled iXsystems NIC for the Mini XL was a Chelsio 520. I'd go for a model that features an SFP+ port or two, as that gives you the flexibility to use an optical transceiver, a copper transceiver, or a twinax DAC to connect to your machine later.

I don't know what you use as a switch, but if you have SFP+ ports available in a switch, I'd start by using a twinax cable, as long as the server and the switch are co-located. These cables are very inexpensive, but you have to ensure that the hardware you're plugging them into will accept them (some vendors use vendor-ID locks so you can only plug in "approved" stuff).

Next in order of preference, I'd go optical. This avoids the uncertainty around vendor locks and hence may be your best bet, given long shipping distances and minimal extra cost.

To avoid issues with vendor locks, buy a used transceiver of the right frequency, fiber, and connector type that matches the make of the equipment you're plugging into (i.e., a Chelsio-labeled transceiver for a Chelsio NIC, a Cisco-labeled transceiver for a Cisco switch, and so on).

The fiber type is the first decision to make. For short distances (<240m), 850nm OM3 multi-mode works great and can be used for up to 40GbE. The connector type is almost always LC.

Let's assume you have a Chelsio NIC in your server and a Mikrotik switch, and are going for 850nm multi-mode:
- A compatible 10GbE Chelsio optical transceiver on eBay is about $15 (search for 260-0012).
- A Mikrotik-branded 10GbE optical transceiver on eBay is $19 (search for "S+85DLC", with quotation marks).
- A 6m OM3 patch cable is another $8 (search for "fiber cable, OM3, LC" and buy whatever length you need). Both ends need to be LC! No vendor locks apply to the fiber, thank goodness!

I'd always buy a prefabricated optical patch cable, if possible. Making your own is always more expensive, and you need very expensive tools and some time to get it done (been there, done that). If you have extra fiber cable, simply loop it with a generous radius. The nice thing about the Internet is that you can buy prefabricated fiber in almost every length imaginable.

My least favorite connection type is copper, due to heat and cost. If you go that route, be sure to use Cat6 or better wiring and to keep the distances as short as possible. Also, buy prefab cables from quality vendors where possible. That said, making your own is a lot less difficult and expensive than on the optical side.
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
I had a look at your proposed board. It accepts a lot of different types of memory, and I'd go for the registered ECC type, as that will allow up to 512GB of memory to be used.

1600MHz registered DDR3 ECC memory can be had used on eBay for about $25 a stick, so $100 total to fill two slots per CPU (32GB each, 64GB total). I searched for Samsung M393B2G70BH0-CK0 based on the tested memory list published at Supermicro for that motherboard (go to the site, find the board, select "Tested Memory List" to the right of the motherboard image, then select "DDR3-1600 Registered ECC" from the pull-down menu, then select the 16GB tab).

However, it is up to you to find the exact right match!
 

tfran1990 (Patron) · Joined: Oct 18, 2017 · Messages: 294
@Constantin, you mentioned using eBay HDDs; what was your experience with the 3TB drives?
- Were they heavily used, with a lot of run hours?
- How long did they last?

I was thinking about getting one to stagger my pool, but I wouldn't feel confident getting a drive from eBay that already has 40k hours of runtime on it. Not to mention the 30-day return where the buyer pays shipping...
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
Not eBay HDDs. My experience is limited to goharddrive.com or buying new at Amazon/Best Buy/etc. Where did I mention buying HDDs on eBay?

I am presently retiring all of the 3TB HGST drives; IIRC, only 1 suffered a SMART error and was replaced early on, but I'd have to check to be sure. Other makes failed more often. I had the same experience with my current HE10 drives: one out of 10 developed SMART errors early on and was replaced w/o issues. Hence my recommendation to have a spare drive on hand while the RMA process plays out.

Not sure what the drive hours were. I recall the used drives being older than 3 years, however. They ran w/o issues for the two years after. So I guess the research institute I donated them to still has goharddrive warranty coverage on some of them.

However, the failure rate on those HGSTs is so low, they likely will become obsolete before they die.

IIRC, goharddrive.com pays for domestic return shipping and replacement shipping when it comes to warranty replacements, but their web site doesn't mention this specifically. Perhaps I'm misremembering?
 
Joined: Jul 1, 2019 · Messages: 8
Constantin said: (post quoted above)
Mind telling us, what are you shifting to from HGSTs?
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
Yup. From their shop, not via eBay.

eBay just adds another cost, just like Amazon.
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
Mind telling us, what are you shifting to from HGSTs?
Still using HGST in the NAS! Just 10TB helium vs. 3TB regular. Transitioning to 10TB allows me to increase capacity, reduce drive count (from 10 to 8), and lower power draw (the 10TB drives run a lot cooler). See my signature for the model.

The backup drives are WD also (WD owns HGST now), but shucked from Easystore and similar external enclosures. Also helium-filled, but with a shorter warranty (2 years) and slower speed (5400 rpm).

My pool is slow but deep. Its config is meant to interface well with my backups, where I use RAID5. So, if I want to use the same HDD model throughout (for spares management), I needed an array with 5 data drives in the NAS and 4 in the backup: respecting the no-more-than-80%-fill rule for FreeNAS, 5 data drives filled to 80% hold about as much as the backup's 4 data drives filled completely. So: a total of 8 drives in the NAS (Z3 VDEV) and 5 drives in the backup (RAID5).
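A quick sanity check on that sizing (a sketch in drive-counts, independent of the actual drive capacity):

Code:
    nas_data    = 8 - 3        # 8-drive RAIDZ3 VDEV leaves 5 data drives
    backup_data = 5 - 1        # 5-drive RAID5 leaves 4 data drives
    # The NAS kept below 80% full holds about as much as the backup filled completely:
    print(nas_data * 0.8)      # 4.0 drives' worth of data
    print(backup_data * 1.0)   # 4.0 drives' worth of data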
 
Joined: Jul 1, 2019 · Messages: 8
Constantin said: (post quoted above)

About the NIC: what are the benefits of going optical with RAIDZ2?
I am assuming I won't be able to saturate even 2x1-gig unless it's SSD.
 

SweetAndLow (Sweet'NASty) · Joined: Nov 6, 2013 · Messages: 6,421
A single HDD can do ~150MB/s, which is enough to fill a 1Gbps NIC with streaming workloads. So if you put more HDDs in with RAIDZ2, you can get 400-800MB/s.
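To put rough numbers on that (a sketch; real-world SMB throughput will be somewhat lower due to protocol overhead):

Code:
    gige_mbs   = 1000 / 8    # 1Gbps line rate is ~125MB/s
    tengig_mbs = 10000 / 8   # 10Gbps is ~1250MB/s
    hdd_mbs    = 150         # one HDD on a streaming workload

    print(hdd_mbs > gige_mbs)     # True: a single HDD can saturate 1GbE
    print(tengig_mbs / hdd_mbs)   # ~8.3 streaming HDDs to fill 10GbE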
 

Constantin (Vampire Pig) · Joined: May 19, 2017 · Messages: 1,829
I thought you were interested in 10GbE networking? If not, ignore that whole section for now and focus just on the server.

As your system use grows (users, throughput, etc.), it likely makes sense to spend $150 on a Mikrotik switch that features 24 gigabit ports and two SFP+ ports.

Attach users to the switch with gigabit connections, then run a 10GbE connection between the switch and your NAS to give your data a big trunk. More users will then be able to take greater advantage of the NAS at the same time without slowing down due to network congestion.

A Z2 can saturate a gigabit Ethernet connection, no problem. My Z3 (which, all things being equal, will be slower) can get to about 250MB/s for writes (large files). Add more VDEVs and your NAS write speeds will scale as well (likely not 1:1, but they will scale).

I suggested the optical route for 10GbE because of the potential for vendor locking making a DAC cable not function as intended. That's a minor inconvenience for me (just order something else, have it by tomorrow), while it's a bit of a showstopper for you.
 
Joined: Jul 1, 2019 · Messages: 8
Constantin said: (post quoted above)

Yes, I am interested in 10GbE and will go with it. Someone questioned whether I would get enough throughput out of HDDs.

About the switches: I have 2 x Cisco 550X, which can accommodate the 10GbE with no issues.
 