First FreeNAS Build

Status
Not open for further replies.
Joined
Feb 2, 2016
Messages
574
It sounds like you're just looking for a lot of big, dumb storage and don't need much on-server processing. That's fantastic, because it means you can go cheap on the hardware. Your bottleneck will be your internet pipe, not your NAS. As long as you can saturate the pipe between your NAS and desktop, you'll be in great shape.

Here is a 4U Supermicro 24-bay JBOD HBA storage server (X8DTN+, 2x 6-core 2.4GHz Xeons, 144GB RAM) with a 'Buy it Now' price of $730. That gives you two adequate processors, lots of ECC RAM and ample drive bays. Buy two HBAs of your choosing (IBM ServeRAID M1015 cards are popular) and cables: $300. Pop in six 6TB drives for $1,404 ($234 each) and configure them as RAIDZ2 for 24TB of usable disk space. That comes to $2,434.
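If you want to sanity-check those numbers, the arithmetic is simple. The prices and drive counts below are just the figures quoted above:

```python
# RAIDZ2 keeps two drives' worth of parity per vdev,
# so usable space is (drives - 2) * drive size.
def raidz2_usable_tb(drives, size_tb):
    return (drives - 2) * size_tb

chassis = 730           # 24-bay Supermicro server, 'Buy it Now' price
hbas_and_cables = 300   # two HBAs (e.g. M1015) plus cables
drives, size_tb, price_each = 6, 6, 234

print(raidz2_usable_tb(drives, size_tb))                 # 24 TB usable
print(chassis + hbas_and_cables + drives * price_each)   # $2,434 total
```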

Copy the 20TB of data you already have to the NAS. Make sure it meets your requirements and you're comfortable with the platform.

You mentioned that you have 4TB and 5TB external drives? If you're feeling aggressive, shuck them to get the bare drives. Add these drives to the chassis and configure them as makes sense. If you have too few of one size to create a RAIDZ2 group, buy enough size-matched drives to create a six-drive grouping.

With 6TB drives in four six-drive RAIDZ2 vdevs, you can eventually get 96TB of usable storage in this 24-bay server.

Cheers,
Matt
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Correct. I remember hearing there's some movement among the OpenZFS devs to implement vdev removal, but it certainly isn't implemented in FreeNAS at this time.
Very limited. As in "only works with mirror vdevs". And I'm not sure it's close to reliable, either.
 

Kenfolk

Explorer
Joined
Sep 4, 2016
Messages
51
This may be a stupid question, but if I were to only get 8 drives, would I really need 2 HBAs? To expand: if I did get 10 drives, would it be smarter to put all 8 on one HBA and 2 on the other, or split them up 5 and 5? Lastly, would I later be able to create another vdev and add drives to an HBA that already has either 2 or 5 drives on it?

Depending on the answers to the above questions, I'm debating getting either 8 or 10 6TB drives, which should give me enough storage that I shouldn't have to worry about creating more space until drives drop a bunch in price.
 

VladTepes

Patron
Joined
May 18, 2016
Messages
287
Cheapest solution is perhaps to watch less TV and get out in the sunshine....

OK, I'll crawl back under my rock now....
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
This may be a stupid question, but if I were to only get 8 drives, would I really need 2 HBAs? To expand: if I did get 10 drives, would it be smarter to put all 8 on one HBA and 2 on the other, or split them up 5 and 5? Lastly, would I later be able to create another vdev and add drives to an HBA that already has either 2 or 5 drives on it?

Depending on the answers to the above questions, I'm debating getting either 8 or 10 6TB drives, which should give me enough storage that I shouldn't have to worry about creating more space until drives drop a bunch in price.
Any remotely recommended system is going to have six SATA ports or more on the motherboard. With an HBA, that totals 14 ports available for direct attach. Essentially unlimited, with an expander backplane.
 

tfast500

Explorer
Joined
Feb 9, 2015
Messages
77
Also, it doesn't matter in what order you connect the drives; they can be on any HBA/motherboard combination.

Sent from my Nexus 6P using Tapatalk
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
FWIW,

I'd suggest getting a 24-bay rack case. Start with an 8-drive RAIDZ2 of either 4TB or 5TB drives, depending on which you prefer.

When you need more space, add another 8-drive Z2 vdev, perhaps of 5TB or 6TB drives, and when you want more than that, add a third 8-drive vdev.

When you want more than that, replace the first vdev's drives with bigger drives.
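As a rough sketch of how usable space grows under that plan (the per-vdev drive sizes are just the example sizes mentioned above):

```python
# Each 8-drive RAIDZ2 vdev contributes (8 - 2) = 6 data drives of capacity.
def z2_vdev_usable_tb(size_tb, drives=8):
    return (drives - 2) * size_tb

# One possible growth path: a 4TB vdev first, then 5TB, then 6TB.
pool_tb = 0
for size_tb in [4, 5, 6]:
    pool_tb += z2_vdev_usable_tb(size_tb)
    print(f"after adding an 8x {size_tb}TB Z2 vdev: {pool_tb} TB usable")
```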

Now, for a potential 24-bay system, you'll want a 1000W PSU.

For a big 48TB+ system you'll want 64GB of RAM, at least eventually.

IF you go with a Skylake X11 system, you can get a board which supports 8 SATA ports. Depending on the chassis/backplane, you can then get away without an HBA for your first vdev. And maybe even only 32GB of RAM, i.e. 2x16GB.

WHEN you want to add the next vdev, you add the HBA and an extra 32GB of RAM. Each HBA (assuming an 8i) will get you an extra 8 SAS lanes, without using a SAS expander.

WHEN you add the next vdev, again, add an extra HBA.

Now, the only question is: do you go with a Skylake Xeon E3 system, or do you step up to an E5-1600 v4 system? The E5 will mean you can get 6 or 8 cores and support 256GB+ of RAM. And it's probably overkill, since you aren't interested in running Plex etc.

I suspect that you'll want to stick with the Skylake and its 64GB RAM limitation, but it's worth thinking about.

BTW, this is a good board from Supermicro which includes a SAS3 HBA: http://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSL-CF.cfm
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You can't go directly from the HBA to the drives; you'll have to go HBA -> backplane. You'll need a total of three 9211-8i cards (or equivalents), and six mini-SAS cables, to wire that up.
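The counts follow from the usual backplane layout, assuming one mini-SAS connector per four bays and two connectors per 9211-8i:

```python
import math

bays = 24
bays_per_minisas = 4   # one SFF-8087 connector serves four bays on this style of backplane
ports_per_hba = 2      # a 9211-8i exposes two mini-SAS connectors (8 lanes total)

cables = bays // bays_per_minisas          # mini-SAS cables needed
hbas = math.ceil(cables / ports_per_hba)   # HBAs needed
print(cables, hbas)                        # 6 cables, 3 HBAs
```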
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Or a SAS expander. It could work out cheaper than two more HBAs.

Here's an example:
http://www.intel.com.au/content/www/au/en/servers/raid/raid-controller-res2sv240.html

You run one mini-SAS cable from the HBA to the expander. Then you run five breakout cables to 20 drives.

Then you run another breakout cable from the HBA to the remaining 4 drives.

If you only had 16 drives, you could run both HBA ports to the SAS expander for double the bandwidth to each drive, i.e. 3 Gb/s. As it is, you'll have a small 1.2 Gb/s-per-drive bottleneck to the 20 drives.
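Those per-drive figures come from splitting the uplink bandwidth across the drives behind the expander, using SAS2 numbers (6 Gb/s per lane, four lanes per mini-SAS link):

```python
LANE_GBPS = 6        # SAS2 lane speed
LANES_PER_LINK = 4   # one mini-SAS cable carries four lanes

def per_drive_gbps(uplinks, drives):
    # total uplink bandwidth shared by all drives behind the expander
    return uplinks * LANES_PER_LINK * LANE_GBPS / drives

print(per_drive_gbps(1, 20))  # single uplink, 20 drives -> 1.2 Gb/s each
print(per_drive_gbps(2, 16))  # dual-link, 16 drives -> 3.0 Gb/s each
```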

If your mobo had 8 SATA ports, which it doesn't, you could've combined those with the 16 off the SAS expander to get 24.

Or you could leave it for another day. You'll have 14 ports with the mobo and HBA for the moment.
 

Kenfolk

Explorer
Joined
Sep 4, 2016
Messages
51
To start, I'm only going to have 8 drives, with another vdev added later on down the line when the time comes. Just the one HBA should take care of those 8 drives if my math adds up. Would I still see that 1.2 Gb/s bottleneck with just the one HBA?

Probably another stupid question, but I want to make sure I understand how the backplane works. Do the hard drives just sit in ports on the backplane, with a breakout cable running from the HBA to the ports used on the backplane?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Do the hard drives just sit in ports on the backplane, with a breakout cable running from the HBA to the ports used on the backplane?
You would not use a breakout cable with the backplane you have; that backplane in effect integrates a breakout cable--it has one mini-SAS port for each four bays. You'll use mini-SAS cables from the HBA(s) to the backplane.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
To start, I'm only going to have 8 drives, with another vdev added later on down the line when the time comes. Just the one HBA should take care of those 8 drives if my math adds up. Would I still see that 1.2 Gb/s bottleneck with just the one HBA?

Nope: 6 Gb/s to each drive (more than enough); the bottleneck would actually be the drives themselves.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You would not use a breakout cable with the backplane you have; that backplane in effect integrates a breakout cable--it has one mini-SAS port for each four bays. You'll use mini-SAS cables from the HBA(s) to the backplane.

You sure about that? I looked up the manual and it clearly showed one SAS/SATA port per drive.

I was confused. It was showing the SAS/SATA drive headers. The iPass connectors are the mini-SAS connectors.
 