Need help with a future-proof build.

Status
Not open for further replies.

Aristotle

Dabbler
Joined
Dec 27, 2016
Messages
21
Greetings all,

I was recently told that the best way to get a NAS is to build it from scratch instead of buying one off the shelf,
and that FreeNAS is the best option out there.
I have started reading through the forum and have gathered extremely valuable information. I am still at it, but there are a few questions I am confused about. I am sure the answers are already here on the forum, but I am overloaded with information as I read. It would be great if someone could answer these so that I can start picking components and post them here later.

I already have a 4U 24-bay Supermicro chassis in my AV rack. I am using FlexRAID for pooling and parity of my disks. It is a headless server attached to my HTPC (Home Theater PC). The Supermicro runs Windows Home Server 2011.
This server is purely a media server pool.
Now I am planning to build a NAS for files I don't need parity for. Even if I lose them, I am OK with it.
But I do need a lot of storage. With the advent of newer, higher-capacity disks, I have been facing issues with my current 4U Supermicro server, as the backplanes are SAS and not SAS2. Hence I cannot attach disks of 5TB and above to the server.
SAS2 backplanes are currently through-the-roof expensive. I'd like to have 6-8TB WD Red disks in the NAS setup, and lots of them, 8-10-12 depending on the SATA ports available on the motherboard. The idea is that I could add more disks to the NAS in the future.

Anyhow, as I mentioned earlier, I need to build this NAS mainly for storing data that I don't care to pool, hence I don't even want to add it to my 4U chassis.
So here's the question.

1. I've read that a lot of members here use SAS2 controllers/expanders. I was under the impression that most current-generation motherboards should support higher-capacity disks. So can I just install a motherboard with lots of SATA ports, or is a SAS2 controller preferred?

2. Please guide me on a mobo with 10 or more SATA ports. Based on that I will pick a CPU in a few days, probably the eBay way, picking one up from a liquidator. I realize one's gotta be very picky about the mobo.
And yes, I definitely would like IPMI on the board. What else is important? I already have ECC DIMMs in mind; I will post about them later today.

Please suggest.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
1. I've read that a lot of members here use SAS2 controllers/expanders. I was under the impression that most current-generation motherboards should support higher-capacity disks. So can I just install a motherboard with lots of SATA ports, or is a SAS2 controller preferred?
I won't speak for anyone but myself here, but my reason for going with a SAS2 expander backplane was to get rid of all those
SATA cables in the chassis, for better airflow.
Please guide me on a mobo with 10 or more SATA ports.
Read this for many examples of suggested motherboards and other hardware for FreeNAS!

I realize one's gotta be very picky about the mobo.
And yes, I definitely would like IPMI on the board. What else is important?
The larger the total capacity of your volume/pool, the more RAM you are going to need for decent performance.
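To make that concrete, the rule of thumb commonly repeated on this forum is roughly 1GB of RAM per TB of raw storage, with 8GB as the practical floor for FreeNAS. A small sketch of that guideline (the numbers are a guideline, not a hard requirement):

```python
# Rough RAM sizing per the forum's common "1GB RAM per TB of raw storage"
# rule of thumb, with 8GB as the FreeNAS floor. A guideline, not a law.

def suggested_ram_gb(raw_storage_tb: float, floor_gb: int = 8) -> int:
    """Suggested RAM in GB for a given raw pool size in TB."""
    return max(floor_gb, round(raw_storage_tb))

for tb in (4, 24, 60):
    print(f"{tb}TB raw -> ~{suggested_ram_gb(tb)}GB RAM suggested")
```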
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You mention that you don't need RAID / parity, which is rare for a NAS. But it's still advisable
to group the disks into ZFS pools. A group of un-RAIDed disks in a ZFS pool is basically a stripe.
This gets you larger single file systems, because the pool can span disks, and higher disk
throughput, again because it spans disks.

Some things to keep in mind about ZFS striped pools:
  • Any single disk loss will almost certainly take out the entire pool, and all of its data.
  • Bad disk blocks that reference data (which would be the majority) cause the affected file to be unrecoverable.
  • Bad disk blocks in metadata (directories, space maps, etc.) have 2 copies, even in a striped pool, and are thus recoverable if the other copy is still good.
  • Some redundancy can be had for data by using "copies=2", which will try to place each copy on a separate disk (if more than 1 disk is available). This is set at the dataset level (aka file system), not the pool level (though it can be).
So it's a trade-off between how bad it would be to restore / re-create the data on a large number
of disks in one ZFS pool, versus having a single pool per disk and then having to balance where
the data goes based on how much storage each pool has.

All in all, we don't recommend ZFS striped pools, because of the higher risk of a failure taking out
the entire pool.
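To put a rough number on the first bullet: if each disk independently has some annual failure probability, the chance of losing a striped pool grows quickly with disk count, since any one failure takes out everything. A small sketch (the 3% annual failure rate is an illustrative assumption, not a measured figure):

```python
# Rough odds of losing an N-disk striped (no-redundancy) ZFS pool in a year.
# Any single disk failure takes out the whole pool, so the pool survives
# only if every disk survives.

def pool_loss_probability(n_disks: int, disk_afr: float) -> float:
    """P(at least one of n disks fails) = 1 - P(all n survive)."""
    return 1.0 - (1.0 - disk_afr) ** n_disks

afr = 0.03  # assumed 3% annual failure rate per disk (illustrative)
for n in (1, 4, 8, 12):
    print(f"{n:2d} disks: {pool_loss_probability(n, afr):.1%} chance of pool loss per year")
```

With that assumption, a 12-disk stripe is roughly ten times as likely to be lost in a year as a single disk.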
 

Aristotle

Dabbler
Joined
Dec 27, 2016
Messages
21
You mention that you don't need RAID / parity, which is rare for a NAS. But it's still advisable
to group the disks into ZFS pools. A group of un-RAIDed disks in a ZFS pool is basically a stripe.
This gets you larger single file systems, because the pool can span disks, and higher disk
throughput, again because it spans disks.

Some things to keep in mind about ZFS striped pools:
  • Any single disk loss will almost certainly take out the entire pool, and all of its data.
  • Bad disk blocks that reference data (which would be the majority) cause the affected file to be unrecoverable.
  • Bad disk blocks in metadata (directories, space maps, etc.) have 2 copies, even in a striped pool, and are thus recoverable if the other copy is still good.
  • Some redundancy can be had for data by using "copies=2", which will try to place each copy on a separate disk (if more than 1 disk is available). This is set at the dataset level (aka file system), not the pool level (though it can be).
So it's a trade-off between how bad it would be to restore / re-create the data on a large number
of disks in one ZFS pool, versus having a single pool per disk and then having to balance where
the data goes based on how much storage each pool has.

All in all, we don't recommend ZFS striped pools, because of the higher risk of a failure taking out
the entire pool.

This explanation was a masterpiece; thanks for the valuable suggestion. I never thought about it this way. In fact, I've now made up my mind to definitely consider some kind of parity/RAID protection.

With my FlexRAID setup, I have 1 parity disk for 9 data disks. If at any point one disk goes bad, I can rebuild the data.
It's just that FlexRAID is now starting to act up, hence I am considering moving to SnapRAID for parity and StableBit DrivePool for pooling. But anyway, that's for my media server.
But I do need to build a separate NAS for long-term seeding. You heard that right. Not sure if it's OK to talk about it here, but I guess you get the idea. Yes, I also have other plans for this NAS... it won't be good if one disk going bad, or even a small corruption, can affect the entire pool.
I've seen some extremely crazy scenarios with data (drives being lost, corruption, recycle bin issues, etc.) and FlexRAID coming to the rescue.
But it's high time; FlexRAID is no longer stable and there is no support from the developer.

Well, coming back to my NAS setup...
I am planning to get these DIMMs:
http://www.ebay.com/itm/Kingston-Hy...3-PC3-10600R-Reg-ECC-627812-B21-/111898275522
Mostly going with 2x16GB (32GB) for now, and then adding another 32GB next year, bringing it to 64GB.
Let me know if these DIMMs are good, or if there is a better choice?
I plan to go big on the case: a roomy one with lots of Gentle Typhoon 120mm fans. I will concentrate on proper cable management to keep the airflow clean.

If at all I go with these DIMMs, what motherboard with 10+ SATA ports is recommended? Please bear in mind that I plan to use 5TB disks and above, so the motherboard's SATA ports have to be compatible with these disks. IPMI would be great.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I can't speak to the hardware options as much, but here are some suggestions.

If you can, leave a spare disk bay. You can use it for backups or for disk replacements.
In the case of a disk replacement, if the disk has not yet failed completely, you can
start a replacement that in essence mirrors the failing disk, re-creating any bad
data from the RAIDed set. This is an easier rebuild (aka resilver, in ZFS terms)
than a full "pull out old, rebuild onto new". Afterwards, you remove the failed disk
and its slot becomes your spare disk bay.

With 5TB disks, we don't recommend single-disk parity solutions (RAID-5 or RAID-Z1).
There is too much risk of a second disk failing during a rebuild. You can mitigate this
somewhat by using the replace-in-place method described above. But if it's
a full disk failure, every other disk has to be read in full.
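To see why that risk matters at this capacity, consider the unrecoverable read error (URE) spec on consumer disks, commonly quoted as 1 error per 1e14 bits read. A back-of-envelope sketch, treating UREs as independent per-bit events (a simplification) and assuming an 8-wide RAID-Z1 of 5TB disks:

```python
# Back-of-envelope odds of hitting at least one unrecoverable read error (URE)
# while rebuilding a single-parity array, assuming the commonly quoted
# consumer-disk spec of 1 URE per 1e14 bits read. Treats errors as
# independent per-bit events, which is a simplification.

def rebuild_ure_probability(n_surviving_disks: int, disk_bytes: float,
                            ure_per_bit: float = 1e-14) -> float:
    bits_read = n_surviving_disks * disk_bytes * 8  # every survivor read in full
    return 1.0 - (1.0 - ure_per_bit) ** bits_read

tb = 1e12
# 8-wide RAID-Z1 of 5TB disks: 7 surviving disks must be read to rebuild one.
p = rebuild_ure_probability(7, 5 * tb)
print(f"Chance of >=1 URE during rebuild: {p:.0%}")
```

Even under these rough assumptions, the rebuild is more likely than not to hit an error it cannot correct with only one disk of parity already gone.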

If you read around on ZFS, there are some general guidelines on how many disks
to use in which type of vdev (a RAIDed stripe). Basically, don't go too wide: 10 to 12
disks is getting too wide. Don't go too low on redundancy (a 12-disk RAID-Z1 is
NOT recommended). Turn on compression (like lz4) unless you are absolutely
sure it won't help. If the use is VM storage, use mirror vdevs and keep the pool at
50% full or so.
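The width guidelines above are a trade-off between usable space and redundancy. A quick illustration of the raw-capacity arithmetic for a few layouts (these particular disk counts are illustrative, and real usable space is lower once ZFS metadata and padding are accounted for):

```python
# Usable fraction of raw capacity for a few vdev layouts, ignoring ZFS
# metadata/padding overhead. Parity disks per vdev: RAID-Z1 = 1, RAID-Z2 = 2,
# 2-way mirrors = half the disks.

def usable_fraction(disks_per_vdev: int, parity_per_vdev: int) -> float:
    return (disks_per_vdev - parity_per_vdev) / disks_per_vdev

layouts = {
    "12-disk RAID-Z1 (not recommended)": usable_fraction(12, 1),
    "2 x 6-disk RAID-Z2": usable_fraction(6, 2),
    "6 x 2-disk mirrors": usable_fraction(2, 1),
}
for name, frac in layouts.items():
    print(f"{name}: {frac:.0%} of raw capacity usable")
```

The wide Z1 layout wins on space, which is exactly why it is tempting, and exactly why its single disk of parity is the wrong place to economize.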

ZFS is a great improvement over the past. It does have a learning curve. (I learned it
early on, with Solaris 10 update 2, June 2006.) Today there are alternatives, though
mostly Linux-only, like BTRFS (which does not have stable RAID-5 or -6 yet), and
unRAID, SnapRAID, FlexRAID...
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Decide whether you want a Xeon E5 or E3 platform.

Or perhaps Xeon D.

Look at the hardware recommendations.

The E3 X11 platform is limited to 4 cores, 64GB of RAM, and 8 SATA ports built in.
The E3 X10 platform is generally limited to 32GB and 6 SATA ports. (You can add extra with a SAS2 HBA.)

The E5 X10 platform is not really limited ;) 22 cores, 1.5TB of RAM, 10 SATA ports, etc. More $$$$.

E5 X9 is getting cheap on eBay, I think.

E5 X8 is a power hog.

Torrents should work well in a jail/VM running on the FreeNAS host.
 

Aristotle

Dabbler
Joined
Dec 27, 2016
Messages
21
Decide whether you want a Xeon E5 or E3 platform.

Or perhaps Xeon D.

Look at the hardware recommendations.

The E3 X11 platform is limited to 4 cores, 64GB of RAM, and 8 SATA ports built in.
The E3 X10 platform is generally limited to 32GB and 6 SATA ports. (You can add extra with a SAS2 HBA.)

The E5 X10 platform is not really limited ;) 22 cores, 1.5TB of RAM, 10 SATA ports, etc. More $$$$.

E5 X9 is getting cheap on eBay, I think.

E5 X8 is a power hog.

Torrents should work well in a jail/VM running on the FreeNAS host.

Amazing grace, how sweet these suggestions.
Thank you. You guys are amazing here.
I am pretty much open to investing, but that doesn't mean I have a huge budget; I'd rather go smart with the budget.
I don't mind picking up used/liquidated parts from eBay. I've done it in the past while building my home theater and couldn't be happier. Each time I have a get-together at home, friends and fellow enthusiasts find it amusing when they realize the minuscule budget I spent on a monster setup.

I have a full-size A/V rack with some blank spaces left over, so I can grab a 3U Supermicro chassis for this FreeNAS setup.
Reading through the forum, I have a few questions.

I already own a full-size A/V rack with a 2 x 24-bay media server, built on FlexRAID (moving to SnapRAID soon).
This server mainly handles my media storage needs: streaming movies locally through Kodi, and via Plex when remote. Please don't find it amusing or dumb, but I need a bit more motivation to join this bandwagon.

1. Do I even need a NAS setup? I know it's a personal question and depends on me. But in general, does anyone with a media server similar to mine (a 24-bay on SnapRAID or FlexRAID) also have a NAS setup?

2. I have an HTPC installed in the AV rack, and it is hard for me to pull out the entire HTPC to replace a failing drive. Hence I was thinking of pulling the drives out of the HTPC and connecting them to the NAS; at least then I could replace an HDD in a hot-swappable Supermicro/Norco case. Does this scenario sound feasible?

3. Other than this, why do people go with a NAS? Does a SnapRAID/FlexRAID media-server setup serve the user differently than a NAS would? What else do people here do with their NAS setups?

4. I've done some reading on SAS and SAS2. Can someone explain simply why someone would go with SAS2 backplanes rather than SAS? Is it because SAS does not support higher-capacity disks of 4TB and above?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
1. You need some sort of NAS. Whether that's FreeNAS is up to you.
2. Not really. You could, technically, do this... but you'd need to PXE-boot the system and then mount everything via iSCSI, which would be very complex. Move the HTPC somewhere you can access it, and figure out why you're having to swap drives so often (poor cooling?).
3. Plenty of things. I store media and personal files on one array, and VMware virtual machine images on the other.
4. SAS (SAS1) supports a maximum of 2TiB per drive. You'll need a SAS2 backplane to use larger drives.
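For context on where that 2TiB figure comes from: many SAS1 expander chips only handle 32-bit logical block addresses, and with 512-byte sectors that caps the addressable capacity. The arithmetic is simple enough to check:

```python
# Why SAS1 backplanes commonly top out at 2TiB: 32-bit logical block
# addressing in many SAS1 expander chips, times 512-byte logical sectors.
max_sectors = 2 ** 32            # largest LBA count a 32-bit field can express
sector_bytes = 512
max_bytes = max_sectors * sector_bytes

print(max_bytes == 2 ** 41)      # 2 TiB exactly (1 TiB = 2**40 bytes)
print(f"{max_bytes / 10 ** 12:.2f} TB in decimal 'marketing' units")
```

So a 3TB or larger drive simply has blocks the SAS1 expander cannot address, regardless of what the controller behind it supports.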
 

Aristotle

Dabbler
Joined
Dec 27, 2016
Messages
21
I can't speak to the hardware options as much, but here are some suggestions.

If you can, leave a spare disk bay. You can use it for backups or for disk replacements.
In the case of a disk replacement, if the disk has not yet failed completely, you can
start a replacement that in essence mirrors the failing disk, re-creating any bad
data from the RAIDed set. This is an easier rebuild (aka resilver, in ZFS terms)
than a full "pull out old, rebuild onto new". Afterwards, you remove the failed disk
and its slot becomes your spare disk bay.

With 5TB disks, we don't recommend single-disk parity solutions (RAID-5 or RAID-Z1).
There is too much risk of a second disk failing during a rebuild. You can mitigate this
somewhat by using the replace-in-place method described above. But if it's
a full disk failure, every other disk has to be read in full.

If you read around on ZFS, there are some general guidelines on how many disks
to use in which type of vdev (a RAIDed stripe). Basically, don't go too wide: 10 to 12
disks is getting too wide. Don't go too low on redundancy (a 12-disk RAID-Z1 is
NOT recommended). Turn on compression (like lz4) unless you are absolutely
sure it won't help. If the use is VM storage, use mirror vdevs and keep the pool at
50% full or so.

ZFS is a great improvement over the past. It does have a learning curve. (I learned it
early on, with Solaris 10 update 2, June 2006.) Today there are alternatives, though
mostly Linux-only, like BTRFS (which does not have stable RAID-5 or -6 yet), and
unRAID, SnapRAID, FlexRAID...

On a few media forums, such as AVS Forum, many members recommended things similar to what you have suggested here.
They mentioned that for a media server, parity is not even important; since it's just movies, it's easier to re-download whatever was lost on a disk than to spend money and space on a parity disk. I was also told that most disks these days are smart enough to notify you in advance that they are about to fail, and that during this time it's easy to copy the files over to a spare disk. Is this the method you are talking about with the ZFS resilver process, where the data is automatically copied over when ZFS deems a disk is about to fail?

Is there a way to isolate a part of the server/bays from the Internet? I'd like about 10 of the 24 bays to be completely disconnected from the Internet, so that if the server is hacked or someone gains access to it, the data on those 10 disks is inaccessible to the outside world but can still be accessed from the local network.
I'd also like to know if I can dedicate part of the 24-bay to just web hosting on Ubuntu.

You mentioned different processor platforms... For a NAS setup, do we need a processor as fast as the E5 X10?
Why?
 

Aristotle

Dabbler
Joined
Dec 27, 2016
Messages
21
1. You need some sort of NAS. Whether that's FreeNAS is up to you.
2. Not really. You could, technically, do this... but you'd need to PXE-boot the system and then mount everything via iSCSI, which would be very complex. Move the HTPC somewhere you can access it, and figure out why you're having to swap drives so often (poor cooling?).
3. Plenty of things. I store media and personal files on one array, and VMware virtual machine images on the other.
4. SAS (SAS1) supports a maximum of 2TiB per drive. You'll need a SAS2 backplane to use larger drives.

Thanks for your reply.
I still don't get why you need a NAS if you already have a media server with a pooling mechanism and some sort of parity via FlexRAID/SnapRAID.
Such a server can store media or files, right? So why a NAS?

The point of having an HTPC in the rack is that it's a media-serving machine and is rack-mounted, so why would you want to pull the HTPC out of the AV rack? Disks don't die that often, maybe every 4 or 5 years. I have disks in the same HTPC running for 6 years now, while a few Seagates died in 2 or so years; I replaced them and they are fine. A rack setup is fill-it-and-forget-it. If I can move the internal drives to an easily accessed setup like a NAS would have... why not?
Do people use a web-hosting setup on a NAS?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
On a few media forums, such as AVS Forum, many members recommended things similar to what you have suggested here.
They mentioned that for a media server, parity is not even important; since it's just movies, it's easier to re-download whatever was lost on a disk than to spend money and space on a parity disk. I was also told that most disks these days are smart enough to notify you in advance that they are about to fail, and that during this time it's easy to copy the files over to a spare disk. Is this the method you are talking about with the ZFS resilver process, where the data is automatically copied over when ZFS deems a disk is about to fail?
...
If you have copies of the movies elsewhere, then it's your call to skip the parity RAID. Just note
exactly what I said: a multi-disk striped pool that suffers a completely failed disk loses the entire
pool. That's the way ZFS works. ALL the movies would have to be copied back. The movies are
literally striped across all the disks available at the time they were written, which may include the
failed disk.

And NO, what I meant by ZFS disk replacement is that you manually run a command (GUI or
CLI) to replace a failing disk. In theory you can have a hot spare for automatic replacement, but
I am guessing that won't work with a striped pool.

Last, it is possible to use ZFS to replace a failing disk in a striped pool. Again, as I said, any files
(movies) that are impacted by the bad blocks (on a disk that has not failed completely) would
be lost and would have to be copied back.

From everything you have written, I would not recommend FreeNAS. It's a bit complex (so it can
do lots of complex things) and has a longer learning curve. Further, FreeNAS is designed to be
paranoid about data. Most of us here want a trouble-free NAS with GOOD data security, and we
assume others want the same.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Thanks for your reply.
I still don't get why you need a NAS if you already have a media server with a pooling mechanism and some sort of parity via FlexRAID/SnapRAID.
Such a server can store media or files, right? So why a NAS?

The point of having an HTPC in the rack is that it's a media-serving machine and is rack-mounted, so why would you want to pull the HTPC out of the AV rack? Disks don't die that often, maybe every 4 or 5 years. I have disks in the same HTPC running for 6 years now, while a few Seagates died in 2 or so years; I replaced them and they are fine. A rack setup is fill-it-and-forget-it. If I can move the internal drives to an easily accessed setup like a NAS would have... why not?
Do people use a web-hosting setup on a NAS?

First, I think you're reading too much into the term NAS. NAS is Network-Attached Storage. Whether it's ZFS-based FreeNAS, a proprietary Dell/Synology/etc. box, or something else: if the function of the box is primarily to run a bunch of hard drives and expose that storage to a network, it's a NAS.

Many of us don't run something like SnapRaid because we find it performs poorly and is unreliable. FreeNAS is the closest you'll get to an "enterprise-class" NAS for free, but that comes at a cost: it expects a certain class of hardware to perform reliably, and it's more of an appliance... there are some things it just doesn't do well. In an enterprise, you wouldn't have your NAS also be a media PC or, really, anything else; it would serve a dedicated function.

There are many of us who run "web hosting" setups involving FreeNAS. Some people run a simple web host in a jail. Personally, I use FreeNAS as a datastore for my VMware environment; the web, mail, proxy, etc. servers all run virtualized on VMware. I also use it to host a fair amount of media (about 16TiB right now). This lets me use a simple, small PC for my home theater (an Intel NUC in my case), while all of the noisy server stuff stays upstairs in the closet.

If your intention is only to store media, you want to do that in your home theater rack, and you're happy with the performance of FlexRaid/SnapRaid, then that's probably a great solution for you. I agree with @Arwen that FreeNAS may be a little much for what you want to do. It's an enterprise class solution designed for people who are paranoid about their data.
 