BUILD First FreeNAS Build, Media Server with 100TB+ Storage

Status
Not open for further replies.

LFletcher

Cadet
Joined
Mar 20, 2014
Messages
2
Hi,
Another soon-to-be FreeNAS convert.

I've been running hardware RAID (both 5 and 6) with Windows servers for the past 10-odd years.
Last year I moved from hardware RAID over to FlexRAID, but I don't have 100% confidence in it and have started to worry about things like bit rot.
Although I'm much more of a Windows person, I like a lot of the ZFS features, and FreeNAS seemed the most logical choice.

So I've swapped my current hardware (X9SCM, E3-1230, 16GB RAM, Areca 1882 RAID card) for the following:

Chenbro 9U 50 bay case
Supermicro X10SRL-F
E5-1620V3 (and SNK-P0048AP4 cooler)
32GB DIMM (MEM-DR432L-SL01-ER21) - I've actually got 2 of these on the way but want to test performance with one first
1 x M1015 - which needs to be flashed to IT mode
1 x Chenbro CK23601 36 Port SAS Expander
2 x Intel S3500 120GB SSDs for the mirrored OS - I wanted SATA DOMs but they aren't easy to find in the UK and the SSDs were cheaper
20 x Hitachi 4TB drives
X540-T2 10GbE NIC

I have a 24 bay X-Case 4U case, but I've just got a good deal on a 50 bay Chenbro case, so I'll be using that instead - it also gives me the expansion I was looking for.
The server is being used purely for hosting media. I have another machine running Plex, Sickbeard etc., so I don't plan to run any jails on this machine.

My current thinking is to set up 2 vdevs of 10 x 4TB in RAIDZ2.
Once I've filled that up (by migrating the data off my existing boxes), I'll add another 10 x 5TB RAIDZ2 vdev.
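
For my own reference, I'm picturing the layout roughly like this from the command line (pool name and device names are just placeholders; I know the GUI normally handles this):

    # initial pool: two 10-disk RAIDZ2 vdevs
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
        raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19

    # later, grow the pool by adding a third 10-disk RAIDZ2 vdev
    zpool add tank raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29

Each 10 x 4TB RAIDZ2 vdev has 8 data disks, so roughly 32TB raw per vdev (about 29TiB) before ZFS overhead, or around 64TB across the first two vdevs.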

Important things like photos, documents etc are backed up to two other machines and crashplan.
The majority of the space will be filled up with Blu-ray and DVD rips (where the physical disk is the backup).

Therefore I have a few questions:

1) I currently have 1 M1015 and 1 Chenbro expander, but in order to connect all of the bays up I'll need at least one more expander. The Chenbro ones appear impossible to find at the moment, so I seem to have 2 options:

6 x M1015 HBAs
1 x M1015 HBA and 2 x HP SAS Expanders

I haven't seen a great deal on this forum with regards to using the HP expander and compatibility (aside from the suggestion that the Chenbro has an LSI chipset and is therefore better).
Can I use the HP expander or is it likely to lead to issues?
Are the 6 HBA's a better option?

2) Buying another server to back up the Blu-ray rips isn't going to be cost-effective, but ideally I really wouldn't want to rip them all over again. How safe does the above config look? In reality, would it benefit much from using RAIDZ3 instead (which would mess up my 5-vdev plan in a 50-drive case)? I recognise it's a 'how long is a piece of string' question, but I'm looking more for confidence levels.

I don't see it mentioned very often, but would it be wise to have multiple pools to limit the damage from a catastrophic failure, rather than a single pool with multiple vdevs?

3) How hard and fast is the 80%/90% full rule for vdevs before performance starts to suffer? It seems an awful lot of space to keep free (20TB in a 100TB machine). Does performance drop off a cliff or is it a gradual decline?
Reading through the forum I've seen mention of not getting close to 95%, because at that point the pool may not be able to allocate the space it needs even to delete data.
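
(I'm assuming I'd keep an eye on this from the shell with something like the below, and could always put a quota on the media dataset to force some headroom - the pool/dataset names and the quota figure are only examples:)

    zpool list -o name,size,allocated,free,capacity tank
    zfs set quota=45T tank/media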

4) Is the plan to start with 2 vdevs and then add new ones when the space is required the best way to go? I would imagine that following this path the server would ultimately end up with storage requirements closer to or above 200TB.

5) Is there a preferred method for backing up between FreeNAS and Windows machines? I've seen people mention BTSync, rsync and Robocopy on the forum, depending on the direction of the copy.
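
(For context, I'm picturing something along these lines, with made-up share names and paths:

    # push from the FreeNAS side to a box reachable over SSH/rsync
    rsync -avh --progress /mnt/tank/media/ backupbox:/backups/media/

    :: or pull the SMB share from the Windows side (cmd.exe)
    robocopy \\freenas\media D:\Backups\media /MIR /FFT /R:3 /W:10

but I'm happy to be told there's a better approach.)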

6) I believe FreeNAS is hardware-independent, so if I changed the motherboard (say it failed) this wouldn't be an issue?
Similarly, if I start with a 24 bay case, get things up and running, then move to a 50 bay case, is there anything I would need to do? Would the drives have to be moved in a specific order, i.e. end up in the same slots as before?

7) Outside of running regular SMART diagnostics (notified via email) and scrubs, is there any other routine maintenance that should be set up?
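
(By SMART diagnostics and scrubs I mean roughly the manual equivalents of the following, just scheduled through the GUI with email alerts - device and pool names are only examples:)

    smartctl -t long /dev/da0    # long SMART self-test
    smartctl -a /dev/da0         # read back the results and attributes
    zpool scrub tank             # periodic scrub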

8) I need to do more research on this, but I believe badblocks is the norm for testing drives before putting them into production. The first batch I can do before I start using the server, but once it's up and running, how do people then test more drives? Do you use another machine?

Thanks in advance.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I can't address all of your questions, but here are a few I can answer:

4. That's a good plan, and one of the features of ZFS--it's easy to expand your pool by adding new vdevs as needed.

5. I use Urbackup to back up Windows clients to my FreeNAS server (it's installed in a jail), and Crashplan to back up selected data from my FreeNAS server to Crashplan Central.

6. Correct. The only thing you may need to do is reconfigure your network settings, if the new motherboard uses a different driver. You'd want to keep track of where drives are (so you know which one(s) to replace in the event of failure), but FreeNAS figures it out on its own.
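
If it helps, the move itself is just an export and import - roughly the following, with the pool name as an example (in the GUI it's a detach followed by an auto-import):

    zpool export tank    # before the move / on the old board
    zpool import tank    # on the new board; ZFS finds the disks wherever they land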

7. Scheduled config database backups are a good idea, though 9.3 already keeps backups when the config changes. Otherwise, assuming you mean SMART self-tests by "SMART diagnostics", I can't think of anything obvious.
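
A rough sketch of the kind of cron job I mean (the database path is from 9.x, the destination is made up); you'd add it as a cron task in the GUI rather than editing crontab by hand:

    # copy the config database to the pool with a date stamp
    cp /data/freenas-v1.db /mnt/tank/backups/config-$(date +%Y%m%d).db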

8. As long as there's room in the FreeNAS server for the new disks, you can just install them and run badblocks while your server is running. Obviously you'd do that before adding them to the pool. You can even use tmux to run several instances of badblocks simultaneously, and detach that session and let it run in the background. That's what I did when I added the most recent 6 disks to my server.
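
For example, something like this (device names are placeholders, and -w is a destructive write test, so never point it at a disk that's already in the pool):

    # one detached tmux session per new disk
    tmux new-session -d -s bb_da20 'badblocks -ws -b 4096 /dev/da20'
    tmux new-session -d -s bb_da21 'badblocks -ws -b 4096 /dev/da21'

    # reattach to check progress, Ctrl-b d to detach again
    tmux attach -t bb_da20

On big drives you generally need -b 4096 or badblocks complains about the block count.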

Be sure when you flash your M1015 to IT mode that you use the P16 firmware.
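
The flash itself boils down to something like this from a DOS or EFI shell, with sas2flsh and the P16 IT firmware (2118it.bin) on a USB stick - note that an M1015 usually needs its IBM firmware wiped with megarec first, so follow one of the detailed crossflash guides rather than just this sketch:

    sas2flsh -o -e 6                           # erase the existing flash
    sas2flsh -o -f 2118it.bin -b mptsas2.rom   # write the P16 IT firmware (boot ROM optional)
    sas2flsh -listall                          # confirm the card now reports IT firmware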
 

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
Agree that it looks like a good plan overall and I just went through a very similar exercise.

4. This is exactly what I did, although after I added the 4th and 5th vdev, I created a new dataset and moved all my data from the original dataset over to it. I did this to ensure the data was spread across all spindles for max performance. Probably not needed, especially with mainly media.
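
For anyone doing the same, one way to make that copy is a snapshot plus zfs send/receive into a new dataset on the same pool (names below are just examples), then a rename once it finishes:

    zfs snapshot tank/media@rebalance
    zfs send tank/media@rebalance | zfs receive tank/media_new
    # check the copy, then retire the old dataset and rename the new one
    zfs destroy -r tank/media
    zfs rename tank/media_new tank/media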

8. I only did this during the initial installation. Once I had production data on the server, I moved my testing to another machine. The reason was this remark in the excellent [How to] hard drive burn in thread:

Now, before we can perform raw disk I/O, we need to enable the kernel geometry debug flags. This carries some inherent risk, and should probably not be done on a production system
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I will warn you that on every system I have seen with a zpool mounted and the geom debug flags set, the result was a kernel panic within a few seconds to a few minutes. So this is not something you should ever do on a production machine. ;)
 

LFletcher

Cadet
Joined
Mar 20, 2014
Messages
2
Thanks for all the responses so far.

I need to do some more reading - especially regarding the hard drive burn in.
As I should do it on a different machine (for future drives), is there any reason why I couldn't do it on a VM?

Point 1 of my original questions is still causing me issues. It appears I have found somewhere which still has the Chenbro 32-port SAS expanders, so does anyone have any input on whether to go HBA-only or use expanders?

As they would cost about the same, it's a choice between:

6 x M1015 (this would use all the PCIe slots on the board once the 10GbE NIC is added)
1 x M1015 + 2 x Chenbro 32-port SAS expanders (or potentially 2 x M1015 + 2 x Chenbro to give each expander its own HBA)

Opinions?
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
I'm a little confused about the move from one enclosure to another. If you have 2 enclosures and are using SAS, just run the second as a JBOD enclosure with a single/double SAS cable from one to the other. That way your current setup keeps working as it always has, and you just keep adding more vdevs to your pool. Read the manual on your current SAS backplane if it's not already a parallel backplane.

Also, on point 1: I like less wiring, so using a SAS expander backplane is the way to go. You also have to think about all the extra heat generated and power used by all those HBAs, whereas with expanders you only have a few cables going to your backplane to run your 50 drives. You just have to decide whether you want a cascaded setup, is all. From what I've read it's not really an issue on a FreeNAS system with a 1Gbit NIC. Maybe with a 10Gbit NIC you won't be able to saturate your link, but how fast do you need the data if it's all media? If you are using SAS2 you might have trouble saturating that link over a single connection, but aggregate traffic might manage it across multiple links. I don't have personal experience, I'm just regurgitating what I've been reading over the past week.
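
Rough numbers to put the bandwidth question in perspective: a 4-lane SAS2 wide port is 4 x 6Gb/s, call it a bit over 2GB/s usable, while 10GbE tops out around 1.25GB/s and 1GbE at roughly 112MB/s. Twenty spinning disks at ~150MB/s each is about 3GB/s of aggregate sequential throughput, so for a media box the network is almost always the limit before a single cascaded expander link is.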

And the advice in the manual and from various people on the forums here is to use small vdevs for better performance, so running 5-disk RAIDZ1 vdevs will be more responsive and easier to manage. From the looks of it you don't even want to think about running a mirrored array, since you are looking at 10-disk RAIDZ2 vdevs. Also, since you have backups the difference between the two may be moot, but I wonder how long a resilver will take on a 4TB drive in a 10-disk RAIDZ2? If performance and data integrity are important, then a bunch of 2-disk mirrored vdevs is the only way to go. They're also far easier to manage, and to keep everything simple you can add more drives 2 at a time instead of 10.
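
Rough numbers for the 50-bay case with 4TB drives: five 10-disk RAIDZ2 vdevs give 40 data disks, roughly 160TB raw, and survive two failures per vdev; twenty-five 2-way mirrors give 25 data disks, roughly 100TB raw, and survive only one failure per mirror, but they resilver much faster and grow two drives at a time.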
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Regarding OP question #1, I think the general sense is to use expanders. It's easier and there's less cabling. The reason you might not be getting responses is that there aren't a lot (any?) of folks using that case and/or expander, so it's unknown whether or not it will work easily.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
I assumed, because of the use of 5x HBAs and all that cabling, that he was talking about a parallel setup. I think I'm using the correct terminology, i.e. each drive has its own cable. Need the OP to confirm what he was actually intending.
 