Simple Video Backup Server


woods

Dabbler
Joined
Jul 27, 2018
Messages
45
Hi all,

I am in need of a simple backup server and looking for some guidance from experienced folks!

What's the server's main use?
Basically, to back up all of my video data efficiently, elegantly and safely.

Specific requirements?
The number one is redundancy. The second is scaling: the ability for the system to grow/expand organically over years of film making. It won't be running 24/7 - I'll only use it to back up my project folders containing all the media files at the end of the day. I can do this manually: just copy the folders that were used in the day's work and write them to the backup server. In some cases this means simply updating folders that already exist on the backup server.

Another requirement is reliable disk health feedback to avoid actual drive failure so that I can prevent rather than repair.

Speed?
Speed is not a priority, though it is certainly a welcome feature if possible. All of my footage is 4K/UHD 10Bit 444. However, redundancy and reliability are more important than speed. I can schedule my backups to execute during the night or whatever.

Version 1:
I would like to start with a system that sports 16 - 20 TB of actual space (parity disks not included). It should also probably have the ability to double, triple or even quadruple in size over the following 3-5 years.

Network:
My current workstation has a single 1GbE connection which is currently in use for my internet connection. When backing up to the future server, I could unplug the internet and plug in the server, although it would be nice if I didn't have to do that. Is it an option to connect the backup server to the router, or is that bad practice? Would I have to get a switch or maybe expand the workstation with a 10GbE card? Or maybe I could just build a physical Ethernet switch that I can flick when I want to back up. Suggestions are welcome!

Well, thank you for reading! Feel free to jump in and throw some thoughts/considerations/best-practice-related info at me!

Thanks!

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Another requirement is reliable disk health feedback to avoid actual drive failure so that I can prevent rather than repair.
The disk testing / monitoring that FreeNAS (or any system) does would require the disks to be on. The most effective test, the 'SMART Long' test, takes as much as 14 hours to run even on a 2TB drive, and the duration of the test increases with the size of the drive. Also, the most stressful time for a disk drive is start-up. If you want a drive to last a long time, the best situation is to have it run constantly. This allows all the components to reach the thermal equilibrium they are designed to operate at.
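If you ever want to kick these tests off from a script rather than the FreeNAS GUI scheduler, here is a minimal sketch using smartmontools. The device name /dev/ada0 and the status parsing are illustrative assumptions, not what FreeNAS does internally:
```python
# Minimal sketch: start a SMART long (extended) self-test and check on it.
# Assumes smartmontools is installed; /dev/ada0 is an example device name.
import subprocess

def start_long_test(device):
    # 'smartctl -t long' asks the drive itself to queue the extended self-test.
    subprocess.run(["smartctl", "-t", "long", device], check=True)

def test_status(device):
    # 'smartctl -a' includes a "Self-test execution status" line; the exact
    # wording varies by drive, so this parsing is approximate.
    out = subprocess.run(["smartctl", "-a", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Self-test execution status" in line:
            return line.strip()
    return "status line not found"

start_long_test("/dev/ada0")
print(test_status("/dev/ada0"))
```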
It won't be running 24/7
It would be better for the longevity of the equipment if it did run all the time. You can schedule the diagnostics to run late at night or early in the morning so they do not interfere with your use of the system.
I would like to start with a system that sports 16 - 20 TB of actual space (parity disks not included).
I would suggest a RAIDz2 array consisting of 6 disks of 8TB each. This would give you a usable space of 22TB. You can later expand this storage by adding an additional 6 drive vDev (virtual device) to the existing storage pool. This would expand the existing storage without requiring any change other than a few clicks in the GUI.
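As a rough sanity check on those capacity numbers, the back-of-the-envelope arithmetic looks like this (a sketch only; the 20% free-space margin is a rule of thumb and real pools lose a little more to metadata):
```python
# Rough RAIDz2 capacity arithmetic; illustrative, not exact ZFS accounting.
DISKS = 6
PARITY = 2      # RAIDz2 survives two failed disks per vDev
DISK_TB = 8     # marketed size in decimal terabytes (10**12 bytes)

raw_data_tb = (DISKS - PARITY) * DISK_TB      # 32 TB across the data disks
raw_data_tib = raw_data_tb * 10**12 / 2**40   # ~29.1 TiB as the OS reports it
practical_tib = raw_data_tib * 0.8            # keep ~20% free for pool health

print(f"data disks: {raw_data_tb} TB = {raw_data_tib:.1f} TiB")
print(f"practical usable space: ~{practical_tib:.0f} TiB")
# Adding a second identical 6-disk RAIDz2 vDev later doubles these figures.
```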
It should also probably have the ability to double, triple or even quadruple in size over the following 3-5 years.
The expansion of the pool is only limited by the size of the drives and the total number of drives. I have a single server at my work with 80 drives attached using SAS expansion enclosures. For the best function of the system, each vDev should have the same number of disks, but additional vDevs do not need to use the same model of disk as previous vDevs in the system.
Is it an option to connect the backup server to the router
If your router has multiple ports on the "inside" that are intended for computers to connect to, connecting the NAS to one of them would allow it to reach the internet to receive updates and it should also allow the NAS to communicate with your computer.
Would I have to get a switch or maybe expand the workstation with a 10GbE card?
You may wish to get a switch, but that is probably not needed. It is also possible to make a direct connection between your desktop and the NAS using 10Gb network interfaces.
Here is a video describing that situation:
https://www.youtube.com/watch?v=MgNpI6VAAhI&t
Or maybe I could just build a physical Ethernet switch that I can flick when I want to back up. Suggestions are welcome!
No.

Some links to learn about how it all works:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide (Rev 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

Proper Power Supply Sizing Guidance
https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/

Don't be afraid to be SAS-sy
https://forums.freenas.org/index.php?resources/don't-be-afraid-to-be-sas-sy.48/

I am in need of a simple backup server and looking for some guidance from experienced folks!
If you want specific build advice / list of hardware to buy, please post back asking for more help.
 

woods

Dabbler
Joined
Jul 27, 2018
Messages
45
Thank you, excellent advice!

If I let the system run 24/7 I suppose it is important to keep the drives' temperature stable. I suppose FreeNAS config can manage that?

Would it also be possible to host a website on the same system? Would save me some hosting costs for my website as well as maybe serve as an FTP server to send data to my clients? My router has a second 1GbE connection so I guess it might be possible to chain it all together.

I currently have 8x individual, identical 3TB Seagate drives with my data on them. They are very young and don't have many hours on them. Suppose I could re-purpose them and maybe buy another 4 of the same ones?

Feel free to suggest some hardware if you'd like? This is completely new territory for me.

I suppose I need a MOBO with sufficient lanes and one HBA card per vDev? I don't need caching and all the fancy media server stuff. All I care about is stability, redundancy and future scaling options.

The future SAS expansion enclosures, they require their own power supply? Or is it easier to run the whole system off of one source?

I'm thinking of housing the server in its own enclosure and using one enclosure per vDev? So I guess power/fan cables need to link between enclosures as well as the SAS multi cables?

Thanks for the help so far!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If I let the system run 24/7 I suppose it is important to keep the drives' temperature stable. I suppose FreeNAS config can manage that?
We have some users in parts of the world where there is no air conditioning and the heat the system is exposed to is in excess of the drives' rated operating temperature. This causes the drives to fail much more frequently than would normally be the case. FreeNAS can't regulate the environment. You need to keep the system in a place where the ambient temperature is around 23°C; if the air temperature is too high, the system can't cool itself. It will also need fans to draw the cool air over the drives.
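If you want to watch the numbers yourself, drive temperature is exposed through SMART. A minimal sketch, assuming smartmontools is installed; the device names and the table layout parsed here vary by drive and are illustrative:
```python
# Sketch: read current drive temperatures via smartctl.
# Device names are examples; on FreeBSD/FreeNAS drives typically show up
# as /dev/ada0, /dev/ada1, ... or /dev/da0, /dev/da1, ...
import subprocess

def drive_temp(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # SMART attribute 194 (Temperature_Celsius) holds the current
        # temperature in the RAW_VALUE column on most drives.
        if "Temperature_Celsius" in line:
            return int(line.split()[9])
    return None

for dev in ["/dev/ada0", "/dev/ada1"]:
    print(dev, drive_temp(dev), "degrees C")
```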
Would it also be possible to host a website on the same system?
Many people do. You would run it inside something called a "jail" to keep it separated from the main system.
Would save me some hosting costs for my website as well as maybe serve as an FTP server to send data to my clients?
Also possible, but you need to be careful about exposing these things to the internet. Ensure that all security precautions are taken so your system does not get hacked.
I currently have 8x individual, identical 3TB Seagate drives with my data on them. They are very young and don't have many hours on them. Suppose I could re-purpose them and maybe buy another 4 of the same ones?
When drives are added into a ZFS storage pool, any data already on the drive is deleted as the drive is repartitioned.
I suppose I need a MOBO with sufficient lanes and one HBA card per vDev?
One SAS controller is enough to run 256 hard drives. You just need the right combination of SAS expanders and cables.
The future SAS expansion enclosures, they require their own power supply?
Yes
Or is it easier to run the whole system off of one source?
The more drives, the bigger the power source. I have a server at work that has a 2000 watt power supply but it has 60 hard drives all in one enclosure.
I'm thinking of housing the server in its own enclosure and using one enclosure per vDev? So I guess power/fan cables need to link between enclosures as well as the SAS multi cables?
Cost per drive bay is significantly less if you use a single enclosure such as this:
https://www.ebay.com/itm/Supermicro...-Quad-Core-Xeon-72gb-DDR3-DVD-RW/132691255481
 

woods

Dabbler
Joined
Jul 27, 2018
Messages
45
I suppose I could install the server in the basement; it has stable, cold temperatures all year 'round. It's also close to where my internet enters the house, so it's a good excuse to finally pull a Cat6 cable through the house and install the router in the basement as well.

Is there a SAS controller you could recommend for my needs?

Single, big enclosure sounds logical!
 
Joined
Sep 13, 2014
Messages
149
Hi @woods, I just wanted to expand on a couple of things Chris said.

The disk testing / monitoring that FreeNAS (or any system) does would require the disks to be on.

I believe Chris is referring to what are called "scrubs". The way ZFS protects against bit rot, bit flips and other forms of corruption is by "scrubbing" the data: it recalculates a checksum from the data being scrubbed and compares it against the checksum that was stored when the data was originally written. It's vital that scrubs are performed, which is one reason why it's best practice to leave the server on.
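To illustrate the concept (this is only the idea, not ZFS's actual implementation; ZFS stores its own checksums, such as fletcher4 or SHA-256, in the block pointers):
```python
# Conceptual illustration of what a scrub does; NOT ZFS's actual code.
import hashlib

def checksum(data):
    return hashlib.sha256(data).hexdigest()

# At write time, a checksum is stored alongside the pointer to each block.
block = b"a block of video data"
stored = checksum(block)

# During a scrub, every block is re-read and its checksum recomputed.
def scrub_block(data, expected):
    if checksum(data) == expected:
        return "ok"
    # On a redundant pool (mirror/RAIDz2), ZFS would repair the block
    # from a good copy here rather than just flagging it.
    return "corruption detected"

print(scrub_block(block, stored))                     # ok
print(scrub_block(b"a block of video data", stored))  # corruption detected
```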

In my sig, there's a button labelled "Useful links for new users". Link 4 is a tutorial that explains (amongst other things) how to schedule Scrubs and SMART tests.

If I let the system run 24/7 I suppose it is important to keep the drives' temperature stable. I suppose FreeNAS config can manage that?

Just a heads up... most people (myself included until a few years ago) overestimate just how cool an HDD should be. Just over a decade ago, Google conducted a case study on HDD failure rates and their correlation with environmental variables. They found that failure rates for HDDs increase when their temperature is below 30°C and above 45°C. So it's best to have the drives a little warmer than you might think. For that reason, if you end up housing the server in your basement, make sure it's not too cold down there.


I currently have 8x individual, identical 3TB Seagate drives with my data on them. They are very young and don't have many hours on them. Suppose I could re-purpose them and maybe buy another 4 of the same ones?

HDD failure rates follow a bathtub curve, i.e. they're most likely to fail when they are new or old. So disks being new is no guarantee that they're reliable. That's why I recommend testing them before actually trusting your data to them (see link number 3). As for your choice of disks, buying new ones vs re-purposing your Seagates: as Chris points out, when you add the disks to a FreeNAS system, they would have to be wiped. So I'm guessing you'll end up buying new disks, which is just as well, because if the Seagates you have are 7200.14 Barracudas, then they have a terrible reputation for high failure rates.

Money permitting, I think your best course of action is to buy new disks and use the 3TB drives (assuming they're the infamous Barracudas) as a backup... which brings me to an important point. You really should back up your data if your livelihood depends on it.
 

woods

Dabbler
Joined
Jul 27, 2018
Messages
45
Thanks! So perhaps it's better not to install it in the basement... in winter it can drop a bit in temperature. On the other hand, spinning disks create heat, so if the fans are properly managed it could work? Better to have the room temperature a little too low than too high, right? In summer in the house it can get up to 25-30°C or more, although for no more than 2 months.
The basement in winter never really drops below 10°C, I'd say.

Ideal operating temp is said to be 37° - 46°C - so maybe it's safer to set it up in the house and make sure the case has optimal airflow and an ample number of fans?

The Seagate disks are 3TB BarraCuda ST3000DM008

So which disks are recommended? More disks, smaller size or fewer disks larger size? Some server optimized disks (power consumption/reliability?)

The most critical of data could be double/triple backed up, you're right.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The basement in winter never really drops below 10°C, I'd say.
First question about the basement: is it dry? Too much humidity can be bad. With regard to the temperature, 10°C should not be cold enough to be a problem. We keep the server room at work set at 20°C; the coldest drives are around 28°C, but the warmest can be as hot as 48°C, so you need to keep a watch on the reports. Also, if it isn't cold all the time, it is probably not a terrible situation.
Here are some good monitoring scripts that you can run to check up on the system:

Github repository for FreeNAS scripts, including disk burnin
https://forums.freenas.org/index.ph...for-freenas-scripts-including-disk-burnin.28/

In summer in the house it can get up to 25-30°C or more, although for no more than 2 months.
I keep my house down to about 26°C in the summer with some limited use of air conditioning. The outside temperatures, like today, are often around 40°C in summer and I can't tolerate that kind of temperature in the house; I think I would die. With my house at 26°C, though, the drives in the front of my server are between 31 and 33°C, but the ones in the back are running between 38 and 41°C. So I could probably let my house be a little hotter without the drives having any trouble, especially the ones in the front, but not too much hotter.

Drives running hot doesn't make them die immediately; the working theory is that if they get hot, it shortens their working lifespan, and the hotter they get, the shorter their life. Which circles back to the forum user I was talking to who lives in a desert region with no air conditioning. It is common for their drives to run between 65 and 70°C even with the server in the coolest room of the house. They are using Western Digital Red drives and having to replace them after 1 to 1.5 years.

The WD Red drives are rated to operate between 0-65°C but I am sure that at the top of that range you are going to see some premature failure.
https://www.wdc.com/content/dam/wdc/website/downloadable_assets/eng/spec_data_sheet/2879-800002.pdf
That is just one example. Other drives have different rated operating temperatures and your mileage may vary.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Ideal operating temp is said to be 37° - 46°C -
I don't really agree with that. Those companies are looking at the cost of replacing drives versus the cost of paying for air-conditioning to keep the data-center cool. They are considering cost on a scale that we will never be concerned with at home or even in a small business. Where I work, we keep our drives cooler than that because we want them to last longer, and it is not unusual for us to run servers for 6 to 10 years before they get replaced, many of them still on their original drives. I recently decommissioned a server that had run for 12 years and still had all the original drives. Hot drives do not last as long. To me, the basement is a better solution than letting the drives run hot.

So which disks are recommended? More disks, smaller size or fewer disks larger size? Some server optimized disks (power consumption/reliability?)
I suppose that depends on whether you are going to go out and buy new drives, and how much storage you want to start with. We have some tools to help with your research:

Disk Price/Performance Analysis Buying Information
https://forums.freenas.org/index.ph...e-performance-analysis-buying-information.62/

ZFS Drive Size and Cost Comparison Spreadsheet
https://forums.freenas.org/index.php?threads/zfs-drive-size-and-cost-comparison-spreadsheet.38092/
 

woods

Dabbler
Joined
Jul 27, 2018
Messages
45
Alright, thanks again Chris! I'll try to do a humidity reading in the basement and report back. I'll just add that 10 months of the year, the temperature in the house is 20-21°C or colder - humidity is pretty much a solid 60% all year 'round except for a couple of weeks in summer when it gets really hot and humid once in a while. As far as the basement is concerned, I don't think it is actually humid but I'll have to do those readings to really know for sure.

As far as hardware goes, is it worth looking at second-hand systems?

What do you think about this, as an example: https://www.benl.ebay.be/itm/HP-Pro...842324?hash=item1cb1ba0454:g:SMsAAOSwcVZbTcql

I could add an LSI HBA card?

I'll start with 6x 8TB drives and have the option of doubling it in the future.
 
Joined
Dec 29, 2014
Messages
1,135
I am less familiar with the G8's, but the description makes that unit sound like it doesn't have a SAS expander in it. That does seem odd for that kind of unit. 8GB of RAM is bare bones, so I would definitely suggest going to 16 or 32. The LSI card sounds good, but you might need to inspect it to see what kind of cabling you need. If I had to guess, it probably had an HP RAID card in it that got pulled.
 

woods

Dabbler
Joined
Jul 27, 2018
Messages
45
Adding another 8GB of RAM shouldn't be a problem! What about the CPU, good enough?

Quick noob question: how does a SAS expander fit into the system chain? If I had to guess: the hard drives connect to the expander and the expander connects to the HBA?

So if I take 6 x 8TB drives I could attach them directly to the HBA in phase one.
Phase two would be the addition of a second vDev: again 6 x 8TB drives, and also a SAS expansion card to chain it all to the HBA?

Do I have to take lane speeds, processing power (and stuff like that) into account when it pertains to these drive health/check scripts/scrubs etc? For simple data transport it doesn't really matter because the bottleneck is the 1GbE port... but for internal "scrubbing"/etc... things might be different?

Ideally I want a complete system scan - daily within an 8 hour window at night so that I have a health report every morning.
 
Joined
Dec 29, 2014
Messages
1,135
The CPU seems fine to me. The SAS expander is what allows you to connect multiple drives without running an individual cable to each drive. This may not be the most technical description, but you could think of it kind of like a hydra cable. You can do one as a separate card too, like this.
[image: standalone SAS expander card]


So you have 2 inputs and 4 outputs to have access to more drives without direct cables. A system like that with a lot of drive bays would normally (in my experience) have some kind of backplane in it that acts as a SAS expander. Something like this.
[image: drive backplane with a built-in SAS expander]


Regarding the rest, for a NAS (IMHO) a good drive controller, drives, DRAM, and NIC are the things that will get you the most bang for your buck. Yes, you can cripple it if you install it in an 8-bit Z80 system (Methuselah alert there), but something of the vintage in that system should be fine. As long as you have a good 1G NIC, that is a good starting place. If you get to where you are saturating that all the time, then you can look at options. Those are likely Broadcom NICs, which should work fine.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
As far as hardware goes, is it worth looking at second-hand systems?

What do you think about this, as an example: https://www.benl.ebay.be/itm/HP-Pro...842324?hash=item1cb1ba0454:g:SMsAAOSwcVZbTcql
That could be a good starter system. You could use the two drive bays in the rear for a mirrored pair of small drives to be the boot volume (connected to the SATA controller) and use the bays in the front for the storage disks. Later, if you need the capacity, you can always add an external disk chassis to go beyond 12 data drives.
I could add an LSI HBA card?
An excellent card for the disk controller would be this:
https://www.benl.ebay.be/itm/HP-H22...0-IT-Mode-for-ZFS-FreeNAS-unRAID/162862201664
What about the CPU, good enough?
I would say 2GHz is a little slow, but it isn't difficult to upgrade if you need to and this system gets you started without a huge initial cost.
Quick noob question: how does a SAS expander fit into the system chain? If I had to guess: the hard drives connect to the expander and the expander connects to the HBA?
Exactly.
Ideally I want a complete system scan - daily within an 8 hour window at night so that I have a health report every morning.
I do drive scans using the SMART tools to check drive health daily, but the scrub of the pool is only once every 3 weeks for me. I have found that with healthy drives, the pool doesn't tend to have errors, so I replace a drive at the first sign of a problem.
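If you want that morning report as a simple script, something along these lines could run nightly from cron. This is a sketch assuming smartmontools and example device names; the built-in FreeNAS email reporting covers the same ground:
```python
# Sketch: nightly SMART health summary to print, log or email.
import subprocess

DEVICES = ["/dev/ada0", "/dev/ada1", "/dev/ada2"]  # example device names

def health(device):
    # 'smartctl -H' prints the drive's overall self-assessment
    # ("PASSED" on a healthy drive).
    out = subprocess.run(["smartctl", "-H", device],
                         capture_output=True, text=True).stdout
    return "PASSED" if "PASSED" in out else "CHECK DRIVE"

report = "\n".join(f"{dev}: {health(dev)}" for dev in DEVICES)
print(report)  # redirect to a log file or pipe to mail
```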
 

woods

Dabbler
Joined
Jul 27, 2018
Messages
45
The case I linked to, does it have a SAS expander? When I google pics of the model I can see a PCB that looks much like a SAS expander, or at least some sort of connectivity interface for the bays in the front.

The reason you linked to that HP disk controller is because it will probably be a good match in terms of cabling? Maybe other forms of hardware compatibility with the system as well?

Oh right, once you're starting to get error messages, the drive isn't completely failing yet. It usually takes some time before it becomes unusable; probably a sufficient window to spot the error and replace the drive without having to scrub daily, like you say. Moreover, I trust that in your experience drive health errors show up before actual data errors. The scrub simply acts as a secondary safety precaution, as far as I understand it now.

Well, I think I just ought to buy this system and get it going. No better way to learn than by getting those hands dirty :)
I'm leaving on holiday in 10 days and it says shipping could take up to 11 days. They still seem to have 7 units left, so I might take the risk and wait until I get back. I'll try contacting them and see if we can work something out.

EDIT:
They could deliver it in 5 working days so I went ahead and just bought it.

The HP card: is that eBay seller some sort of affiliate of yours, Chris? If not, I'm going to try and find a seller in Europe to reduce tax/shipping costs a bit.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
No, I don't have a connection to the product. It is the type of controller that I am suggesting.

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Yes, except that it is probably still using the HP firmware. The one I pointed out has been flashed with the firmware from the company that made the controller chip, to change the card to a different mode of operation called 'Initiator Target' or IT mode. That is recommended for problem-free operation in FreeNAS and any system that uses ZFS, because ZFS needs direct access to the drives. It may be a little difficult to change the firmware yourself, but it is possible to do, and these sellers have done it for you and are charging a little extra for that effort. Like this one:
https://www.benl.ebay.be/itm/HBA-HP...TA-IT-Mode-FreeNas-Avago-9205-8i/132268662579
The HP cards are identical to the LSI cards, just with the HP firmware, which would need to be erased and the LSI firmware programmed instead.

Here is a guide if you want to read about the process:

Detailed newcomers' guide to crossflashing LSI 9211 HBA and variants
https://forums.freenas.org/index.ph...o-crossflashing-lsi-9211-hba-and-variants.54/
 

woods

Dabbler
Joined
Jul 27, 2018
Messages
45
Aha okay, makes sense! I'm ordering the one you've linked. Thanks!
 

woods

Dabbler
Joined
Jul 27, 2018
Messages
45
UPDATE:

Server and card arrived.

The RAID on my workstation is failing, I think. Over the past few days/weeks it's been extremely hot, and every day I've noticed increasing performance drops. Today R/W speeds are unworkable. Surely those are signs of failure, right?
 