BUILD New home NAS

I am a first-time poster but a long-time reader of the forum. I have been using redundant storage for my own photos and videos, with separate backups, since the beginning. I started out with a Drobo, attracted by the redundancy and the ability to use different-sized disks in an automated system, connected through FireWire to a Q6600 Hackintosh and later to real Macs. My experience with FreeNAS began in 2009-2010, when the data on the Drobo had become strangely corrupted. Further reading led me to distrust the Drobo, so I sold it and repurposed the Hackintosh as a FreeNAS file server, since I had bought an iMac as a workstation.

Specs of the Hackingtosh server:
Gigabyte GA-EP45-DS3L
Q6600
8GB DDR2
5x Samsung Spinpoint F1 1TB
Antec 550W PSU

I use the server relatively rarely, so most of the time it is off; I only turn it on when I need access to those particular files. It contains the photos and videos I've made (irreplaceable), my archive of files gathered over the years that I rarely use anymore but don't want to delete (irreplaceable), downloaded films (replaceable) and my download archive of applications and ISOs (hard to replace, but no disaster if lost).

With the exception of a USB boot drive failure on two separate occasions, the server had been happily working with my pool (named 'Atlas') in a RAID-Z2 configuration until at some point I began getting storage warnings. I can't recall exactly which, but they were yellow warnings, and they led me to buy new drives since I wanted to add more capacity anyway. I bought six new 2TB WD Reds to upgrade. Because I used only 5 of the 6 SATA ports on the motherboard, I could elegantly replace the Samsung disks one by one without the pool ever being in a degraded state. The sixth WD drive is a cold spare. I've kept the Samsung disks and have been testing them extensively over the last couple of days; I conclude they are healthy disks.
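For reference, this is roughly what that one-by-one swap looks like from the console; the gptid labels are placeholders, and on FreeNAS the same thing is normally driven from the Volume Status screen in the GUI.

Code:
# Old and new disk are attached at the same time, so the pool never runs degraded
zpool status Atlas                                      # note the label of the Samsung being swapped out
zpool replace Atlas gptid/old-samsung gptid/new-wd-red  # hypothetical labels
zpool status Atlas                                      # wait for the resilver to finish, then repeat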

With the RAID-Z2 setup and the SMART/scrub/email settings Cyberjock advised in another post, the system continued to run smoothly and quietly. I make manual backups to a 4TB USB disk that is mostly at my parents' house. A couple of weeks ago I began hearing repeated clicking from all the drives simultaneously, at 2-3 second intervals. Extensive SMART testing showed no problems at all. But since I have now had recurring problems with this motherboard/CPU/memory combination (1. the Drobo corrupting, 2. the Hackintosh installation becoming corrupt, 3. the yellow storage warnings, 4. two failed USB boot drives, 5. now the clicking), I decided to build a new server and do it right. I have been reading up a lot on the forum, including the presentations and posts of Cyberjock.
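For anyone reading along: I won't claim these are Cyberjock's exact settings, but the GUI tasks boil down to roughly these console equivalents, with intervals that are only examples:

Code:
smartctl -t short /dev/ada0   # short SMART self-test, a few times a week, for every disk
smartctl -t long /dev/ada0    # long SMART self-test, e.g. every other week
smartctl -a /dev/ada0         # review the results and attributes afterwards
zpool scrub Atlas             # scrub the pool, e.g. every two weeks
# plus the email alert settings in the GUI so that failures actually reach your inbox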

The first requirement was ECC, so I started looking for an affordable motherboard/CPU/memory combination. Buying new turned out to be way too expensive for me at the moment, so I got a mix of second-hand and new hardware. Through a local tech forum I found a Xeon server platform that had been running well and was decommissioned due to age, and I was able to get a great deal.

My current hardware:
Supermicro X7DBE
Dual Xeon 5150
32GB Hynix FB-DIMM ECC DDR2-667 (on the approved memory list)
Areca ARC-1210 (4-port SATA in JBOD mode)
Antec 550W PSU
Fractal Design case with adequate cooling (drive temps < 30 °C)
Two LaCie 16GB boot drives (mirrored)
Power usage: 66W while writing 200 MB/s from the network at 40% CPU load

So far so good, right? I will be adding a UPS when the system goes live. Currently I am testing the server with different operating systems just for laughs. Within a Windows 2012 R2 environment I have been testing the five Samsung disks with a handful of different hard disk testing tools to see if they are healthy enough to be deployed. Those results lead me to conclude the disks are in good shape, which also makes me wonder why I was getting those warnings back then; then again, I wasn't as well versed in FreeNAS as I am now.

So with the server-grade platform I have a good number of disks available to build a new pool. For now my data resides on the Atlas pool, with a full backup on a 4TB external USB drive. The plan is to use as many of my available drives as possible.

[Attached image RAIDtables.png: table comparing the pool layout options discussed below]


I plan to use two different vdevs to be able to combine the different disk sizes (a rough sketch follows after the options below). In option 1, four of the five Samsung disks will be connected to the ARC-1210 and the last Samsung disk to the remaining SATA port on the motherboard. The life expectancy of the Samsung drives is shorter than that of the WD drives.

Option 1 has the advantage of using all 10 SATA ports and having spare disks in case of a failure.

Option 2 maximizes the available storage. The WD Reds are new and reliable drives, and I expect to have a spare 2TB on hand in the next couple of months. A RAID-Z2 configuration for 4 drives has a lower risk of data loss in the case of two drives failing.

I am leaning towards option 2 to be more future-proof, with 9.3 TB of available storage, assuming I buy a spare drive in the next couple of months.
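To illustrate what combining two vdevs of different disk sizes looks like, here is a rough sketch with made-up device names and counts (the real layouts are in the table above). Capacity is the sum of the vdevs, but losing any single vdev loses the whole pool:

Code:
# Hypothetical example only: one pool from a RAID-Z2 vdev of 2TB WD Reds plus a
# RAID-Z vdev of 1TB Samsungs. Mixing redundancy levels makes zpool ask for -f.
zpool create -f tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz da6 da7 da8 da9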

But this is not the end of my thought process, because I am also thinking about backups. At the moment my backups are manual and intermittent. This is not a big deal because the pool doesn't change that often. However, I have a hybrid manual/offsite backup system: the external 4TB USB hard drive is at my home and I take it to my parents' every once in a while, where it stays for some time before I pick it up to update the backup. I am considering another option that gives me the same amount of storage I have now but allows me to add a backup server in between (replication sketched after the list below):

Server 1: Dual Xeon setup with five 2TB WD Reds in RAID-Z2, 5.5 TB usable
Server 2: Q6600 setup with five 1TB drives in RAID-Z2, 3.7 TB usable (or some kind of RAID 6 with another OS, since it doesn't meet the requirements for FreeNAS)
Offsite backup: external 4TB drive at my parents', filled with data from Server 1
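If I go that route, the Server 1 to Server 2 step would look roughly like this (pool, dataset and host names are made up; FreeNAS can also do this with a periodic snapshot task plus a replication task in the GUI):

Code:
# On Server 1: take a recursive snapshot and push it to Server 2 over SSH.
# The first run is a full send; later runs would use an incremental send (-i).
zfs snapshot -r tank@backup-today
zfs send -R tank@backup-today | ssh root@server2 zfs receive -F backup/tank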

This setup has the downside of using an unreliable, non-ECC system as a backup server, although having some kind of backup may be better than having no backup at all. I don't expect to run out of space soon given how slowly the data grows; right now about 3-3.5TB is in use.

Offsite backup to CrashPlan with an intermediary computer is also a possibility. Because the server is not in use, say, 80-90% of the time, I choose to simply turn it off instead of using it as a glorified space heater. On average I turn the server on about 3 times a week for a couple of hours.

Another way of doing offsite backups is to place a low-power Atom netbook at my parents', plug in the 4TB external hard drive, configure it with Linux and let it do nightly backups from Server 1.
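Something like this on the netbook would do it (host name, paths and schedule are assumptions, and it relies on SSH key authentication being set up):

Code:
# /etc/cron.d/nas-backup -- pull the data from Server 1 every night at 03:00
0 3 * * * root rsync -a --delete root@server1:/mnt/tank/ /mnt/usb4tb/tank/ >> /var/log/nas-backup.log 2>&1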

So what do you guys think about all this? Should I build one massive storage server and just buy a second 4TB external hard drive to keep up the intermittent manual and semi-offsite solution? Or do I set up Server 1 as the optimally configured system with a cold spare on hand, and have a possibly-unreliable Server 2 with daily backups plus a dedicated off-site backup on a 4TB external USB drive hanging off a low-power Atom laptop at my parents' house? I have to make do with the equipment I have on hand, and I'm trying not to end up in the hall of infamy ;)
 

Mirfster

Doesn't know what he's talking about
I am not a fan of using hot spares myself. The only reason for me to use a hot spare would be if the system were located remotely/off-site, where getting to it physically would be an issue. Others may differ on this, but that is my take. So my vote is for Option #2.

Since you have an offsite backup routine, I would not think it would hurt to use Server 2 for now (even though it does not have ECC). I would just recommend that you use Server 1 as the source for the off-site backup, since this helps prevent contamination of the data (which could happen if you copied from Server 2 to the external drive instead).

Keep in mind that Server 2 and the external 4TB drive are smaller than what Server 1 is capable of, so eventually that will need to be addressed.
 

cyberjock

Inactive Account
I don't recommend the Areca. I had one (1260ML-24 to be exact) and I almost lost my pool due to the Areca. I'd swap it for an M1015.

Keep in mind that because of the age of that hardware, you're going to have bottlenecks with your FSB. Additionally, FB-DIMMs use a CRAPTON of power, and therefore give off a CRAPTON of heat. If you have the hardware already, great. But I wouldn't pay for that stuff. There's a reason why it is dirt cheap on eBay. Anyone that keeps up with this stuff knows that you don't need a 500W heater in your server room. ;)
 
Just to follow up for future reference: I ended up assembling the server, plugging in the five 2TB WD drives with the existing pool and boot drive, and chose to keep the sixth 2TB drive on the shelf as a replacement. After running everything for a short while I discovered two issues:
1) I originally started with 512-byte-sector 1TB Samsung drives and had upgraded the pool by adding a 2TB drive and replacing the drives one by one. I ended up with a volume using 512-byte sectors on drives with 4K sectors. Not ideal (a quick way to check this is sketched right after this list).
2) The Xeons kept overheating, something I had never experienced before on any of my builds. I had used the stock heatsinks but replaced the two noisy stock fans with a big 140 mm fan blowing onto the heatsinks and FB-DIMMs. Although reapplying thermal compound and reseating the heatsinks gave me some improvement, I had to put the loud stock Intel fans back on the heatsinks to stop the loud overheating alarm. I would prefer some big heatsink on those chips (like the Thermalright 120 Extreme I had on my Q6600, which worked like a charm with the fan running silently at 5V), but the options for LGA771 are very limited. Dare I say I might even consider two AIO liquid coolers?
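For reference, this is how the sector-size mismatch from point 1 shows up on the console (Atlas is my pool; on FreeNAS the zpool.cache file usually lives at /data/zfs/zpool.cache):

Code:
# ashift=9 means the pool was created for 512-byte sectors, ashift=12 for 4K sectors
zdb -U /data/zfs/zpool.cache -C Atlas | grep ashift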

So, then I did a "hybrid" transfer. Note: this is a "do as I say, not as I do" situation. I am fully aware the following isn't best practice and is risky at best. My fallback was a full, separate backup on the 4TB USB drive.

My SATA availability:
6 SATA ports on the motherboard and 4 SATA ports on the Areca (JBOD mode; it doesn't pass through SMART info)

1. Starting point: five 2TB WD Reds on the motherboard SATA ports with a RAID-Z2 volume
2. Intermediate point: five 2TB WD Reds + one 1TB Samsung on the motherboard SATA ports and four 1TB Samsungs on the Areca
I created a RAID-Z volume on the five Samsung drives and used rsync to copy my data from the WD array to the Samsung array. After checksumming various key files scattered over the pool, I continued.
3. Intermediate point 2: six 2TB WD Reds on the motherboard SATA ports and four 1TB Samsungs on the Areca
I removed the Samsung disk from the motherboard SATA. This left the Samsung pool degraded but gave me the SATA port I needed for the six-drive WD array I wanted to end up with. I created a fresh RAID-Z2 with all six WD disks (with 4K sectors) and transferred the files with rsync from the Samsung pool to the WD pool (roughly as sketched below).
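The copies themselves were nothing fancy; roughly the following, with placeholder pool paths and an example file for the spot check:

Code:
# Pool-to-pool copy on the server itself, preserving permissions, times and hidden files
rsync -avH /mnt/Atlas/ /mnt/samsung-tmp/
# Spot-check a handful of key files against the originals before destroying anything
sha256 /mnt/Atlas/photos/2015/IMG_0001.CR2 /mnt/samsung-tmp/photos/2015/IMG_0001.CR2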

I chose this route for two reasons:
1) This way I had two full copies of my data: one on the Samsung array to transfer to the new WD array, and a separate backup. Had I chosen to just use the USB backup, destroy the 5x 2TB WD array and recreate it with the sixth drive, I would have had only one copy of my data during that time. And even though the second copy on the Samsung array was technically "at risk", the chance of one of the Samsung drives dying on me while the pool was degraded during the 6-hour process was so slim I just went with it.
2) Transferring the data internally with rsync through the console, from one pool to the other, was faster than transferring it over gigabit LAN to and from the USB drive, and didn't give me any permissions headaches with e.g. hidden files.

After that I removed the Samsung drives and the Areca RAID card, and everything is hunky dory :)

And to reiterate: I know that intentionally putting your pool in a degraded state is playing with fire. It was a calculated risk and I would not have done this had I not made the backup to the 4TB USB drive. In a way, having to work with the equipment I had on hand and doing it the way that I did worked out for me, because I had two copies of my data instead of one and the transfer times were much faster, so the whole process took only 6 hours. And even though the Areca card did not pass through SMART data (making me wonder what else it didn't pass through), it behaved itself. I checksummed the data every step of the way, comparing the files to the originals, and found no data transfer issues. But while it worked for this short transfer trick, I would not consider deploying it permanently (see cyberjock's post above).

I now have a main NAS with 12TB of raw storage in a six-drive RAID-Z2, with plenty of room to upgrade in the future (lots of drive bays available, HBA card expandability, possibility to upgrade to two quad-core Xeons). I may do a follow-up on the backup server, since I am also interested in building a render machine to transcode video footage. The Q6600 coupled with an NVIDIA graphics card (CUDA support in Adobe Media Encoder) could perform nicely with the added task of being a simple backup server.
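(For anyone checking the numbers: 6 drives × 2 TB = 12 TB raw; RAID-Z2 spends two drives' worth on parity, so usable space is roughly (6 − 2) × 2 TB = 8 TB, or about 7.3 TiB before metadata overhead and free-space headroom.)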
 

TheKiwi

Explorer
That's a pretty retro system, wow!

I think something might be wrong with your power meter though. You claim it reads 66W at 40% load, which sounds impossible on that hardware. Each of those DDR2 FB-DIMMs takes 5-8W of power, and you are using 8 of them. Then there's the multitude of power-hungry old hardware on the board, like the 30W memory controller and the 12W I/O controller (these are peak power figures, but they'll be close enough when you are pushing both of them, which you are). On top of that you have the HDDs, PSU inefficiencies, fans, the SATA card, and the old 65nm CPUs!

Basically, that system should be using a hell of a lot more than 66W, even at idle.
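Rough back-of-envelope with those figures: 8 FB-DIMMs at 5-8W is already 40-64W, add roughly 30W for the memory controller and 12W for the I/O controller and you're past 66W before you even count the two 65W-TDP CPUs, the drives, the SATA card, the fans and the PSU losses.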
 

Funny you mention that. During the initial testing I had a surprisingly low power readout. However, after reconnecting the server in my office, the readings were around 200W (give or take, depending on load). The power meter was indeed giving me bad readings. So yes, it is more than I had taken into consideration when building the machine; actually, the power consumption has doubled, ouch... But in the end, given that the machine isn't powered 24/7, I can live with the added power draw if that is the price I pay for having this system. I figure it will serve me for the next 2-3 years before I will have to (or want to) retire this machine and move to 8TB drives in a state-of-the-art machine with decent power consumption.
 