trsupernothing
I have the worst type of personality when it comes to anything involving technology: I'm obsessed with making sure I'm doing things the right way, or at least the best way. For years now I've been quite happy with a Synology RAID that I purchased and set up to house my data; I archive pretty much every file I've ever needed or used. My Synology is a 5-bay unit with 3TB drives in RAID 5. Synology sells breakout expansion boxes, and I had always anticipated adding drives that way, as I'm already out of space.
Recently I decided to build a second RAID to act as an off-site backup, since I know a RAID is not a backup. My plan is to set up a point-to-point wireless link between my home and a relative's home roughly 1,000 feet away, place the second server at the relative's house, and run an rsync backup over the link. It was the idea of building this second RAID that got me weighing whether I wanted to spend the money on another Synology for the purpose. As it stands, I would have to spend an additional $1,000 on breakout boxes for my existing Synology (without drives), and the backup server would be another $2,000 without drives, just to ensure I had expandability in both the main RAID and the backup RAID.
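If I understand rsync correctly, the nightly job would be something along these lines (a sketch only; the pool path and hostname are placeholders, not my real setup):

```
# Hypothetical one-way push of the main pool to the off-site box over SSH.
# "tank" and "backup-host" are placeholder names.
rsync -avh --delete -e ssh /mnt/tank/ backup-host:/mnt/tank/
```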
So as I started investigating a DIY NAS, I stumbled into a world I previously didn't even realize existed: countless software solutions for a DIY NAS/RAID system (FlexRAID, unRAID, SnapRAID, FreeNAS, NAS4Free, ZFSguru, etc.).
The bulk of my data is media. Reading through each of the packages listed above, I found myself weighing the pros and cons of each. Every time I read something about ZFS and FreeNAS, I got confused by terminology I had never heard before (vdev, zpool, and so on). This pushed me toward software I perceived as easier, mostly FlexRAID.
As I was saying earlier about my personality: another aspect of it is that I MUST have my mind made up about a pending project. Every detail has to be settled in my head and bookmarked for purchase. Choosing the software for my RAID caused me borderline mental anguish for four days; I couldn't read enough about whatever I was contemplating.
This is where I got into trouble. I've been building computers for over ten years now, and I had never heard of bit rot. When I stumbled across it, I was terrified. Here I was thinking my data was safe and sound in its RAID, that the only things that could happen to it were a hard drive dying, theft, or fire, and that my backup plan would make it bulletproof. The second I discovered the mere existence of bit rot, my decision was made: ZFS and FreeNAS. I'm aware bit rot may indeed be rare, but I have so many files, terabytes' worth, family photos included. Ten years from now I do not want to open a file and find it corrupt, and corrupt on the backup too, simply because I hadn't accessed it in a while.
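From what I've read, this is exactly what ZFS scrubs are for: every block carries a checksum, and a periodic scrub re-reads the whole pool and repairs anything that fails verification from the redundant copies. Something like this, assuming a pool named "tank":

```
# Re-read every block in the pool and verify checksums, repairing
# silent corruption from parity/mirrors where possible.
zpool scrub tank
# Then check for any checksum errors it found or fixed.
zpool status -v tank
```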
So here I am, happy that I've decided on ZFS and FreeNAS. The next thing for me to agonize over... hardware.
My plan is a 24-drive system. The Norco 4224 case seems like the easy "go-to," but I'm trying to convince myself to spend on quality and get a Supermicro 24-bay case.
4TB drives all around.
IBM M1015 cards are the obvious choice.
A server motherboard with 32GB of ECC RAM.
The two things I'm unsure of (and the reason I'm writing this post) are how many SAS cards/motherboard connections I need for the drives, and what RAIDZ(?) configuration to choose.
I like consistency, and the thought of popping three IBM M1015 cards into one board to handle all the drives seemed like what would calm my OCD best: each card drives eight disks through its two SFF-8087 connectors, so three cards cover all 24 bays. I figured I'd find a motherboard with x16, x8, and x4 PCIe slots.
Then I found mixed reviews on the M1015 in an x4 slot; some say it works, others say it doesn't.
So I tried to find a motherboard with at least three x8 PCIe slots. I found a few, but they were all dual-CPU boards.
Then I found this board....
http://www.newegg.com/Product/Product.aspx?Item=N82E16813151247&Tpk=S5512WGM2NR
It has an x16 and an x8 slot, which will support two IBM M1015s, as well as an onboard LSI 2008 SAS controller, so between the two cards and the onboard controller I'd have the 24 ports (3 × 8) I need.
I think it would work to use two M1015s and then the onboard LSI to accomplish what I want. Input here is highly appreciated.
Now, with regard to the RAIDZ configuration.
Four 6-drive RAIDZ2 vdevs? Is that likely the best bet for the redundancy/storage trade-off? With 4TB drives that works out to 4 × (6 − 2) × 4TB = 64TB of usable space before filesystem overhead, with any two drives per vdev able to fail. (A sketch of what creating that layout might look like follows the wiki notes below.)
I've seen tips like this from the wiki...
- Start a single-parity RAIDZ (raidz) configuration at 3 disks (2+1)
- Start a double-parity RAIDZ (raidz2) configuration at 5 disks (3+2)
- Start a triple-parity RAIDZ (raidz3) configuration at 8 disks (5+3)
- (N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3) and N equal to 2, 4, or 8
And these stride calculations (the point, as I understand it, being that the 128KiB default record size should divide evenly across the data disks, i.e. N should be a power of two):
- 3-disk RAID-Z = 128KiB / 2 = 64KiB = good
- 4-disk RAID-Z = 128KiB / 3 = ~43KiB = BAD!
- 5-disk RAID-Z = 128KiB / 4 = 32KiB = good
- 9-disk RAID-Z = 128KiB / 8 = 16KiB = good
- 4-disk RAID-Z2 = 128KiB / 2 = 64KiB = good
- 5-disk RAID-Z2 = 128KiB / 3 = ~43KiB = BAD!
- 6-disk RAID-Z2 = 128KiB / 4 = 32KiB = good
- 10-disk RAID-Z2 = 128KiB / 8 = 16KiB = good
And I've seen it suggested that a single RAIDZ vdev of any type should not exceed 9 drives.
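If I've understood the layout right, building that four-vdev pool from the command line would look roughly like this (device names da0-da23 are placeholders, and I assume the FreeNAS GUI would do the equivalent):

```
# Sketch of a 24-bay pool as four 6-disk raidz2 vdevs in one pool.
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  \
  raidz2 da6  da7  da8  da9  da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 \
  raidz2 da18 da19 da20 da21 da22 da23
```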
Any tips here would be greatly appreciated.
Thanks for the help guys.