sonnyinaz6 · Dabbler · Joined Aug 28, 2015 · 16 messages
Yes, it's a killer, really overkill server at this point in time. The bump I've seen in my electric bill after firing it up the first time and letting it run was about $22.00, but it's worth it to me. I'll be updating it over time, though. I already have a 16-bay SATA storage shelf for it that I'll eventually be adding. This is because I can buy 3.5" SATA drives brand new for $65.00; the SAS drives are minimum triple that. Eventually, though, they'll come down in price as sizes increase, and I'll migrate over to those. Eventually I'm going to want about 16TB, and I fully plan to have that at about 75% utilized, so the memory, yeah, gonna need it.
Here's where we differ momentarily... After working in data centers for over 10 years now, I can state from experience, without any doubt, none, that I have replaced over 1,000 hard drives in servers running RAID 1, RAID 10, RAID 5, and RAID 6, drives with a failed or predictive-failure status that got replaced anywhere from the same day to 8 months later (I've had 2 companies that didn't replace RAID 1 drives that ran a year). Not 1 system, that's 0, ever failed its array during this 10-year period. Not 1. So with all the expertise I've seen, all the articles I've read about how important it is to have minimum 2-drive failure support (which is why RAID 6 came into play; most data centers rely on heavily protected DASD now anyway), I can refute every word of it. I've worked in 4 massive data centers now without a fail, and no, after 10 years and over 1,000 failed drives replaced, you can't say "you've been lucky." I don't care what anyone or any company says, I have my facts, and that's what I rely on. Anyone can debate this point with me till they're blue in the face; I have real-world experience and believe in it.
However, I do agree with having more than 1 drive of redundancy, and eventually I will. For that much data I'll eventually have 2 hot spares with 2-drive-failure capability, and I think ZFS has room for 3 (raidz3). For the moment though, even running a Z1 I've managed to lose damn near 2TB of the 6TB I have, which is about 1TB more than I'd lose with a hardware RAID 5. Can't afford to give up that much space at the moment.
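For anyone wanting to sanity-check the space math above, here's a minimal sketch of the nominal parity cost for these layouts. Assumptions: it only counts whole parity drives and hot spares, so it won't reproduce the extra ~1TB of loss mentioned above, since real ZFS pools also lose space to metadata, allocation padding, and the free-space headroom the filesystem wants; the drive counts in the example are made up for illustration.

```python
def usable_tb(num_drives: int, drive_tb: float, parity_drives: int,
              hot_spares: int = 0) -> float:
    """Nominal usable space after subtracting parity drives and hot spares.

    Ignores ZFS metadata/padding overhead, so real raidz pools show less.
    """
    data_drives = num_drives - parity_drives - hot_spares
    return data_drives * drive_tb

# Hypothetical example: six 1 TB drives in the shelf.
print(usable_tb(6, 1.0, parity_drives=1))                # RAID 5 / raidz1 -> 5.0
print(usable_tb(6, 1.0, parity_drives=2))                # RAID 6 / raidz2 -> 4.0
print(usable_tb(6, 1.0, parity_drives=3))                # raidz3          -> 3.0
print(usable_tb(6, 1.0, parity_drives=2, hot_spares=2))  # raidz2 + spares -> 2.0
```

The gap between this nominal number and what a pool actually reports is exactly the extra loss described above.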