Vrakfall (Dabbler · Joined Mar 2, 2014 · 42 messages)
I saw nothing really going that way, it's just my feeling after all the disks I've seen failing, consumer grade only. Nothing precise, ofc, and fully opinionated. I guess I'm looking for stats that prove me right or wrong, like the ones scrappy gave. If you saw anything on the internet stating this, we would be interested in it. I'm sure Seagate may have the popular vote for Enterprise class drives, but it's my gut feeling they do not for Consumer class. One other warning is we generally stay away from drives using Shingled Magnetic Recording (SMR). It's not that we feel it's bad, but mostly that it isn't very proven yet.
Thank you for these stats, it's very interesting. I'd also be interested in stats that cover all brands in a more balanced way. :) There are areas where Seagate reliability has improved recently. Take Backblaze's hard drive failure report for Q1 2017, for example. With the exception of one 8TB model, most Seagate drives 4TB and larger are proving themselves to be reliable. On a side note: those 4TB ST4000DM000 desktop drives are usually very cheap to buy online. Even though they aren't the most reliable 4TB drive on the market, if you have good storage redundancy and a proper backup, I see no serious reason not to buy them.
Alright, thank you. Then I don't know how the SMART tests were activated so often on my drives. :/ The SMART service simply checks the SMART attributes that every modern hard drive maintains internally. SMART tests are a different beast and must be configured manually.
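For reference, with smartmontools the difference looks roughly like this (the device name is just an example):

```
# Read the attribute table that the SMART service polls automatically:
smartctl -A /dev/ada0

# Start an actual self-test by hand; long tests only run when you schedule
# them (in the GUI or via smartd/cron), they never trigger on their own:
smartctl -t long /dev/ada0

# Check the results of the most recent self-tests once they finish:
smartctl -l selftest /dev/ada0
```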
Ouch. :s Where is that thread btw? I couldn't find it. Also, couldn't it just fall under the "possible failure rate"? Does that mean I shouldn't buy one from that brand? FWIW, anecdotally, I just had one of my 8 new IronWolfs spontaneously die after about 3 weeks of use. It's currently being replaced by the retailer and I'll start an IronWolf thread when the process is complete.
My 8 Seagate NAS-HD drives (which afaict are the same thing) are about 12 months old now and haven't had an issue.
Don't recall ever having an issue with dozens of reds.
If all things are roughly equal I would take a 7200 RPM over 5400/5900 any day of the week. If enterprise was 10% - 20% more I would buy Enterprise without a second thought.
Why use 7200 RPM?
Only use NAS drives if you care about reliability and recoverability. The cheap drives do not support TLER, and in a RAIDZ1 of 10TB drives the risk of losing the vdev is higher: a second drive failure during the rebuild and you are screwed. I have seen it happen more than once and I would sure hate to hear of it happening again.
- They usually come with a 5-year warranty.
- They are made with higher-quality parts and design.
- Because of the above you will generally see better longevity and fewer problems.
- Less downtime: typically rated at 2 million hours MTBF vs. 1 million (or less).
- *SOME* 7200 RPM drives use less electricity than the "favored" drives you hear people talk about here. See my spreadsheet to see which ones.
- Generally speaking, they use slightly more electricity, around $2-10 per drive per year. A 5-drive RAIDZ2 might cost you only $20-30 more per year in electricity.
- 20-30% faster.
- Faster scrubs
- Faster replacement of a new drive into a RAIDZx vdev (see the rough timing sketch after this list)
- If you get Enterprise drives they are designed to last and have fewer problems vs. commercial or home brands.
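To put rough numbers on the scrub/resilver points above, here's a back-of-the-envelope sketch from the shell. The throughput figures are just my assumptions for illustration; real resilvers are usually limited by pool layout, fragmentation and load rather than raw disk speed.

```
# Lower bound on the time needed to rewrite one full 10 TB drive sequentially.
size_mb=10000000          # ~10 TB expressed in MB
for rate in 180 230; do   # assumed sustained MB/s: 5400/5900 RPM vs 7200 RPM
  echo "${rate} MB/s -> at least $(( size_mb / rate / 3600 )) hours"
done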
I don't know who to trust when it comes to RPMs now. xD I guess both are OK depending on the capacity of the drive and its $/TB? You make some good comments; however, I don't agree with all of your statements. But everyone is allowed to have an opinion.
Interesting, I think I'll try that after my holidays. Do you have any link to a howto with commands and stuff like that? Do you use commands like `srm` or more low-level ones? Some thoughts about disk drive "recovery" when you see errors occur.
If the errors are from bad blocks, they generally tend to come in groups/clusters based on physical proximity on the disk platter.
What does that mean? The disk platter has a magnetic coating applied to it. When an area fails, the areas around it (think two dimensions, simplistically: in front and behind, to the right and to the left) are areas that could easily fail as well. I expect them to fail; it's just a matter of time. Hopefully it is just a small group, so I might end up with 5-10 bad blocks in the same locale.
At a minimum, take the drive out of the vdev. I usually do an erase (write all zeros) on the drive to see where other bad blocks might be. This will also reset any bad blocks flagged on reads and help the drive fix up any newly found ones.
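A zero-fill pass is nothing fancier than something like this; it is destructive, and /dev/da3 is only an example device node, so triple-check the target before running it:

```
# Overwrite the whole disk with zeros; writing a pending sector gives the
# firmware a chance to remap it to the spare area. On FreeBSD, Ctrl+T prints
# dd's progress while it runs.
dd if=/dev/zero of=/dev/da3 bs=1M
```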
Preferred would be to run badblocks on it from the shell. I run 3 patterns, which gives the best chance to really scrub the platters with a variety of patterns. This can take days or longer depending on the size of the disk drive. If you run this and only a few more bad blocks appear, all in the same area, that would give me some confidence that the drive is probably still usable as a vdev member.
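Assuming the badblocks utility (from e2fsprogs) is available on your system, a three-pattern destructive pass looks roughly like this; /dev/da3 is again just an example, and this wipes everything on the drive:

```
# -w destructive write test, -s show progress, -v verbose, -b 4096 use 4 KiB
# blocks, and -t can be repeated to choose which patterns get written/verified.
badblocks -wsv -b 4096 -t 0xaa -t 0x55 -t 0x00 /dev/da3
```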
Some vendors provide spare "local" revector areas on the same cylinder. This lets performance stay the same after a bad block is revectored (remapped). If the cylinder's revector area is full, the drive then falls back to a larger revector area located elsewhere on the disk (at the beginning or end). This hurts performance because the heads must move to that location whenever it is used.
For home usage, this is not a big deal. For enterprise database usage it can affect writes: the slowest disk drive (e.g. the one that must move the heads to the new location and back) will make user I/O across all drives seem slower because of that one drive. The drives without bad blocks will finish their writes, while the revectored one performs multiple internal writes to hide the bad block from the OS.
If the revector badblock area becomes full, the bad block file comes into effect. If this occurs, dump the disk for NAS purposes, IMHO.
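One way to keep an eye on how much of that spare area a drive has already consumed is the SMART attribute table (the device name is just an example):

```
# Attribute 5 counts sectors already remapped to the spare area; 197 counts
# sectors the drive wants to remap on the next write.
smartctl -A /dev/ada0 | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'
```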
Not all manufacturers follow the above process. I know of multiple manufacturers that do it for their enterprise drives. I cannot promise that home or commercial grade drives will.
For what it is worth...