
Checking for TLER, ERC, etc. support on a drive

diskdiddler

FreeNAS Guru
Joined
Jul 9, 2014
Messages
2,134
Thanks
127
Yep, I thought I'd found a bug, and I have.

The init script page is broken. I created a new entry and, surprise surprise, it started working.

Bug logged.
https://i.imgur.com/tbwSdKM.jpg (Somehow I have 'script' AND 'command' variables set?)

Furthermore, it /forgets/ whether it's a script or a command each time I edit the thing.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
1,939
Thanks
818
Yep, I thought I'd found a bug, and I have.

The init script page is broken. I created a new entry and, surprise surprise, it started working.

Bug logged.
https://i.imgur.com/tbwSdKM.jpg (Somehow I have 'script' AND 'command' variables set?)

Furthermore, it /forgets/ whether it's a script or a command each time I edit the thing.
Yeah, you're running a Beta/RC version of FreeNAS. I'm using 11.1-U6, which is relatively stable and problem-free. :cool:
 

diskdiddler

FreeNAS Guru
Joined
Jul 9, 2014
Messages
2,134
Thanks
127
Yeah, you're running a Beta/RC version of FreeNAS. I'm using 11.1-U6, which is relatively stable and problem-free. :cool:
Well, it's working now. The system is still odd, so I'm not using it until it's perfect. But at least this part is tackled, thanks!
 
Joined
Jul 21, 2017
Messages
13
Thanks
1
Here's one for you: I think TLER is somewhat overrated. I know, for example, that the newer (P420 and more recent) "smart" RAID controllers can gracefully handle an HDD with TLER disabled and don't just drop it after 10 seconds of not answering, as older controllers did.
The fact that some enterprise hard drives ship with this function disabled and NOT persistent, as some of you found out on this very thread, speaks to this prominently.

Here's another story, this time from the Backblaze guys, who use a mix of enterprise and consumer-grade drives in their very critical business, and so far, no drama. In fact, some of you will be shocked to learn that there isn't much difference between the failure rates of these drives.

Happy data crunching!
https://www.backblaze.com/b2/hard-drive-test-data.html

My 2c: stop worrying about drive grades and TLER and other crap. Choosing the right RAID level is much more important. Never, ever use RAIDZ1/RAID5, and if you have enough HDD slots, always have a hot spare in your array.
 
Last edited:

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,799
Thanks
3,053
Here's one for you: I think TLER is somewhat overrated. I know, for example, that the newer (P420 and more recent) "smart" RAID controllers can gracefully handle an HDD with TLER disabled and don't just drop it after 10 seconds of not answering, as older controllers did.
The fact that some enterprise hard drives ship with this function disabled and NOT persistent, as some of you found out on this very thread, speaks to this prominently.

Here's another story, this time from the Backblaze guys, who use a mix of enterprise and consumer-grade drives in their very critical business, and so far, no drama. In fact, some of you will be shocked to learn that there isn't much difference between the failure rates of these drives.

Happy data crunching!
https://www.backblaze.com/b2/hard-drive-test-data.html

My 2c: stop worrying about drive grades and TLER and other crap. Choosing the right RAID level is much more important. Never, ever use RAIDZ1/RAID5, and if you have enough HDD slots, always have a hot spare in your array.
Backblaze probably considers their business to be "very critical" but really no one else believes that. Everybody knows that they're the K-mart of bulk storage, and they designed a server model to match.

There is really no graceful way to "handle" a non-TLER drive. The drive will cause all I/O operations to come to a screeching halt for a potentially ungodly long period of time, which may be fine if you're doing home word processing or looking at your family pictures.

However, these components are foundational bits to modern computing, and when you start to build structure on top of them, they become the weak point. For example, I'm the primary developer of Diablo, a Usenet server package that anyone who's signed up for more than one or two Usenet providers has used on the back end. When you have ten thousand clients retrieving articles from a bunch of 24-drive storage servers via frontends, and one drive takes an indefinitely long nap, you suddenly become very aware of how large TCP buffers and relatively high busy timeouts can have a very detrimental effect on other parts of the service. It's like when one cylinder of your car's engine suddenly breaks. The existence of other cylinders doesn't make it a non-event. Everything goes to hell very quickly and you need to remediate that right away.

In this modern era, a catatonic drive can stop VMs from running, or stop an entire office full of people from being able to work. This is not acceptable in many environments. It *is* acceptable in others. Being aware of this and what your tolerance level is for "everything just stopped" is just as important as selecting a proper RAID level, deciding between RAID or mirrors, etc. It's very likely that home users will have a high tolerance for freezes, a low tolerance for data loss, and a desire for cheap, which means that consumer-grade shucked drives are a fantastic choice for them. Some of us cheapskates who demand better performance are happy to buy TLER-capable disks that do not save the setting, and will happily script that stuff. I have some Synology DS416slim (2.5") units where I was sticking in the 2TB Samsung Spinpoints (now Seagate ST2000LM003) and they don't sticky TLER, but they do seem to honor it, so I have a script in there to set it. Hasn't been a problem.
 
Joined
Jul 21, 2017
Messages
13
Thanks
1
Backblaze probably considers their business to be "very critical" but really no one else believes that. Everybody knows that they're the K-mart of bulk storage, and they designed a server model to match.
.....
I have no clue what a K-mart is, but... have you actually looked at their data about their drives? There are consumer drive models that are proven, by that use, to be more reliable than their enterprise counterparts. As for the drives that are the right way around, i.e. the enterprise models being more reliable, there's nothing to say, because the difference is super small.

About that "gracefulness" I was talking about earlier with the modern RAID controllers: at work we have a bunch of low-end servers built for storage backup, and those are built with consumer drives. Mostly RAID1, but some RAID5. We've replaced a few drives, no drama. Also, before replacement the I/O was not severely halted or anything, not at all. It worked transparently; the bad drives, sometimes with quite a lot of bad/remapped sectors I might add, were simply dropped from the array. Replace, rebuild, and that's it. WD Blacks mostly, but also some Seagates.

Of course, for our critical storage we have Oracle/Sun stuff. Expensive as hell and maybe more reliable. A lot faster for sure (1.2 TB 15k SAS drives).

Back to TLER. As I was saying, a lot of enterprise-oriented SATA drives these days come with it disabled, and I really don't think that's a tragedy.
 
Last edited:

Kevin Horton

FreeNAS Guru
Joined
Dec 2, 2015
Messages
624
Thanks
224
About that "gracefulness" I was talking about earlier with the modern RAID controllers: at work we have a bunch of low-end servers built for storage backup, and those are built with consumer drives. Mostly RAID1, but some RAID5. We've replaced a few drives, no drama. Also, before replacement the I/O was not severely halted or anything, not at all. It worked transparently; the bad drives, sometimes with quite a lot of bad/remapped sectors I might add, were simply dropped from the array. Replace, rebuild, and that's it. WD Blacks mostly, but also some Seagates.
What version of FreeNAS are you using, what types of users and applications are using the FreeNAS server, how do you have the storage organized, how many instances of failing drives with TLER disabled has your FreeNAS server experienced, and how did those affect system response?
 
Joined
Jul 21, 2017
Messages
13
Thanks
1
Actually, I'm using FreeNAS only at home, and that with WD RE 3TB enterprise HDDs, which I think have TLER (I haven't bothered to check; as I said, I don't feel that's important anymore), in a 2-vdev mirror setup.
At work, those consumer-grade-HDD backup machines are running RHEL with some NFS exports, and the drives are managed by HP RAID cards such as the P420 or P440ar. They're arranged either in RAID1 when 2 drives are used, or RAID5 when 4 or 5 drives are used. (I know RAID5 is evil, I totally agree, but the data on these machines is not critical, it's just backup.)
We've had 4 drives fail since 2012. Definitely without TLER. Two of them had A LOT of reallocated sectors => thus the gracefulness.
 

Arwen

FreeNAS Expert
Joined
May 17, 2014
Messages
1,117
Thanks
547
Please note that some software RAID (including ZFS) tolerates long pauses better than some (probably older) hardware RAID.

Someone I knew back in 2014 had problems with his home hardware-based RAID-5 disk array. (He was a professional Linux system administrator.) Disks would drop out, and a simple re-attachment would restore redundancy; and it was not always the same disks. So I suggested TLER, or Western Digital Red drives (which were available locally to him). He bought the WD Reds and all was fine.

Ideally, when reading a stripe from a RAID set, if one disk does not respond soon enough, the software should employ data recovery (meaning read the mirror or parity disk and satisfy the request). Then, if this happens too often (or the read request never finishes), fail the disk.
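That policy can be sketched in a few lines of Python. This is a toy model only; the names, timeout, and strike threshold are illustrative assumptions, not any real RAID implementation:

```python
# Toy model of the recovery policy described above: if one side of a mirror
# is too slow, satisfy the read from the other copy and record a strike;
# too many consecutive strikes and the disk is failed out of the array.
# All names and thresholds here are illustrative assumptions.

TIMEOUT_S = 1.0     # how long we are willing to wait for a single read
MAX_STRIKES = 3     # consecutive slow reads before failing the disk

class Disk:
    def __init__(self, name, read_latency_s):
        self.name = name
        self.read_latency_s = read_latency_s
        self.strikes = 0
        self.failed = False

    def read(self, lba):
        # Returns None to model a read that exceeded the timeout.
        if self.read_latency_s > TIMEOUT_S:
            return None
        return f"data@{lba}"

def mirror_read(primary, mirror, lba):
    data = primary.read(lba)
    if data is None:
        primary.strikes += 1                # slow disk: note it...
        if primary.strikes >= MAX_STRIKES:
            primary.failed = True           # ...and fail it if it keeps happening
        return mirror.read(lba)             # recover from the other copy
    primary.strikes = 0                     # a healthy read resets the counter
    return data
```

A real implementation would issue reads asynchronously and verify checksums, but the shape of the decision (recover first, fail the disk only after repeated misbehavior) is the same.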
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,799
Thanks
3,053
I have no clue what a K-mart is, but... have you actually looked at their data about their drives? There are consumer drive models that are proven, by that use, to be more reliable than their enterprise counterparts. As for the drives that are the right way around, i.e. the enterprise models being more reliable, there's nothing to say, because the difference is super small.

About that "gracefulness" I was talking about earlier with the modern RAID controllers: at work we have a bunch of low-end servers built for storage backup, and those are built with consumer drives. Mostly RAID1, but some RAID5. We've replaced a few drives, no drama. Also, before replacement the I/O was not severely halted or anything, not at all. It worked transparently; the bad drives, sometimes with quite a lot of bad/remapped sectors I might add, were simply dropped from the array. Replace, rebuild, and that's it. WD Blacks mostly, but also some Seagates.

Of course, for our critical storage we have Oracle/Sun stuff. Expensive as hell and maybe more reliable. A lot faster for sure (1.2 TB 15k SAS drives).

Back to TLER. As I was saying, a lot of enterprise-oriented SATA drives these days come with it disabled, and I really don't think that's a tragedy.
K-mart, or Walmart (surely you've heard of the world's largest retailer), are budget-oriented retailers that put low price above pretty much everything else; in my opinion this includes quality, probably value, maybe ethics/morals, etc.

As for Backblaze? Yes, I've looked at their data, and it's generally nonrepresentative crap from their little microcosm. Since there isn't a lot of other data publicly available, their data fills a void, but that doesn't really make it high quality, useful, or meaningful. Their drive acquisition strategies over time have been dicey, and have included things like shucking Costco drives, and placing them in drive arrays that have suffered from a variety of engineering issues including vibration, cooling, power, and connectivity issues. Drawing conclusions from this data is statistically better than shooting craps, but still fails in numerous ways.

Enterprise drives are problematic for a variety of reasons, not the least of which is that they tend to be 7200, 10K, or even 15K RPM, whereas most consumer drives are 5400 or 5900 RPM and generate substantially less heat and vibration, which is a major environmental plus in the innards of a server. The Backblaze numbers will reflect higher-than-normal failure rates for 7200 RPM drives because of the increased heat and vibration. Anyone running a large population of drives without a need for a high random-access workload knows to avoid the higher-RPM drives, and at this point, with SSDs eating that marketplace, the 10K/15K offerings have fallen off a cliff.

At this point in the game, it's very hard to make any meaningful statements. Hard drives are dying off, and the volume movers and profit centers that once existed are evaporating rapidly (/have already evaporated at a catastrophic rate). Two years ago we were finding Hitachi 8TB He8's inside WD USB enclosures. The "high end" of the consumer drive market is fairly thin, and is basically the flip side of the "enterprise" mass storage/archival storage coin. The number of assembly lines and supply chains for hard drives has been reduced alarmingly in the last decade. There are good reasons to think that the differentiation between some drive models is at most cosmetic, with some different labeling and firmware (and price). The existence of on-prem hardware has been in steady decline for nearly a decade as well, meaning that "enterprise" kit sales are not what they were, and the monolithic single-server uber-reliability applications that once dominated and mandated high-reliability hardware have been replaced by cloud-centric, massive-scale, highly redundant and fault-tolerant outsourced crap that's great right up 'til the point the Internet fails or a data center goes dark. Along with that, the cloud largely eschews the use of hardware RAID in favor of software solutions, or better yet, no solutions, just making redundancy at the server level. All of these factors play into what we're seeing, which includes that high-capacity "consumer" or "NAS" drives are very similar to low-end "enterprise" "nearline" drives.

This doesn't really make TLER meaningless or useless, but it does help identify why the significance of TLER is less than it once was. This has no impact on the importance of TLER for the workloads where TLER is desirable or necessary. We'll continue to need the guarantee that I/O completes in a certain timeframe for many applications. This is seen in our arena as TLER, but is also present in the form of A/V-rated drives for DVR and surveillance video applications, etc.
 

Arwen

FreeNAS Expert
Joined
May 17, 2014
Messages
1,117
Thanks
547
K-mart, or Walmart (surely you've heard of the world's largest retailer), are budget-oriented retailers that put low price above pretty much everything else; in my opinion this includes quality, probably value, maybe ethics/morals, etc.
...
He is in Bucharest, Romania.

I miss the location information that used to appear below the person's picture. It allowed me to understand some differences (like not knowing about Walmart).
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,799
Thanks
3,053
Well, I specifically said K-mart, which I felt carried with it a somewhat different and better-fitting similarity. Backblaze is a low-quality bulk storage provider. They provide oodles of space, cheaply, mostly-reliably as I hear it, and they specialize in that. They're the McDonald's of cheeseburgers. It mostly works and is generally acceptable for lots of purposes, but there are lots of places they don't really fit. I have nothing against Backblaze. Back in the mid-2000s I was one of the early builders of 24-in-4U SATA arrays, and did it on FreeBSD, and sold numerous units to various ISPs and Usenet service providers. It was this switch to SATA and distributed systems that lit off the Usenet retention wars of the 2000s, and petabytes were deployed. This worked for Usenet the same as it works for Backblaze. I am *extremely* familiar with (and totally appreciate) this strategy. Prior to this, companies like UseNetServer had been deploying enterprise-grade SAN storage for their back end storage... wot a waste. Anyways, I am the kind of guy who calls a thing what it is, and I am not afraid to call Backblaze as I see it.
 
Joined
Jul 21, 2017
Messages
13
Thanks
1
Totally agree, @jgreco and @Arwen.
(I've since googled K-mart; it's like Carrefour/Auchan here in the EU, I see.)
But Backblaze, whether you like it or not, has not had any catastrophic failures (meaning loss of a client's data), even with those McDonald's drives.
That in itself speaks more to the importance of the storage strategy/architecture than to the quality/designation of THE drive.
I'm building a new homelab basement NAS/VM hypervisor all-in-one out of a refurbished HP ProLiant DL380p with 2x Ivy Bridge-EP low-power 10-core Xeons and 25 SFF (2.5-inch) HDD slots.
I'll be using a combination of 1TB HGST 5K1000 drives and some low-power Toshiba laptop drives; both can be found on the recommended lists for some storage appliances. Example, Thecus: http://www.thecus.ru/resources/64/?view=true
I'm still deciding between using the included P420 controller in RAID or in pass-through mode. I might not have a choice given my all-in-one architecture; we'll see. The biggest issue with pass-through mode is that the queue depth drops from 1024 to 32. Using an LSI controller is a big no-no in HP servers because the fans go F-22 Raptor mode with anything non-HP in the slots. So yeah, I'm inclined to use the controller in RAID mode or purchase an HP H2xx JBOD controller.

So I'll be using 24 spinners and 1 solid-state drive in that server. Whether I go with hardware RAID, ZFS, or md doesn't matter at the moment; this is how I'm going to create the array:
RAID level 6 (RAIDZ2) with 22 spinners and 2 hot spares, plus 1 SSD as cache.
This is going to give me enough peace of mind that even without TLER, and even using hardware RAID, my array will be fine.
Any two drives can go down from the array at the same time with my data still OK. Then the spares will take over in a reasonably fast rebuild, because the drives are small enough and the server, used only by myself, will not be busy with I/O at all during the rebuild.
Also, I will have a 10 TB USB drive for backup, a drive that will spin up only once every 3 days for an rsync of my highest-importance data and then go back to sleep.

So you see, with this simple design I'm quite sure I could use McDonald's drives, if they ever build them, and I'll be fine.
Another thing worth mentioning: even though my server will be sitting in a climate-uncontrolled environment (the basement), given the low-power Xeons (70 W each) and the low-power HDDs (about 2 W per drive under load), all will be chilly good.
I'm expecting somewhere between 70 and 120 W total usage from this server at any given time.
 
Last edited:

rogerh

FreeNAS Guru
Joined
Apr 18, 2014
Messages
1,069
Thanks
118
What I find surprising about TLER is that hard drive makers should ever (in recent years at least) have thought it useful to wait more than a fraction of a second before admitting failure. But if I can get the time down from ~30 seconds to 7 seconds, I certainly see no disadvantage, even though my use of FreeNAS does not involve anything time-critical.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,799
Thanks
3,053
What I find surprising about TLER is that hard drive makers should ever (in recent years at least) have thought it useful to wait more than a fraction of a second before admitting failure. But if I can get the time down from ~30 seconds to 7 seconds, I certainly see no disadvantage, even though my use of FreeNAS does not involve anything time-critical.
TLER is typically tunable to a time value, separately for read and write. Default is 7 seconds.

The problem with timeouts is that it is entirely possible to stack up enough random sector I/O to a hard disk to make this difficult. If we ignore SAS and just look at SATA NCQ, for example, which allows 32 outstanding commands, it's worth noting that a fast hard drive might manage 50 seeks per second (~20 ms each), which implies that you could cause nearly a second's delay for the last request fulfilled just by stuffing random read requests into the queue. Worse, because you can continue to stuff commands into newly available slots, and because NCQ typically favors the most easily fulfilled requests, there can be cases where a command is held for a significantly long time before being serviced.
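The back-of-the-envelope arithmetic works out like this (using the rough figures from the paragraph above):

```python
# Rough worst-case queueing delay on a *healthy* SATA drive with NCQ,
# using ballpark figures: ~20 ms per random seek (~50 seeks/second)
# and a 32-deep command queue.

SEEK_TIME_S = 0.020      # ~20 ms per random I/O
NCQ_DEPTH = 32           # SATA NCQ allows 32 outstanding commands

# With a full queue, the last command serviced waits behind the
# other 31, plus its own service time.
worst_case_delay_s = NCQ_DEPTH * SEEK_TIME_S
print(worst_case_delay_s)   # -> 0.64, i.e. nearly a second with nothing wrong
```

And that assumes fair scheduling; since NCQ reorders in favor of cheap requests, an unlucky command can wait even longer.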

That's with a properly functioning hard disk.

So we have multiple factors to consider, which include the data consumer's acceptable service time (which can be relatively low for things like iSCSI backend storage), how the RAID controller reacts, whether or not TLER is available, what the TLER timeout is, and just how badly the disk is damaged. TLER won't really be useful on a disk where there are a large number of errors, for example, because especially at seven seconds timeout, you can end up with a huge backlog of I/O. The interplay of "when to give up" is difficult.

Large retry times still make sense for desktop computing and perhaps other non-redundant computing scenarios. Where redundancy is not available to recover the information, it's probably reasonable to try as hard as you reasonably can. I can see scenarios in cloud computing, in particular, where you might want a disk to try harder than a fraction of a second, but also not lock up for 30-60 seconds, to recover nonredundant data before calling it quits. I wonder how often sysadmins actually worry about this kind of thing, though, or whether they just go with the default behaviours.
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,356
Thanks
2,994
I wonder how often sysadmins actually worry about this kind of thing, though, or whether they just go with the default behaviours.
I think that many are not even aware that these things are adjustable and simply go with the defaults.
 

microserf

FreeNAS Aware
Joined
Dec 7, 2018
Messages
39
Thanks
11
Seagate Exos X12 12TB SATA (ST12000NM0007)
Code:
# smartctl -l scterc /dev/da0
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
           Read:    100 (10.0 seconds)
          Write:    100 (10.0 seconds)

Samsung 1TB 860 EVO (MZ-76E1T0B/AM)
Code:
# smartctl -l scterc /dev/ada0
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
           Read: Disabled
          Write: Disabled

I'd never considered SSDs in the context of this discussion... and then my curiosity got the better of me. I checked, and now I'm pondering. Opinions?
 
Joined
Dec 26, 2015
Messages
12
Thanks
2
I am trying to use this script
https://forums.freenas.org/index.ph...-tler-is-always-enabled-on-your-drives.43494/

It runs perfectly from a shell, but it doesn't take effect if I add it to Init/Shutdown Scripts as a command (post-init):

/mnt/tank/usr/scripts/set_tler.py 7 7

The script is not executed after startup.
I can run it manually with this exact command from the shell and it works perfectly.

Am I doing something wrong?

I am running FreeNAS 11.2 stable.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
1,939
Thanks
818
I am trying to use this script
https://forums.freenas.org/index.ph...-tler-is-always-enabled-on-your-drives.43494/

It runs perfectly from a shell, but it doesn't take effect if I add it to Init/Shutdown Scripts as a command (post-init):

/mnt/tank/usr/scripts/set_tler.py 7 7

The script is not executed after startup.
I can run it manually with this exact command from the shell and it works perfectly.

Am I doing something wrong?

I am running FreeNAS 11.2 stable.
Make sure your set_tler.py script specifies the full path to smartctl, which should be /usr/local/sbin/smartctl.
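For illustration, the relevant part of such a script might look like the sketch below. The function names are hypothetical (I don't have the actual set_tler.py in front of me); the key detail is hard-coding the absolute smartctl path, since a bare "smartctl" may not resolve when the script runs as a post-init task:

```python
#!/usr/local/bin/python
# Hypothetical sketch of the smartctl invocation inside a set_tler-style
# script. Per the advice above, the absolute path to smartctl matters:
# init scripts run without the interactive shell's PATH.
import subprocess

SMARTCTL = "/usr/local/sbin/smartctl"  # full path, not just "smartctl"

def build_scterc_cmd(dev, read_s, write_s):
    """Build the smartctl argument list to set SCT ERC.

    smartctl's -l scterc option takes the timeouts in tenths of a second,
    so 7 seconds becomes 70.
    """
    return [SMARTCTL, "-l", f"scterc,{read_s * 10},{write_s * 10}", dev]

def set_tler(dev, read_s=7, write_s=7):
    # Actually apply the setting to one drive.
    subprocess.run(build_scterc_cmd(dev, read_s, write_s), check=True)
```

Running `set_tler("/dev/da0")` would then issue `/usr/local/sbin/smartctl -l scterc,70,70 /dev/da0`, matching what you'd type at the shell.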
 