unreadable (pending) sectors on Seagate 8TB Barracuda

marcoFR

Cadet
Joined
Dec 4, 2023
Messages
5
Hi,

I have a RAID 1 with two 8TB Seagate Barracuda disks.
Everything was working fine, but for the last month, after each scrub I have had errors like this:

Device: /dev/sdd [SAT], 16 Currently unreadable (pending) sectors.

Device: /dev/sdd [SAT], 16 Offline uncorrectable sectors.

Device: /dev/sdd [SAT], ATA error count increased from 8 to 12.
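For reference, the raw counters behind these smartd warnings can be read directly with smartctl. A minimal check, assuming the disk is still at /dev/sdd (attribute 197 is the pending-sector count and 198 the offline-uncorrectable count on most ATA disks):

Code:
# Attribute table; 197 (Current_Pending_Sector) and 198
# (Offline_Uncorrectable) are the counters smartd is reporting on.
sudo smartctl -A /dev/sdd

# Full report, including the ATA error log behind "ATA error count increased".
sudo smartctl -a /dev/sdd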


I replaced the disk that had the most errors, but the problem continues with the new disk. I get this type of warning after each scrub, and on both disks.
What is happening?

Code:
@supernas:/$ sudo zpool status
  pool: monVolume
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Sun Dec 17 00:00:05 2023
        3.89T scanned at 109M/s, 3.43T issued at 96.1M/s, 3.89T total
        744K repaired, 88.10% done, 01:24:14 to go
config:

        NAME                                      STATE     READ WRITE CKSUM
        monVolume                                 ONLINE       0     0     0
          mirror-0                                ONLINE       4     0     0
            e356fc0b-ee41-476f-82e2-63e82a0d7c2b  ONLINE       8     0    22  (repairing)
            b0d85bed-487f-415c-ad33-c5ecafc389b9  ONLINE       0     0     8

errors: 1 data errors, use '-v' for a list


I used the command
Code:
@supernas:/$ sudo zpool status -v


and I see an error on a movie file. It's not a big problem, but it bothers me on principle.

Code:
monVolume/stockage@auto-2023-12-05_00-00:/Medias/Series/*********/Saison 5/**********.avi
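From what I read, since the damaged file is only referenced by that snapshot, destroying the snapshot and scrubbing again should clear the error. A sketch, assuming I decide I no longer need that snapshot:

Code:
# Permanently deletes the snapshot that references the damaged blocks.
sudo zfs destroy monVolume/stockage@auto-2023-12-05_00-00

# Scrub again so ZFS re-verifies the pool and drops the stale error entry.
sudo zpool scrub monVolume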



I don't use a RAID controller; each disk is plugged directly into the motherboard, and the cables are new. The problem may possibly have started after a system update. Can you help me? I'm thinking of going back to Synology because this is such a headache.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, TrueNAS & ZFS can be more complex than all-in-one products from Synology or other vendors. This is partly because you bring your own hardware to TrueNAS, unless you buy a pre-assembled TrueNAS Mini or server from iX. The other part of the complexity is that ZFS has its own limitations.

Now, on to your problem. Please list the exact hardware you are using, including make and model, and especially the disks' exact model.


Some hardware does not work well with TrueNAS or ZFS (or both). A quick search indicates that the 8TB Seagate Barracuda disks use SMR technology, which causes problems for ZFS, so the exact disk model would help to either confirm or eliminate that as the problem.
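If you are running SCALE (Linux), something like the following will print the exact model string; a quick sketch, assuming the disk is still at /dev/sdd:

Code:
# Identity info: Device Model, Serial Number, Firmware Version.
sudo smartctl -i /dev/sdd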

When a user asks about a potential NAS configuration and lists an SMR-type disk, we let them know that it can be a problem with ZFS. But we still get users who built a TrueNAS server using what they thought were perfectly fine parts, only to find out later about SMR (Shingled Magnetic Recording) disks and their unsuitability for certain tasks.

At present, there is no perfect NAS... all have their quirks, costs and hardware preferences.
 

marcoFR

Cadet
Joined
Dec 4, 2023
Messages
5
Hi, thank you for your reply.

The disk model is: ST8000DM004
CPU: Intel(R) Core(TM) i7-6700 @ 3.40GHz (LGA1151)
Motherboard: ASUS H110M
The RAID 1 is managed by TrueNAS; there is no hardware controller.

I do indeed see posts about the SMR problem on these disks with TrueNAS. Too bad I bought 3... Do you know if there are any optimizations to limit these errors?

thanks again.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Those disks are SMR.
Did you test them before trying to add them to your pool?
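For anyone reading along, a typical new-disk burn-in looks something like the sketch below. Note that the long test can take a day or more on an 8TB disk, and badblocks in write mode destroys all data, so only run it on an empty disk:

Code:
# SMART extended self-test; runs in the drive's own background.
sudo smartctl -t long /dev/sdd

# Check progress and results once it finishes.
sudo smartctl -a /dev/sdd

# Destructive read/write surface test -- EMPTY disks only!
sudo badblocks -b 4096 -ws /dev/sdd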
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
I do indeed see posts about the SMR problem on these disks with TrueNAS. Too bad I bought 3... Do you know if there are any optimizations to limit these errors?
No; basically, you can either use different NAS software, or replace the disks.

If these disks are giving you the amount of trouble you listed in your first post, they are just not usable with ZFS.
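The replacement itself can be done from the TrueNAS GUI, or from the shell. A sketch of the command-line form, assuming the new disk shows up as /dev/sde (a hypothetical name):

Code:
# Replace the failing mirror member (named by its GUID from 'zpool status')
# with the new disk; the pool stays online while the mirror resilvers.
sudo zpool replace monVolume e356fc0b-ee41-476f-82e2-63e82a0d7c2b /dev/sde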


We get people who think just one of their SMR disks is "bad". One of the problems with SMR disks is that they become fragmented internally, even to the point of causing slowdowns long enough for ZFS to consider the disk faulty. So while one disk might be more fragmented today, all SMR disks are headed for slowness due to fragmentation.

In theory, changing the device R/W timeout would make SMR disks less error-prone with ZFS. But such a thing would not make another problem with SMR disks go away: the long resilver (disk replacement) times. SMR disks can be more than 10 times slower for disk replacements than CMR/PMR disks, even taking a week!
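On a Linux-based system, that kernel-side timeout can be raised per disk. Just a sketch; it does not persist across reboots, and it does nothing about the resilver problem:

Code:
# Raise the SCSI command timeout for sdd from the default 30s to 180s,
# giving a fragmented SMR disk more time before the kernel gives up on it.
echo 180 | sudo tee /sys/block/sdd/device/timeout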
 

Jamberry

Contributor
Joined
May 3, 2017
Messages
106
I had an SMR array in a mirror configuration for years.
As long as you don't use SMR drives to replace a failed drive, it mostly works.
That is simply because in a home lab you don't have that much constant ingress.

The bigger problem I had was that these drives (I had Seagate Archive) are not really made for a 24/7 system that, by default, has no standby. None of my drives lasted much longer than 8-12 months before I had to replace them with a non-SMR drive.
So even if you start out with SMR drives, sooner than you think you will have replaced them with NAS or enterprise drives.
For your system it seems that time is now :smile:
 

marcoFR

Cadet
Joined
Dec 4, 2023
Messages
5
Thank you for your answers. I think I will switch to an ext4 partition or something else, especially to be able to use file-recovery tools if I have no choice.
 