What does this really mean? Device: /dev/ada2, 3 Currently unreadable (pending) sectors

Simon Bingham

Dabbler
Joined
Sep 21, 2018
Messages
15
Alert System

  • CRITICAL: June 7, 2020, 2:11 a.m. - Device: /dev/ada2, 3 Currently unreadable (pending) sectors

  • WARNING: June 7, 2020, 2:11 a.m. - The capacity for the volume 'NAS' is currently at 81%, while the recommended value is below 80%.

Should I replace the drive?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Yes. The drive is failing to return data from 3 bad sectors, and can't remap the data to remaining good sectors.
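
If you want to watch those counters yourself, a check along these lines should do it (just a sketch; attribute names vary a little between drive models, and /dev/ada2 is simply the device from your alert):

# illustrative SMART check on the device named in the alert
smartctl -A /dev/ada2 | grep -E "Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable"

A non-zero and growing Current_Pending_Sector count is exactly the condition the alert is reporting.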
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Should I replace the drive?

Indeed, that drive is failing. You need to replace it.

  • WARNING: June 7, 2020, 2:11 a.m. - The capacity for the volume 'NAS' is currently at 81%, while the recommended value is below 80%.

But do not overlook that one either. Some people have lost their pool by ignoring it: they kept piling up data until the pool reached a level where it could no longer operate. ZFS is copy-on-write, so it needs free space for basically every action. Because adding space to a pool is often a long and difficult process, you should start working on that one ASAP. Getting extra hard drives, testing them properly, getting a system with enough bays to plug them in, maybe a bigger power supply, or replacing all the drives in the same vDev for auto-expand: these tasks all take time.
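
To keep an eye on capacity from the shell, something simple like this is enough (just a sketch; 'NAS' is the pool name from your alert):

# rough capacity check for the pool named in the warning
zpool list NAS
zfs list NAS

zpool list reports an overall CAP percentage for the pool, which should roughly match the figure in the alert; try to keep it comfortably below 80%.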
 

Simon Bingham

Dabbler
Joined
Sep 21, 2018
Messages
15
Indeed, that drive is failing. You need to replace it.



But do not overlook that one either. Some people have lost their pool by ignoring it: they kept piling up data until the pool reached a level where it could no longer operate. ZFS is copy-on-write, so it needs free space for basically every action. Because adding space to a pool is often a long and difficult process, you should start working on that one ASAP. Getting extra hard drives, testing them properly, getting a system with enough bays to plug them in, maybe a bigger power supply, or replacing all the drives in the same vDev for auto-expand: these tasks all take time.
Thank you. So is my system no longer fault tolerant (even after I replace the failing drive)?
 

Matt_G

Explorer
Joined
Jan 24, 2016
Messages
65
Once you replace that failing drive, your original level of fault tolerance will be restored.
If you are running RAIDZ2 or Z3 you still have some fault tolerance now.

What type of pool are you running?
If you don't mind, post the output of zpool status -v in code tags.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
so is my system no longer fault tolerant

No pool is tolerant of a 100% load. You must never fill a pool completely, no matter its size, redundancy, vDev type, pool structure or anything else.
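
One common way to protect yourself is to park some space in an empty dataset with a reservation, so everyday writes can never push the pool to 100% (a sketch only; the dataset name and size below are made up, adjust them to your pool):

# illustration only: name and size are examples
zfs create NAS/reserved                   # empty dataset, nothing is stored in it
zfs set reservation=200G NAS/reserved     # holds 200G back from the rest of the pool
# in an emergency, release it with: zfs set reservation=none NAS/reserved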
 

Simon Bingham

Dabbler
Joined
Sep 21, 2018
Messages
15
Once you replace that failing drive, your original level of fault tolerance will be restored.
If you are running RAIDZ2 or Z3 you still have some fault tolerance now.

What type of pool are you running?
If you don't mind, post the output of zpool status -v in code tags.

Thanks, I've ordered a new drive. So to replace it, what do I do? Just physically replace it? Will the FreeNAS software take care of the rest?

[root@freenas] ~# zpool status -v
  pool: NAS
 state: ONLINE
  scan: scrub repaired 1.29M in 13h8m with 0 errors on Sun May 3 13:08:36 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS                                             ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/62f5e120-bb0b-11e5-bcda-645106d80558  ONLINE       0     0     0
            gptid/65702cc9-bb0b-11e5-bcda-645106d80558  ONLINE       0     0     0
            gptid/66f7aeda-bb0b-11e5-bcda-645106d80558  ONLINE       0     0     0
            gptid/69052478-bb0b-11e5-bcda-645106d80558  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Thu May 28 03:46:08 2020
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~#
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
so to replace it, what do I do? Just physically replace it?

Nope, surely not.

Do you have an extra bay that would let you plug in the replacement drive alongside the other ones?
Is your config hotplug capable?

If the answer is No and No, what you need to do is offline the drive in the FreeNAS WebUI, power down the server, physically replace the drive, power back on, and then replace the offlined drive with the new one in the WebUI. Only then will FreeNAS start the resilvering process, and only once that process is over will your pool be back to Healthy.

If you do have an extra bay for that drive and/or your setup is hotplug capable, the process can be optimized in different ways.
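
For reference, the CLI equivalent of that procedure looks roughly like this (a sketch only; on FreeNAS the WebUI is the supported path because it also partitions the new disk and keeps the middleware in sync, and the gptid placeholder below has to be matched to ada2 first):

# sketch of the manual equivalent; prefer the WebUI on FreeNAS
glabel status                              # find which gptid label sits on ada2
zpool offline NAS gptid/<failing-member>   # placeholder: the label of the failing disk
# power down, swap the disk, power back up
zpool replace NAS gptid/<failing-member> ada2
zpool status -v NAS                        # watch the resilver until it finishes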
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504