The pool is Degraded, but the RAID is optimal.

alexandr.l

Cadet
Joined
Apr 16, 2020
Messages
7
There is a backup server with a RAID controller to which a 24-bay disk shelf is connected. On the controller, a RAID6 array was built from seven 4 TB disks.
The controller reports the array state as optimal. Each disk was checked with the smartctl utility and shows no errors. The server has ECC memory; we ran memtest and found no errors.
We launched a scrub, which has been running for more than 21 days, but the pool status has not changed back to Healthy.
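For context, the checks were done roughly with the following commands (the device path and the pool name "backup" are placeholders for our actual setup):

  # SMART data for one member disk (device path depends on how the controller exposes the disks)
  smartctl -a /dev/da0
  # the scrub was started with
  zpool scrub backup
  # and its progress is checked with
  zpool status backup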
Questions:
1. What could be causing the pool degradation?
2. How can we speed up the scrub?
3. How can we track possible pool errors?

Version: FreeNAS-11.3-RELEASE
Server Supermicro 6016T-NTF 2*Xeon E5645 24 CPU Cores / 65483 MB RAM
RAID-controller LSI MegaRAID SAS 9260-4i
Disk shelf Huawei OceanStor S2300 DAE12435U4
 

Attachments

  • raid.txt
    2.7 KB · Views: 204
  • smart.txt
    5.6 KB · Views: 187
  • zpool.txt
    712 bytes · Views: 162

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
You really really need to read https://www.ixsystems.com/community...bas-and-why-cant-i-use-a-raid-controller.139/

ZFS is not meant to run through RAID controllers. All of its checksumming and data-repair functionality stops working when the disks are hidden behind a hardware RAID volume.
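On a pool built from a single RAID virtual disk there is no redundancy at the ZFS level, so checksum errors are detected and counted but cannot be repaired. Something along these lines (the pool name is just an example) shows the per-vdev error counters and lists any files with permanent errors:

  # READ/WRITE/CKSUM counters per vdev, plus a list of files with permanent errors
  zpool status -v backup
  # once the underlying cause is dealt with, the counters can be reset
  zpool clear backup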

Your RAID controller may have a way to check all drives in the RAID and do its own scrub, to figure out where the error lies. Do that.
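For the 9260-4i listed above, that would be a consistency check. With the MegaCli tool it looks roughly like this (exact command names vary between MegaCli and StorCLI versions, so treat this as a sketch):

  # start a consistency check on all logical drives
  MegaCli -LDCC -Start -LALL -aALL
  # watch its progress
  MegaCli -LDCC -ShowProg -LALL -aALL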

If you wish to stay with ZFS, you should set up a new pool on an HBA and do a send/receive from your RAID pool to the new direct-attached pool, Soonest(tm). Then destroy the RAID pool. You may still have errors in some files; there is nothing ZFS can do to repair those if it doesn't have access to the physical disks.
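A rough outline of that migration, assuming the old pool is called "backup" and a new pool "tank" has already been created on disks behind an HBA:

  # take a recursive snapshot of the RAID-backed pool
  zfs snapshot -r backup@migrate
  # replicate all datasets, snapshots and properties to the new pool
  zfs send -R backup@migrate | zfs receive -Fu tank
  # after verifying the data on the new pool, retire the old one
  zpool destroy backup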
 