UFS RAID DEGRADED, meaning and action?

Status
Not open for further replies.

skimon

Dabbler
Joined
Jun 3, 2012
Messages
37
Hello,

I built a FreeNAS box two weeks ago using a UFS mirror with two drives. I just received an alert that "The volume X (UFS) status is DEGRADED".

So I did some googling, and although I found nothing definitive, it suggests that one of the drives may have failed (already?). However, the GUI doesn't indicate which drive failed, or how it decided that the volume is degraded. Running smartctl -a on /dev/ada0 and /dev/ada1 shows that all SMART tests passed on both disks.
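For reference, this is what I ran for each disk. The PASSED line is quoted from the actual output (trimmed here); the device names are specific to my box:

  # smartctl -a /dev/ada0
  ...
  SMART overall-health self-assessment test result: PASSED
  ...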

1) How do I find out what caused the degradation, given that SMART reports no errors on either disk?

2) What should I do to rectify the situation?

Thanks for any help.

cheers

---

FreeNAS-8.0.4-RELEASE-p2-x64 (11367)

skimon

Dabbler
Joined
Jun 3, 2012
Messages
37
Answering my own question, hopefully for the benefit of others:

I discovered by running "uptime" that the box had been restarted, and that was the root cause: the unexpected reboot left the mirror degraded, and it needed to be rebuilt. I could see this with "gmirror list -a", which showed one of the disks as ACTIVE and the other as SYNCHRONIZING with a percentage indicator.
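For anyone who hasn't seen it, the one-line summary from "gmirror status" looks roughly like this during a rebuild (the mirror and device names are from my box and will differ on yours; the percentage counts up as the resync progresses):

  # gmirror status
        Name    Status  Components
  mirror/gm0  DEGRADED  ada0 (ACTIVE)
                        ada1 (SYNCHRONIZING, 12%)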

So the steps to follow look like this, assuming you have SMART tests set up:

1) Check the SMART diagnostics with "smartctl -a /dev/ada0", and repeat for each disk. Make sure you don't see any errors and that each disk reports "SMART overall-health self-assessment test result: PASSED".

2) Assuming that was OK, check the mirror set with "gmirror list -a".
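In my case the resync started by itself after the reboot. If it doesn't, or a disk really has died, gmirror has subcommands for that. This is a sketch from the gmirror(8) man page rather than something I had to run here, and "gm0"/"ada1" are example names, so substitute whatever "gmirror status" reports on your system:

  # gmirror rebuild gm0 ada1       # force a resync of a stale component
  # gmirror forget gm0             # drop components that no longer exist
  # gmirror insert gm0 /dev/ada1   # add a replacement disk to the mirror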

I wish there were a more noob-friendly message with the details of the failure in this case; there's basically no way to find out what happened from the GUI. Also, the log doesn't keep history from before the last reboot, so you can't see what caused the reboot.
 