Drive was unavailable - now it's back online - do I need to do anything?

Status
Not open for further replies.

robindhood

Cadet
Joined
Oct 31, 2011
Messages
5
I was showing off the FreeNAS server last night and pulled a drive out to show a friend the quality of the drive sleds (it was powered down at the time).
I guess when I put it back in, it did not reseat correctly in the drive cage. (Check out my profile to see the beast.)

I went in and ran a "zpool status" and found 1 drive was marked "unavailable".

I stopped the current copy that was running and powered down the machine. I pulled the drive, hit it with compressed air and reseated it. It powered back on fine. I ran "zpool status" again and all drives were online. I checked in the UI and both of my sets report "healthy".

Do I need to take any action (scrub that RAID set) just to be on the safe side? The set in question is 8 drives in RAID-Z2. Will it "self heal" as it runs, as the descriptions of ZFS imply?


BTW - I'm loving this server!! Thank you all for your work on this project as well as all the great information available here in the forums!!
Hood
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
It *should* self heal if you give it enough time, but a scrub will fix it if you just want to be certain and don't mind waiting while it runs.
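For reference, kicking off and monitoring a scrub from the shell looks something like the sketch below. The pool name "tank" is an assumption here; substitute your own pool's name (visible in "zpool list" or the UI). The script guards on zpool being present so it is safe to read through on any machine.

```shell
#!/bin/sh
# Sketch, assuming a pool named "tank" -- substitute your own pool name.
POOL="tank"

# Only attempt this on a box that actually has the ZFS tools installed.
if command -v zpool >/dev/null 2>&1; then
    # Start a full scrub: reads every block in the pool and repairs
    # any that fail their checksum, using the RAID-Z2 parity.
    zpool scrub "$POOL"

    # Check progress; the "scan:" line reports percent done and any repairs.
    zpool status "$POOL"
else
    echo "zpool not found; run this on the FreeNAS box itself"
fi
```

Note that ordinary operation only self-heals blocks that happen to get read; a scrub forces every block to be read and verified, which is why it settles the question.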
 

robindhood

Cadet
Joined
Oct 31, 2011
Messages
5
ProtoSD said:
It *should* self heal if you give it enough time, but a scrub will fix it if you just want to be certain and don't mind waiting while it runs.

I'm patient, so it's scrubbing away. Thanks for the info and reply, protosd.
 

robindhood

Cadet
Joined
Oct 31, 2011
Messages
5
Just an FYI - the 8 × 3 TB drive array, RAID-Z2 (approx. 15.2 TiB), took about 4.5 hours to scrub with around 8.75 TB of data on it.
 