Volume is in a degraded state

Status
Not open for further replies.

Paul.G

Dabbler
Joined
May 18, 2018
Messages
10
I recently upgraded from 9.3 -> 9.10, waited a day, and then upgraded to 11.1.
When doing a system check in the web GUI this morning, I noticed that the volume was in a "degraded" state.
The odd thing is that the volume consists of 11 x 3 TB drives (da0p2 through da10p2), and all of those drives have a status of "ONLINE".
There's one mysterious drive named "2435056653744321299" listed as "UNAVAIL", which I believe is causing the "Degraded" status.
The boot drive is an 8 GB flash drive ("tough drive", I think it's called) and shows a status of "Healthy".
The server itself is an iXsystems 2U.
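When a pool member shows up as a bare numeric GUID marked UNAVAIL, the device that used to back it is missing or has been repurposed. One way to investigate (a sketch, not verified on this system; the pool name `store` comes from later in this thread) is to compare the pool's detailed status against the GPT labels the system can actually see:

```shell
# Show the pool layout; for a missing member, zpool prints a
# "was /dev/gptid/..." hint naming the device it used to live on.
zpool status -v store

# List the gptid labels currently present. A gptid named in the
# "was /dev/gptid/..." hint but absent here means that partition
# no longer exists (e.g. it was overwritten by an OS install).
glabel status

# Cross-check the attached disks and their partitioning.
camcontrol devlist
gpart show
```
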

PG
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
What's the output of
Code:
zpool status
Please post it here in code tags.
 

Paul.G

Dabbler
Joined
May 18, 2018
Messages
10
Edit: as I'm typing this out, it struck me that when I went to install the OS to the 8 gig tough boot drive, that I probably installed it to the 12th disk in the disk array - note the da11p2 reference for the boot drive.
Is it possible to either remove the 12th drive from the array and have it run with just 11 drives? There's only about 1/4 of the total drive space used, so I don't need the extra space of the lost 3 terabytes.
Or, how can I back out of using that 12th drive as the boot disk, add it back to the array, and reinstall the OS to the correct boot device?
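For what it's worth, the usual shape of the recovery here (a sketch only, not verified against this system; `store` is the pool name and `da11` the reclaimed disk, per this thread) is: reinstall FreeNAS to the USB stick, restore the config, then hand the freed disk back to ZFS as a replacement for the missing member. In the web GUI this is the volume's "Replace" action; from the shell it would look roughly like:

```shell
# After reinstalling the OS to the USB stick and restoring the config:

# 1. Confirm the pool still lists the UNAVAIL member by its GUID.
zpool status store

# 2. Wipe the stray boot partitioning from the reclaimed disk.
gpart destroy -F da11

# 3. Replace the missing member (referenced by its GUID) with the disk.
zpool replace store 2435056653744321299 da11

# 4. Watch the resilver run to completion.
zpool status store
```

Since only about a quarter of the space is used, the resilver should not take terribly long, but the pool stays DEGRADED (with no redundancy to spare) until it finishes.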

Unfortunately, the system is connected to a network that can't get out to the internet - I'll have to hand jam the info from that output:

Code:
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da11p2      ONLINE       0     0     0

errors: No known data errors

pool: store
That "mysterious" unavailable drive mentioned above shows a reference to its previous device path, like the rest of the drives do, i.e. "was /dev/gptid/822........"
 