How to troubleshoot a degraded RAIDZ volume?

Joined: Sep 6, 2011
Messages: 1
We have:
OS Version: FreeBSD 8.2-RELEASE-p1
Platform: Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz
System Time: Tue Sep 6 23:29:28 PDT 2011
Uptime: 11:29PM up 2 days, 6:09, 0 users
Load Average: 0.00, 0.03, 0.12
FreeNAS Build: FreeNAS-8.0-RELEASE-amd64

with 3 x 2 TB Caviar Black drives in a RAIDZ array called volume0 (all three drives are in the volume). All three drives are SATA, mounted in the case (non-hot-swap), and all are new. We do have a spare drive, but we are unable to determine which drive, if any, has failed.

On this volume we created four ZFS datasets.

The GUI (Storage --> Active Volumes) now shows "Active Volumes 5" with all of them DEGRADED. As a new user I can't post a screenshot, so here's the text version:

Name      Mount point                      Used            Available   Size     Status
volume0   /mnt/volume0                     47.6 GB (1%)    3.4 TB      3.4 TB   DEGRADED
volume0   /mnt/volume0/virtual_machines    45.3 GB (1%)    3.4 TB      3.4 TB   DEGRADED
volume0   /mnt/volume0/vm_backup           373.8 GB (9%)   3.4 TB      3.8 TB   DEGRADED
volume0   /mnt/volume0/data_backup         29.6 GB (0%)    3.4 TB      3.4 TB   DEGRADED
volume0   /mnt/volume0/general             31.2 GB (0%)    3.4 TB      3.4 TB   DEGRADED

I have not been able to find out what exactly is degraded about the array, and there appears to be no discussion of this anywhere.

Where do we start troubleshooting?
 

pallfreeman
Dabbler
Joined: Sep 1, 2011
Messages: 38
Open a shell (either from the console or by ssh'ing in) and type "zpool status".

Doesn't help if you're expecting to be able to do this from the GUI, of course.
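
For reference, a degraded RAIDZ pool looks something like this in "zpool status". The device names (ada0p2 and so on), the state, and the message text below are just placeholders to show the shape of the output; on FreeNAS the pool members usually appear as gptid/... labels, and a failed disk may show as FAULTED, UNAVAIL, REMOVED or DEGRADED depending on what actually happened:

# zpool status volume0
  pool: volume0
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        volume0     DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada2p2  UNAVAIL      0     0     0  cannot open

errors: No known data errors

Whichever member line is not ONLINE (or shows non-zero READ/WRITE/CKSUM counters) is the disk to track down and swap for your spare.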
 