Hi,
I am running a FreeNAS 8.0.2 machine with 12x750GB drives on an Areca controller in JBOD mode, and four other drives on the motherboard's SATA ports. At the moment I am around 9,000 km from the machine and not in the office, so I'm having to do things by telephone and SSH/HTTP. No problems there.
Recently, one drive failed and the machine became unresponsive: I could not log in via SSH or HTTP. I therefore instructed that the machine be powered down and the failed drive replaced with a never-before-used twin. Because the system was unresponsive, the failed drive was never offlined or detached.
Upon booting, FreeNAS starts and the volume is present, shared, and so on. However, the volume status is degraded (as could be expected!) and I can't seem to get the new drive working with the pool.
Here is some information:
zpool status -v
Code:
        NAME                     STATE     READ WRITE CKSUM
        StoragePool              DEGRADED     0     0     0
          raidz2                 DEGRADED     0     0     0
            da0p2                ONLINE       0     0     0
            da1p2                ONLINE       0     0     0
            da2p2                ONLINE       0     0     0
            ada2p2               ONLINE       0     0     0
            ada3p2               ONLINE       0     0     0
            da3p2                ONLINE       0     0     0
            da4p2                ONLINE       0     0     0
            da5p2                ONLINE       0     0     0
            da6p2                ONLINE       0     0     0
            da7p2                ONLINE       0     0     0
            da8p2                ONLINE       0     0     0
            da9p2                ONLINE       0     0     0
            da10p2               ONLINE       0     0     0
            9588179411297975516  UNAVAIL      0     0     0  was /dev/da11p2
camcontrol devlist
Code:
<Seagate ST3750640NS R001>       at scbus0 target 0 lun 0 (da0,pass0)
<Seagate ST3750640NS R001>       at scbus0 target 1 lun 0 (da1,pass1)
<Seagate ST3750640NS R001>       at scbus0 target 2 lun 0 (da2,pass2)
<Seagate ST3750640NS R001>       at scbus0 target 3 lun 0 (da3,pass3)
<Seagate ST3750640NS R001>       at scbus0 target 4 lun 0 (da4,pass4)
<Seagate ST3750640NS R001>       at scbus0 target 5 lun 0 (da5,pass5)
<Seagate ST3750640NS R001>       at scbus0 target 6 lun 0 (da6,pass6)
<Seagate ST3750640NS R001>       at scbus0 target 7 lun 0 (da7,pass7)
<Seagate ST3750640NS R001>       at scbus0 target 8 lun 0 (da8,pass8)
<Seagate ST3750640NS R001>       at scbus0 target 9 lun 0 (da9,pass9)
<Seagate ST3750640NS R001>       at scbus0 target 10 lun 0 (da10,pass10)
<Seagate ST3750640NS R001>       at scbus0 target 11 lun 0 (da11,pass11)
<Areca RAID controller R001>     at scbus0 target 16 lun 0 (pass12)
<ST3250620NS 3.AEG>              at scbus2 target 0 lun 0 (ada0,pass13)
<ST3250620NS 3.AEG>              at scbus3 target 0 lun 0 (ada1,pass14)
<ST3750640NS 3.AEG>              at scbus5 target 0 lun 0 (ada2,pass15)
<ST3750640NS 3.AEG>              at scbus6 target 0 lun 0 (ada3,pass16)
<PIONEER DVD-RW DVR-218L 1.00>   at scbus7 target 0 lun 0 (cd0,pass17)
gpart show
Code:
=>        34  1465149101  da0  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da1  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da2  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da3  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da4  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da5  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da6  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da7  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da8  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da9  GPT  (699G)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1460954703    2  freebsd-zfs  (697G)

=>        34  1465149101  da10  GPT  (699G)
          34          94        - free -  (47K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1460954703     2  freebsd-zfs  (697G)

=>        63  1465149105  da11  MBR  (699G)
          63  1465144002     1  !12  (699G)
  1465144065        5103        - free -  (2.5M)

=>       63  488397105  ada0  MBR  (233G)
         63    1930257     1  freebsd  [active]  (943M)
    1930320         63        - free -  (32K)
    1930383    1930257     2  freebsd  (943M)
    3860640       3024     3  freebsd  (1.5M)
    3863664      41328     4  freebsd  (20M)
    3904992  484492176        - free -  (231G)

=>        34  1465149101  ada2  GPT  (699G)
          34          94        - free -  (47K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1460954703     2  freebsd-zfs  (697G)

=>        34  1465149101  ada3  GPT  (699G)
          34          94        - free -  (47K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1460954703     2  freebsd-zfs  (697G)

=>      0  1930257  ada0s1  BSD  (943M)
        0       16          - free -  (8.0K)
       16  1930241       1  !0  (943M)

=>        34  488397101  ada1  GPT  (233G)
          34         94        - free -  (47K)
         128    4194304     1  freebsd-swap  (2.0G)
     4194432  484202703     2  freebsd-zfs  (231G)
glabel status
Code:
          Name  Status  Components
 ufs/FreeNASs3     N/A  ada0s3
 ufs/FreeNASs4     N/A  ada0s4
ufs/FreeNASs1a     N/A  ada0s1a
Note the entry for da11 in the partition-table listing above: the new drive has a completely different layout from the others (MBR rather than GPT, and no swap/ZFS partitions). Is this likely to be the cause of the problems below, and if so, how do I fix it? Or does that get done by magic when the drive joins the volume?
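In case it's relevant, my working assumption is that the new disk needs the same GPT layout as the other pool members before ZFS will accept it. Here is a sketch of what I think that would look like, with the partition sizes and offsets copied from the output above; please correct me if this is the wrong approach:

```shell
# DESTRUCTIVE: assumes da11 is the blank replacement disk with no data on it.
# Wipe the leftover MBR table and create a GPT scheme like the other disks.
gpart destroy -F da11
gpart create -s gpt da11

# 2 GB swap partition starting at sector 128, then a ZFS partition
# filling the rest, mirroring the 4194304-sector swap seen on da0..da10.
gpart add -b 128 -s 4194304 -t freebsd-swap da11
gpart add -t freebsd-zfs da11
```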
Anyhow, I have tried zpool replace:
Code:
[root@storage] ~# zpool replace StoragePool /dev/da11p2 9588179411297975516
cannot open '9588179411297975516': no such GEOM provider
must be a full path or shorthand device name
I've tried to find the correct syntax for zpool replace but can't work out how to turn 9588179411297975516 into something zpool replace will accept. That is, assuming zpool replace is even the right command for these circumstances.
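For what it's worth, my reading of the zpool man page is that the numeric GUID goes where the *old* device belongs, so perhaps I simply had the arguments reversed. Something like this (assuming da11p2 is a freshly created ZFS partition on the replacement disk):

```shell
# Syntax: zpool replace <pool> <old-device> <new-device>
# The GUID identifies the missing vdev; da11p2 is its replacement.
zpool replace StoragePool 9588179411297975516 /dev/da11p2

# Then watch the resilver progress:
zpool status -v StoragePool
```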
I have tried detaching da11p2, both when the new drive is online and when the new drive is unavailable:
Code:
[root@storage] ~# zpool detach StoragePool da11p2
cannot detach da11p2: only applicable to mirror and replacing vdevs
I've searched these forums and elsewhere on the Internet for tips, but either I'm terrible at searching or I've otherwise missed the solution.
Can someone please point me in the right direction or explain where I'm going wrong?
I am reluctant to upgrade FreeNAS remotely, but if the consensus is that doing so may help solve this problem then I will try it and see what happens.
Thank you!