Hi
Happy new year all.
I have read a fair few posts but haven't been able to find my scenario happening to anyone else.
Existing system:
5 x 2TB hard drives and 1 x 32GB USB stick for FreeNAS 8.3.0-p1
Earlier in the week I put in a 3TB hard drive to increase my space, and I wanted to test a hard drive redundancy/replacement before going on a long trip.
I put the 3TB drive in; it was added to the stripe part of the volume and the size of my volume increased, as expected.
Then, in the GUI, I successfully offlined one of the drives in raidz1-0. When I tried to bring it back online, it said the volume was active and I couldn't do anything; I scrubbed, and still nothing.
I found some (old) forum posts saying that you can't replace a disk with itself, so I got another 2TB hard drive, plugged it in, and tried replacing the offlined disk in the volume, but I couldn't get it to work: same problem, the volume is active.
At this point I've taken the drives out and put them back in (tidying up the box), and a differently named disk is now offline (ada4 instead of ada2), but it's still the same problem. I have onlined/offlined/replaced/imported and can't get it to resilver the new drive or online the old one.
Code:
[root@freenas] ~# zpool status -v
  pool: share
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0 in 12h29m with 0 errors on Tue Jan  1 06:01:26 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        share                                           DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            gptid/0ad73996-a192-11e0-adbe-a0000004a81b  ONLINE       0     0     0
            gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b  ONLINE       0     0     0
            14240238040549989919                        OFFLINE      0     0     0  was /dev/dsk/gptid/8d75419f-5287-11e2-8c80-a0000004a81b
          gptid/71c97285-6a90-11e1-81cc-a0000004a81b    ONLINE       0     0     0
          gptid/7253f51a-6a90-11e1-81cc-a0000004a81b    ONLINE       0     0     0
          gptid/e8d44a6a-5266-11e2-9838-a0000004a81b    ONLINE       0     0     0

errors: No known data errors

[root@freenas] ~# camcontrol devlist
<ST2000DL003-9VT166 CC32>           at scbus2 target 0 lun 0 (pass0,ada0)
<ST2000DL003-9VT166 CC32>           at scbus3 target 0 lun 0 (pass1,ada1)
<ST2000DL003-9VT166 CC32>           at scbus4 target 0 lun 0 (pass2,ada2)
<Hitachi HDS723030BLE640 MX6OAAB0>  at scbus5 target 0 lun 0 (pass3,ada3)
<ST2000DL003-9VT166 CC32>           at scbus6 target 0 lun 0 (pass4,ada4)
<ST2000DL003-9VT166 CC32>           at scbus6 target 1 lun 0 (pass5,ada5)
< Patriot Memory PMAP>              at scbus9 target 0 lun 0 (pass6,da0)

[root@freenas] ~# glabel status
                                      Name  Status  Components
gptid/71c97285-6a90-11e1-81cc-a0000004a81b     N/A  ada0p2
gptid/7253f51a-6a90-11e1-81cc-a0000004a81b     N/A  ada1p2
gptid/0ad73996-a192-11e0-adbe-a0000004a81b     N/A  ada2p2
gptid/e8d44a6a-5266-11e2-9838-a0000004a81b     N/A  ada3p2
gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b     N/A  ada5p2
                             ufs/FreeNASs3     N/A  da0s3
                             ufs/FreeNASs4     N/A  da0s4
                    ufsid/5009cfdd91a55783     N/A  da0s1a
                            ufs/FreeNASs1a     N/A  da0s1a
                            ufs/FreeNASs2a     N/A  da0s2a
gptid/8daa7fd8-53bf-11e2-8488-a0000004a81b     N/A  ada4p2
gptid/8d9583f8-53bf-11e2-8488-a0000004a81b     N/A  ada4p1

[root@freenas] ~# gpart show
=>        34  3907029101  ada0  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada1  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada2  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  5860533101  ada3  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  3907029101  ada5  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>      63  15646657  da0  MBR  (7.5G)
        63   1930257    1  freebsd  (942M)
   1930320        63       - free -  (31k)
   1930383   1930257    2  freebsd  [active]  (942M)
   3860640      3024    3  freebsd  (1.5M)
   3863664     41328    4  freebsd  (20M)
   3904992  11741728       - free -  (5.6G)

=>      0  1930257  da0s1  BSD  (942M)
        0       16         - free -  (8.0k)
       16  1930241      1  !0  (942M)

=>      0  1930257  da0s2  BSD  (942M)
        0       16         - free -  (8.0k)
       16  1930241      1  !0  (942M)

=>        34  3907029101  ada4  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834696     2  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)
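For reference, the shell-level equivalents of what I have been trying look roughly like this. The pool name and GUID are from the zpool status above; the replacement device name is an assumption on my part, so the script only prints each command by default rather than running it:

```shell
#!/bin/sh
# Sketch of the online/replace attempts. DRYRUN=1 (the default) only
# prints each command instead of executing it.
DRYRUN=${DRYRUN:-1}
POOL=share
OLD_GUID=14240238040549989919   # the OFFLINE member of raidz1-0 in zpool status

run() {
    if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# Try to bring the offlined vdev back online...
run zpool online "$POOL" "$OLD_GUID"
# ...or swap in the new 2TB disk's data partition (device name assumed).
run zpool replace "$POOL" "$OLD_GUID" /dev/ada4p2
```

Both of those fail with the "active pool" error shown further down.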
Whenever I try to online or replace the drive, it says that it is active:
Code:
Jan 1 15:02:37 freenas manage.py: [middleware.exceptions:38] [MiddlewareError: Disk replacement failed: "invalid vdev specification, use '-f' to override the following errors:, /dev/gptid/1404cdcc-53c8-11e2-8488-a0000004a81b is part of active pool 'share', "]
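Reading that message, my guess is that the new partition still carries a stale ZFS label, so the tools think it already belongs to 'share'. If labelclear is available on this FreeNAS build (I am not certain it is), something like the following might clear it. The device name is copied from the error above, and since labelclear is destructive this sketch only prints the commands by default:

```shell
#!/bin/sh
# DANGER: 'zpool labelclear -f' wipes the ZFS label on the given device.
# DRYRUN=1 (the default) prints the commands instead of running them.
DRYRUN=${DRYRUN:-1}
POOL=share
OLD_GUID=14240238040549989919                            # OFFLINE vdev GUID
NEWDEV=/dev/gptid/1404cdcc-53c8-11e2-8488-a0000004a81b   # from the error above

maybe() {
    if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi
}

maybe zpool labelclear -f "$NEWDEV"                # clear the stale label
maybe zpool replace "$POOL" "$OLD_GUID" "$NEWDEV"  # then retry the replace
```

I have not actually run this yet, since I want to be sure I am pointing it at the right partition first.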
Even in the front end, the Disk Replacement popup says "Replacing disk None", which makes me think something is out of alignment.
With the original disk in, zdb fails to unpack two of the four labels on it, and ada2p2 (the disk I was originally having problems with) now shows offline: 1 in its labels... did I break the labels somehow?
Code:
[root@freenas] ~# zdb -l /dev/ada0p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1900692
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: ''
    top_guid: 15819756277565584026
    guid: 15819756277565584026
    vdev_children: 4
    vdev_tree:
        type: 'disk'
        id: 1
        guid: 15819756277565584026
        path: '/dev/gptid/71c97285-6a90-11e1-81cc-a0000004a81b'
        phys_path: '/dev/gptid/71c97285-6a90-11e1-81cc-a0000004a81b'
        whole_disk: 0
        metaslab_array: 190
        metaslab_shift: 34
        ashift: 9
        asize: 1998246641664
        is_log: 0
        DTL: 264
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1900692
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: ''
    top_guid: 15819756277565584026
    guid: 15819756277565584026
    vdev_children: 4
    vdev_tree:
        type: 'disk'
        id: 1
        guid: 15819756277565584026
        path: '/dev/gptid/71c97285-6a90-11e1-81cc-a0000004a81b'
        phys_path: '/dev/gptid/71c97285-6a90-11e1-81cc-a0000004a81b'
        whole_disk: 0
        metaslab_array: 190
        metaslab_shift: 34
        ashift: 9
        asize: 1998246641664
        is_log: 0
        DTL: 264
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1900692
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: ''
    top_guid: 15819756277565584026
    guid: 15819756277565584026
    vdev_children: 4
    vdev_tree:
        type: 'disk'
        id: 1
        guid: 15819756277565584026
        path: '/dev/gptid/71c97285-6a90-11e1-81cc-a0000004a81b'
        phys_path: '/dev/gptid/71c97285-6a90-11e1-81cc-a0000004a81b'
        whole_disk: 0
        metaslab_array: 190
        metaslab_shift: 34
        ashift: 9
        asize: 1998246641664
        is_log: 0
        DTL: 264
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1900692
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: ''
    top_guid: 15819756277565584026
    guid: 15819756277565584026
    vdev_children: 4
    vdev_tree:
        type: 'disk'
        id: 1
        guid: 15819756277565584026
        path: '/dev/gptid/71c97285-6a90-11e1-81cc-a0000004a81b'
        phys_path: '/dev/gptid/71c97285-6a90-11e1-81cc-a0000004a81b'
        whole_disk: 0
        metaslab_array: 190
        metaslab_shift: 34
        ashift: 9
        asize: 1998246641664
        is_log: 0
        DTL: 264

[root@freenas] ~# zdb -l /dev/ada4p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1878444
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: 'freenas.local'
    top_guid: 16190259589268306243
    guid: 14240238040549989919
    vdev_children: 4
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16190259589268306243
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 5994739924992
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 14635987785245010869
            path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 184
        children[1]:
            type: 'disk'
            id: 1
            guid: 9870541356399624332
            path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 183
        children[2]:
            type: 'disk'
            id: 2
            guid: 14240238040549989919
            path: '/dev/gptid/1d7ca002-527f-11e2-890c-a0000004a81b'
            phys_path: '/dev/gptid/1d7ca002-527f-11e2-890c-a0000004a81b'
            whole_disk: 0
            DTL: 125
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1878444
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: 'freenas.local'
    top_guid: 16190259589268306243
    guid: 14240238040549989919
    vdev_children: 4
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16190259589268306243
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 5994739924992
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 14635987785245010869
            path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 184
        children[1]:
            type: 'disk'
            id: 1
            guid: 9870541356399624332
            path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 183
        children[2]:
            type: 'disk'
            id: 2
            guid: 14240238040549989919
            path: '/dev/gptid/1d7ca002-527f-11e2-890c-a0000004a81b'
            phys_path: '/dev/gptid/1d7ca002-527f-11e2-890c-a0000004a81b'
            whole_disk: 0
            DTL: 125
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

[root@freenas] ~# zdb -l /dev/ada2p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1900692
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: ''
    top_guid: 16190259589268306243
    guid: 14635987785245010869
    vdev_children: 4
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16190259589268306243
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 5994739924992
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 14635987785245010869
            path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 184
        children[1]:
            type: 'disk'
            id: 1
            guid: 9870541356399624332
            path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 183
        children[2]:
            type: 'disk'
            id: 2
            guid: 14240238040549989919
            path: '/dev/dsk/gptid/8d75419f-5287-11e2-8c80-a0000004a81b'
            phys_path: '/dev/gptid/1d7ca002-527f-11e2-890c-a0000004a81b'
            whole_disk: 0
            DTL: 125
            offline: 1
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1900692
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: ''
    top_guid: 16190259589268306243
    guid: 14635987785245010869
    vdev_children: 4
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16190259589268306243
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 5994739924992
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 14635987785245010869
            path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 184
        children[1]:
            type: 'disk'
            id: 1
            guid: 9870541356399624332
            path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 183
        children[2]:
            type: 'disk'
            id: 2
            guid: 14240238040549989919
            path: '/dev/dsk/gptid/8d75419f-5287-11e2-8c80-a0000004a81b'
            phys_path: '/dev/gptid/1d7ca002-527f-11e2-890c-a0000004a81b'
            whole_disk: 0
            DTL: 125
            offline: 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1900692
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: ''
    top_guid: 16190259589268306243
    guid: 14635987785245010869
    vdev_children: 4
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16190259589268306243
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 5994739924992
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 14635987785245010869
            path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 184
        children[1]:
            type: 'disk'
            id: 1
            guid: 9870541356399624332
            path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 183
        children[2]:
            type: 'disk'
            id: 2
            guid: 14240238040549989919
            path: '/dev/dsk/gptid/8d75419f-5287-11e2-8c80-a0000004a81b'
            phys_path: '/dev/gptid/1d7ca002-527f-11e2-890c-a0000004a81b'
            whole_disk: 0
            DTL: 125
            offline: 1
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'share'
    state: 0
    txg: 1900692
    pool_guid: 3006137281306932219
    hostid: 1143295249
    hostname: ''
    top_guid: 16190259589268306243
    guid: 14635987785245010869
    vdev_children: 4
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16190259589268306243
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 5994739924992
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 14635987785245010869
            path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0ad73996-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 184
        children[1]:
            type: 'disk'
            id: 1
            guid: 9870541356399624332
            path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            phys_path: '/dev/gptid/0b3a6a4e-a192-11e0-adbe-a0000004a81b'
            whole_disk: 0
            DTL: 183
        children[2]:
            type: 'disk'
            id: 2
            guid: 14240238040549989919
            path: '/dev/dsk/gptid/8d75419f-5287-11e2-8c80-a0000004a81b'
            phys_path: '/dev/gptid/1d7ca002-527f-11e2-890c-a0000004a81b'
            whole_disk: 0
            DTL: 125
            offline: 1
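To keep track of which partition carries the OFFLINE vdev, I have been grepping the labels for its GUID. This sketch just prints the one-liners I run (partition names are from the camcontrol/glabel output above):

```shell
#!/bin/sh
# Print a zdb/grep one-liner per data partition; each counts how many of
# that partition's four ZFS labels mention the OFFLINE vdev's GUID.
GUID=14240238040549989919
for dev in ada0p2 ada1p2 ada2p2 ada4p2 ada5p2; do
    echo "zdb -l /dev/$dev | grep -c $GUID"
done
```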
Thanks for any assistance; hopefully that's enough information.