Jack Greene (Cadet). Joined: Apr 21, 2016. Messages: 2
Ok, I had a 2TB HDD fail in my 8-drive volume. I swapped the failed drive for a new one, and the system recognized it. Then I went to the GUI and selected View Disks, but the new drive didn't appear there. So then I made the error of opening the ZFS Volume Manager and clicking "Add Extra Device" for my volume. No sooner had I clicked the button than I started having second thoughts. I didn't want to expand my volume; I simply wanted to replace the failed HDD.
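For reference, what I *meant* to do was an in-place replace of the REMOVED member, something along these lines (the old device's numeric guid is from my status output below; the new disk's gptid is a placeholder, since I never got that far):

```sh
# Intended fix: swap the REMOVED member for the new disk in place.
# The old device can be referenced by the numeric guid shown in 'zpool status'.
zpool replace Volume1 821753282234454170 /dev/gptid/<new-disk-gptid>
```
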
[root@nl-admin] ~# zpool status
  pool: Volume1
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0 in 1h12m with 0 errors on Sun Mar 27 01:12:40 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        Volume1                                         DEGRADED     0     0     0
          raidz3-0                                      DEGRADED     0     0     0
            gptid/c83defce-0f56-11e3-bcda-002590c8d19c  ONLINE       0     0     0
            821753282234454170                          REMOVED      0     0     0  was /dev/gptid/c8b1c630-0f56-11e3-bcda-002590c8d19c
            gptid/c9275660-0f56-11e3-bcda-002590c8d19c  ONLINE       0     0     0
            gptid/c9a06e1a-0f56-11e3-bcda-002590c8d19c  ONLINE       0     0     0
            gptid/ca1f64b8-0f56-11e3-bcda-002590c8d19c  ONLINE       0     0     0
            gptid/c63b3fdd-8915-11e3-9b6f-002590c8d19c  ONLINE       0     0     0
            gptid/cb2e6965-0f56-11e3-bcda-002590c8d19c  ONLINE       0     0     0
            gptid/cbb30f90-0f56-11e3-bcda-002590c8d19c  ONLINE       0     0     0
          gptid/67263eb9-0720-11e6-a56c-002590c8d19c    ONLINE       0     0     0

errors: No known data errors
So now my volume thinks it has 9 members, when all I wanted to do was replace the failed disk. History on the volume shows:

zpool add -f Volume1 /dev/gptid/67263eb9-0720-11e6-a56c-002590c8d19c

UGH - I see 'split' isn't an option. What about detach? Either way, everything I'm reading seems to indicate that the only way to shrink Volume1 back to its size before the error is to destroy it and recreate.
I'm writing for your experience: has anyone had any luck running detach at this RAID level?
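From what I've read, I don't think detach will even run here - my understanding is it only applies to members of mirror (or in-progress replacing) vdevs, and the disk I added went in as its own top-level stripe vdev, not as a 9th raidz3 member. So I'd expect something like this to just error out (command shown for discussion, not something I've run yet):

```sh
# My expectation: detach refuses because the target is not part of a mirror,
# e.g. "cannot detach ...: only applicable to mirror and replacing vdevs".
zpool detach Volume1 gptid/67263eb9-0720-11e6-a56c-002590c8d19c
```
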