Long-time user of FreeNAS and loving it, but last week I stumbled on a problem I simply cannot get around.
I've got a zpool consisting of ten raidz2 vdevs with six 3TB drives each, so it's a quite large 126.8TB zpool I'd rather not lose :)
I've been through floods and electrical surges over the years I've expanded the pool, but lately all the old Seagate drives (40,000+ hours) started generating a hell of a lot of S.M.A.R.T. errors, so I started replacing those with WD Red drives, which I love.
Now, after replacing more than 50% of the 60 drives in the pool, one raidz2 vdev gave me trouble. It was the one in which I had replaced more or less all my Seagate drives. As I understand it, a replace creates a temporary "replacing" entry that should drop the old disk on its own once the resilver completes, but when I selected replace on a Seagate drive it just added the new WD Red and never released the Seagate. So that vdev now looks like it consists of eight devices instead of six. The thing is, there is one spot where the unavailable drive is impossible to detach. I've tried several times to detach it in the FreeNAS GUI, and the process generates no errors, but when I look at the volume status the device is still there. Then, of course, I have one drive that reports it is being replaced, but nothing happens.
Any ideas how to detach the drives that should no longer be in the pool?!
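In case it helps to see what I was about to try next: from what I've read, a stale member like this can be detached from the shell by its numeric GUID (the bare number shown in the zpool status output below). A rough sketch, assuming my pool is named "tank" (the real pool name and GUID have to be taken from the actual output):

# detach the stale replacing member by its GUID instead of a device name
zpool detach tank 15310076402626339188

I'm hesitant to run that against a pool this size without a second opinion, though, hence this post.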
-----------------------------------------------------------------------------------------------
Build FreeNAS-9.10.2-U5 (561f0d7a1)
Platform Intel(R) Xeon(R) CPU E31270 @ 3.40GHz
Memory 32716MB
System Time Fri Aug 04 09:36:03 CEST 2017
Uptime 9:36AM up 3 days, 22:24, 1 user
Load Average 0.03, 0.17, 0.16
------------------------------------------------------------------------------------------------------
Raidz2-4
da58p2 ONLINE
15310076402626339188 UNAVAILABLE
da21p2 ONLINE
da51p2 ONLINE
da57p2 ONLINE
da27p2 ONLINE
da20p2 ONLINE
da32p2 ONLINE
raidz2-3 ONLINE 0 0 0
gptid/4f61e235-c525-11e6-8853-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/46fe1c4b-5da9-11e7-ab56-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/bb534030-dc7a-11e6-b35d-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/3951d766-8dbc-11e4-addb-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/8402a40c-d5e9-11e4-addb-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/83ce4b2c-b681-11e3-80f2-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
raidz2-4 DEGRADED 0 0 2
gptid/39b6747c-bd24-11e6-8853-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/2c1a4ece-4afe-11e7-8fcd-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
replacing-2 ONLINE 0 0 0
gptid/2deee1c6-565a-11e2-a391-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/da80b257-50dd-11e7-8fcd-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/836c3a45-c15a-11e6-8853-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
gptid/1f700822-4b9d-11e7-8fcd-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
replacing-5 DEGRADED 0 0 0
15310076402626339188 UNAVAIL 0 0 0 was /dev/gptid/375c4838-565a-11e2-a391-002590576f25
gptid/47396353-5aaf-11e7-be00-002590576f25 ONLINE 0 0 0 block size: 512B configured, 4096B native
raidz2-5 ONLINE 0 0 0
gptid/79e6ffbf-c358-11e6-8853-002590576f25 ONLINE 0 0 0
gptid/2af2d9d6-bbc0-11e6-8853-002590576f25 ONLINE 0 0 0
gptid/6e101e8f-3284-11e3-a461-002590576f25 ONLINE 0 0 0
gptid/6ef0985b-3284-11e3-a461-002590576f25 ONLINE 0 0 0
gptid/91e9785c-c068-11e6-8853-002590576f25 ONLINE 0 0 0
gptid/70ba9262-3284-11e3-a461-002590576f25 ONLINE 0 0 0
raidz2-6 ONLINE 0 0 0
gptid/d1d39e3a-d6a1-11e6-8137-002590576f25 ONLINE 0 0 0
gptid/e2e7b41b-82c1-11e3-901a-002590576f25 ONLINE 0 0 0
gptid/e3bd7211-82c1-11e3-901a-002590576f25 ONLINE 0 0 0
gptid/e29946ab-417c-11e4-b169-002590576f25 ONLINE 0 0 0
gptid/e8aeebc6-bdd2-11e6-8853-002590576f25 ONLINE 0 0 0
gptid/1c84d069-ce87-11e6-8853-002590576f25 ONLINE 0 0 0
----------------------------------------------------------------------------------------------------------------
/Regards Peter K