I removed an SSD L2ARC drive from my encrypted pool and was still unable to reuse it on the NAS server (dd and similar tools failed to write to the drive). I ended up physically removing the drive from the server to wipe it for use in another computer.
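For reference, I suspect the writes failed because GEOM still considered the provider in use; FreeBSD refuses writes to open providers unless you lift its safety check. This is only a sketch of what I'd try next time, and /dev/da6 is a placeholder for whatever the SSD actually enumerates as:
Code:
# Detach the encrypted layer if it is still attached, then wipe
# the geli metadata from the underlying partition.
geli detach gptid/54e44e7e-2031-11e4-9708-00248182cc52.eli
geli clear gptid/54e44e7e-2031-11e4-9708-00248182cc52

# GEOM normally refuses writes to providers it considers in use;
# this sysctl lifts that check (remember to set it back afterwards).
sysctl kern.geom.debugflags=0x10

# Zero the start of the disk to destroy the partition table.
# /dev/da6 is a placeholder -- substitute the SSD's real device node.
dd if=/dev/zero of=/dev/da6 bs=1m count=16

sysctl kern.geom.debugflags=0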
The problem is that now, when the NAS reboots and the pool mounts, it looks like it is still trying to use the SSD that is no longer present. I don't see the SSD in zpool status anymore; it also didn't show up there after I issued the zpool remove command, before I turned off the server.
The command I used, since the web interface wouldn't let me:
zpool remove core gptid/54e44e7e-2031-11e4-9708-00248182cc52.eli
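If it helps anyone reading later, the removal can be double-checked against the pool's own command log; zpool history is standard ZFS, nothing FreeNAS-specific:
Code:
# Every administrative command ever run on the pool is logged here;
# the remove should show up near the end.
zpool history core | grep remove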
On boot I see this in the logs:
Sep 15 18:14:25 freenas manage.py: [middleware.notifier:1271] Failed to geli attach gptid/54e44e7e-2031-11e4-9708-00248182cc52: geli: Cannot open gptid/54e44e7e-2031-11e4-9708-00248182cc52: No such file or directory.
Other than that entry in the log, the server is working fine. Is there a conf file somewhere that still references the now-missing drive?
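My own guess is that it isn't a flat conf file at all: FreeNAS 9.x keeps its configuration in a SQLite database at /data/freenas-v1.db, so I'd expect the stale geli provider to be recorded there. Here is a sketch of how I'd look; the table name storage_encrypteddisk is an assumption on my part, so list the tables first:
Code:
# FreeNAS 9.x stores its config in this SQLite database.
# Confirm the table name before querying -- storage_encrypteddisk is a guess.
sqlite3 /data/freenas-v1.db ".tables"
sqlite3 /data/freenas-v1.db "SELECT * FROM storage_encrypteddisk;"
If the old gptid shows up in a row there, that is presumably what the boot-time geli attach keeps tripping over.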
The last scrub was canceled because I didn't want to wait the 19 hours for it to finish before I could remove the drive. Scrubs have run weekly for the last few months without any errors, so I'm not too worried about missing a week.
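For completeness, stopping and restarting a scrub is just plain zpool usage:
Code:
zpool scrub -s core   # cancel the running scrub
zpool scrub core      # kick off a fresh one later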
FreeNAS-9.2.1.5-RELEASE-x64 (80c1d35)
Code:
[root@freenas] /# zpool status
  pool: core
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(7) for details.
  scan: scrub canceled on Sun Sep 14 21:57:25 2014
config:

        NAME                                                STATE     READ WRITE CKSUM
        core                                                ONLINE       0     0     0
          raidz2-0                                          ONLINE       0     0     0
            gptid/c3b616c5-6396-11e3-88d5-6805ca0e7899.eli  ONLINE       0     0     0
            gptid/c43807a7-6396-11e3-88d5-6805ca0e7899.eli  ONLINE       0     0     0
            gptid/c4cd861c-6396-11e3-88d5-6805ca0e7899.eli  ONLINE       0     0     0
            gptid/7a39da5a-9728-11e3-8c4d-6805ca0e7899.eli  ONLINE       0     0     0
          raidz2-1                                          ONLINE       0     0     0
            gptid/df8c3d92-dc85-11e3-b5d2-00248182cc52.eli  ONLINE       0     0     0
            gptid/dffa53a8-dc85-11e3-b5d2-00248182cc52.eli  ONLINE       0     0     0
            gptid/e07ef30e-dc85-11e3-b5d2-00248182cc52.eli  ONLINE       0     0     0
            gptid/e10fbebf-dc85-11e3-b5d2-00248182cc52.eli  ONLINE       0     0     0

errors: No known data errors