SOLVED How to remove non-existent pool?


RueGorE

Dabbler
Joined
Dec 10, 2018
Messages
18
I can't quite seem to figure this one out. Before I upgraded my FreeNAS box with a bunch of new replacement disks, I replicated my pool to a single large temporary disk (I named it COLDSTORAGE), then replicated the pool to the new disks.
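For anyone following along, that kind of replication boils down to a recursive snapshot plus zfs send/receive. A minimal sketch, assuming a source pool named tank, a snapshot named @migrate, and a target dataset COLDSTORAGE/backup (all placeholders, not my actual names):

Code:
# Snapshot the source pool recursively, then stream everything to the temp pool
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F COLDSTORAGE/backup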

So I have my original pool and all my data intact, and everything is working great on the brand-new disks. The single large temporary disk (COLDSTORAGE) was then removed, never to be seen again.

But here's the rub: when I run zpool status -v, I still see COLDSTORAGE listed, and it also appears in the daily report emails. This is the output:

Code:
# zpool status -v
  pool: COLDSTORAGE
state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://illumos.org/msg/ZFS-8000-JQ
  scan: none requested
config:

    NAME                    STATE     READ WRITE CKSUM
    COLDSTORAGE             UNAVAIL      0     0     0
      11612376045504922321  REMOVED      0     0     0  was /dev/gptid/ee182125-f214-11e8-a22e-1831bf506e59

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x0>
        <metadata>:<0x1b>



When I was researching how to do away with this non-existent pool, I came across this post, but it didn't seem to help me. I found another post that suggested using zpool export <pool> and zpool labelclear -f <device>, but that didn't help either.
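Concretely, the sequence those posts suggest looks something like this (the gptid is the one from my zpool status output above; since the disk is physically gone, there's no device left for labelclear to act on, which may be why it failed):

Code:
# Export the phantom pool, then wipe the ZFS label from the old device
zpool export COLDSTORAGE
zpool labelclear -f /dev/gptid/ee182125-f214-11e8-a22e-1831bf506e59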

I still have that temporary disk on hand; however, I've exhausted all my motherboard SATA ports. I suppose I could temporarily remove one of the disks from my RAIDZ2 (6x 4TB disks), put the temporary disk in, and then do something to get rid of the temporary COLDSTORAGE pool, but I really don't want to bust the pool open if I can somehow manage to clear this through the CLI.

Halp?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
That's really weird; zpool export should've worked. It worked for me in a similar situation just the other day.

My case was slightly odd, though: I didn't get any warnings, but there was immense log spam from zfsd, which eventually crashed the middleware and then the whole system.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
What is the actual output from zpool export COLDSTORAGE?
 

RueGorE

Dabbler
Joined
Dec 10, 2018
Messages
18
What is the actual output from zpool export COLDSTORAGE?

That's the thing -- there is no output. After I type the command and hit ENTER, the cursor drops to the next line and it just sits there. It seems like it's doing something, but I let it run for hours and nothing came of it.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
While it’s running, open a second terminal and check the output of zpool history -i
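Something like this, scoped to the problem pool to keep the output manageable (-i includes internal events):

Code:
# In a second session, while the export is still hung
zpool history -i COLDSTORAGE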
 

RueGorE

Dabbler
Joined
Dec 10, 2018
Messages
18
Whoa buddy... the output is ridiculously large. I put it up on Pastebin: https://pastebin.com/74SMurYN (I hope external links are allowed).
The only relevant lines I can see in this output are:
Code:
History for 'COLDSTORAGE':
cannot get history for 'COLDSTORAGE': pool I/O is currently suspended
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Try zpool export -f COLDSTORAGE to force the export.
 

RueGorE

Dabbler
Joined
Dec 10, 2018
Messages
18
I tried that, but still nothing happened. I abandoned this route and gave in to tinkering with the hardware again. Ultimately I cleared the non-existent pool by doing the following (a consolidated command sketch follows the list):
  1. Since I had no more SATA ports available, I first backed up my configuration and made sure I had a current geli.key for my pool. Then I picked one of the 6 disks and offlined it through the GUI (Storage > select the pool > Volume Status > pick a disk > Offline). Offlining the disk puts the pool in a degraded state, but I felt confident since RAIDZ2 still afforded me another disk's worth of parity.
    Note: I have a drive cage with activity lights for each disk. You can identify a disk by running dd if=/dev/<disk> of=/dev/null at the terminal to light up its activity light, where <disk> is the disk name according to your system (usually ada# or da#, found in Storage > View Disks), or by serial number if you pre-labeled your disks.
  2. I swapped in the disk that contained the unwanted pool. I found that I could not import the COLDSTORAGE volume through the GUI, which tells me I must have done something to the data on that disk (probably marked it to destroy its data before removing it the first time). At the terminal, zpool status -v confirmed the pool was still in the same UNAVAIL state.
  3. I then issued zpool clear COLDSTORAGE, which gave no output (a good sign!), checked zpool status -v again, and observed that the COLDSTORAGE pool was now gone. Yay!
  4. I swapped the disks back again, but my system didn't automatically bring the disk online in the pool. On the Volume Status page I could see the disk was still offline, but I was only offered Replace, not Online. FreeNAS warned me that using Replace would invalidate the pool's key, so I chose not to do that. The View Disks screen only gave options for Edit and Wipe, and I could not use the Import Volume wizard. Back at the terminal, I issued zpool online <pool> <device number> (again, no output) and then zpool status -v.
    Note: The <device number> is the numeric ID that zpool status shows for the disk, where the other disks show their gptid.
  5. The disk was back online and resilvering was in progress. It resilvered 313M in 7 seconds with 0 errors.
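Putting the command-line pieces of the steps above together, the terminal side looked roughly like this (the disk name, pool name, and numeric ID are placeholders; take yours from Storage > View Disks and zpool status):

Code:
# Step 1: identify the disk to pull by lighting up its activity LED
dd if=/dev/ada3 of=/dev/null

# Step 3: with the COLDSTORAGE disk swapped in, clear the suspended pool
zpool clear COLDSTORAGE
zpool status -v    # COLDSTORAGE should no longer be listed

# Step 4: after swapping the original disk back in, online it using the
# numeric device ID that zpool status shows in place of a gptid
zpool online MYPOOL 1234567890123456789
zpool status -v    # the resilver should start automatically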

And all was well again. I appreciate the input and tips! Hopefully this post helps someone else.
 