Zpool import issues after upgrade from 9.2 to 9.3 (fresh install) due to dead USB boot stick

Status
Not open for further replies.

moppidoo

Cadet
Joined
Aug 9, 2014
Messages
6
Hi,
As the title says, I'm having a bit of an issue importing my ZFS volumes after the installation. After scouring the forum threads for hours trying to find answers, I have come to some conclusions, but I've also hit a technical hurdle that I can't figure out how to get over.

I have one particularly overfilled volume from before the replacement/upgrade of FreeNAS, and now I have trouble importing it via the GUI. The GUI sees the volume, but the import always ends with an error asking me to check the pool status.

So I opened the shell and ran zpool status, which gave the output below (truncated):
Code:
pool: WiredSTO_02
state: ONLINE
  scan: scrub repaired 0 in 10h50m with 0 errors on Sat Jan 10 16:20:32 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        WiredSTO_02                                     ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/f24bc20e-441e-11e2-9642-bc5ff448f101  ONLINE       0     0     0
            gptid/f2bfda19-441e-11e2-9642-bc5ff448f101  ONLINE       0     0     0
            gptid/f33d08f8-441e-11e2-9642-bc5ff448f101  ONLINE       0     0     0
            gptid/f3beaa89-441e-11e2-9642-bc5ff448f101  ONLINE       0     0     0
            gptid/f46a91fd-441e-11e2-9642-bc5ff448f101  ONLINE       0     0     0
            gptid/f5067074-441e-11e2-9642-bc5ff448f101  ONLINE       0     0     0
            gptid/f59ddfa2-441e-11e2-9642-bc5ff448f101  ONLINE       0     0     0
            gptid/f6094fa9-441e-11e2-9642-bc5ff448f101  ONLINE       0     0     0

errors: No known data errors


So I exported the pool from the CLI and attempted a manual import with
Code:
zpool import -R /mnt WiredSTO_02


and got the following result:
Code:
cannot mount '/mnt/WiredSTO_02/MMWPublic_02': failed to create mountpoint
cannot mount '/mnt/WiredSTO_02/P_Kollectionz': failed to create mountpoint


using:
Code:
zfs list


returns (truncated):
Code:
NAME                                                              USED  AVAIL  REFER  MOUNTPOINT
WiredSTO_02                                                      10.1T      0   427K  /mnt/WiredSTO_02
WiredSTO_02/MMWPublic_02                                         4.04T      0  4.04T  /mnt/WiredSTO_02/MMWPublic_02
WiredSTO_02/P_Kollectionz                                        6.08T      0  6.08T  /mnt/WiredSTO_02/P_Kollectionz


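For what it's worth, the AVAIL column reading 0 everywhere suggests the pool is completely full, so creating the child mountpoint directories inside /mnt/WiredSTO_02 presumably fails for lack of space, which would explain the "failed to create mountpoint" errors. One way to confirm (a sketch; run after importing the pool):

```shell
# Show free space as ZFS sees it for the pool and every dataset.
# AVAIL of 0 on the root dataset means the mkdir for the child
# mountpoints cannot allocate space.
zfs get -r -o name,property,value available WiredSTO_02

# Pool-level view, including the capacity percentage.
zpool list WiredSTO_02
```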

In several threads I read that people had success deleting/nulling unimportant files in the pool before they were able to Auto Import the volume via the GUI again. However, all my files are in the two datasets whose mountpoints failed to mount, and I don't know how to access them so that I can delete anything.
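One avenue sometimes suggested for exactly this situation is snapshots: they can be listed and destroyed without the datasets being mounted, so if any exist, destroying one may free enough space for the mountpoints to be created. A sketch (the snapshot name below is a made-up example, not from this system):

```shell
# List all snapshots in the pool; this does not require the
# datasets to be mounted.
zfs list -t snapshot -r WiredSTO_02

# Destroying a snapshot frees the space it holds.
# "auto-20141201" is a hypothetical snapshot name.
zfs destroy WiredSTO_02/MMWPublic_02@auto-20141201
```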

The other potential option is to add an extra disk/vdev to the pool, which might solve the problem. But what would I need for that to happen? This is an 8x2TB RAIDZ2 array, and I don't know how to do it in the GUI, even assuming I have the hardware resources, since the volume won't show up in the Volume Manager.
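For reference, an existing RAIDZ2 vdev cannot be grown by adding individual disks; extra capacity means adding a whole new vdev to the pool with zpool add. A minimal CLI sketch, assuming eight spare disks da8 through da15 (hypothetical device names; FreeNAS normally does this through the Volume Manager using gptid labels rather than raw devices):

```shell
# Add a second 8-disk RAIDZ2 vdev to the existing pool.
# Device names are placeholders; check yours with `camcontrol devlist`.
zpool add WiredSTO_02 raidz2 da8 da9 da10 da11 da12 da13 da14 da15
```

Note that a vdev, once added, cannot be removed again, so this is a permanent change to the pool layout.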

Any help would be appreciated!

moppidoo
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So it looks like your dataset metadata is likely corrupted beyond the ability to be mounted. At this point I have no advice except to restore from backup. There's a reason you get the 80% full warning and then the red 95% full warning: filling a pool completely is actually dangerous, because you can lose the data forever.
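The warning thresholds mentioned here refer to pool capacity, which can be checked from the shell (a sketch; the CAP column shows the percentage used):

```shell
# Show pool capacity; FreeNAS warns at 80% and again at 95% full.
zpool list -o name,size,allocated,free,capacity WiredSTO_02
```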
 