Unable to import pool

Status: Not open for further replies.

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
Hi,
I suffered an ASRock Rack CDIxxx Atom board failure, as has been seen frequently. I replaced it with a similar Supermicro board and installed a fresh 9.10.2 instance, then attempted to import my pool from the previous iteration. The import in the WebGUI fails with a message telling me to check the pool status.
From the CLI, zpool status shows the pool is present, online, and without errors. Output below:
Code:
  pool: hs2-1
 state: ONLINE
  scan: scrub repaired 0 in 0h20m with 0 errors on Sat Jan  7 16:17:00 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        hs2-1                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f5c71912-b888-11e5-9e53-525400aaccee  ONLINE       0     0     0
            gptid/f67318b1-b888-11e5-9e53-525400aaccee  ONLINE       0     0     0

errors: No known data errors
I rebooted, to no avail. I searched previous threads, saw a post by cyberjock, and attempted to export and then reimport via the GUI. Still no joy.
Any input greatly appreciated!
Bob
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
I have gotten even more confused. I am able to fully access and transfer data out from the shell by going to /mnt/hs2-1 and then into the individual directories. The only thing that does not appear to work is seeing or manipulating the volume from the WebGUI. Maybe I am misunderstanding how ZFS works, but shouldn't the GUI be able to see the pool if I can access the data from the CLI?
Bob
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
In order for the GUI to see the pool, it needs to be imported from the GUI.
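If it is already imported at the command line, export it there first so the GUI import can claim it. A minimal sketch, assuming your pool name from above:
Code:
# Release the pool from the CLI
zpool export hs2-1
# Then run the volume import from the WebGUI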
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
Understood. My question is why the GUI will not import and shows an error stating "unable to import, check status", yet the status is obviously ONLINE, even though I did not import via the CLI in this instance.
B
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Well, wouldn't the appropriate thing be to tail the logs when the error message occurs?
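Something like this, while you retry the import in the WebGUI (the middleware logs to the system log):
Code:
tail -f /var/log/messages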
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
OK, here it is:

Code:
Jan  8 06:32:59 freenas ZFS: vdev state changed, pool_guid=13331514992486076256 vdev_guid=4239070409737983228
Jan  8 06:32:59 freenas ZFS: vdev state changed, pool_guid=13331514992486076256 vdev_guid=1250868852907949488
Jan  8 06:33:00 freenas manage.py: [middleware.notifier:3547] Importing hs2-1 [13331514992486076256] failed with: cannot mount '/mnt/hs2-1/office-backups/jails/owncloud_1': failed to create mountpoint
Jan  8 06:33:00 freenas manage.py: [middleware.exceptions:37] [MiddlewareError: The volume "hs2-1" failed to import, for futher details check pool status]
[root@freenas ~]#

It looks like the owncloud plugin is being problematic, but I still don't understand why the directory in question is accessible via /mnt/hs2-1/owncloud..... from the CLI.

The jail was created under 9.3, so I get that the templates changed. Is the workaround simply deleting the owncloud directory so Python quits choking on the import?
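Would something like this tell me whether that path is a real dataset or just a leftover directory? (A sketch; the names are my guess from the log above.)
Code:
# Datasets show up in zfs list; plain directories do not
zfs list -r hs2-1
# Narrow it down to the jail in question
zfs list -r hs2-1 | grep owncloud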

Bob
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
So when you replaced your system did you restore your config, or are you trying to start from scratch and just import the data?
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
Code:
          mirror-0  ONLINE       0     0     0
            da1p2   ONLINE       0     0     0
            da0p2   ONLINE       0     0     0

errors: No known data errors

  pool: hs2-1
 state: ONLINE
  scan: scrub repaired 0 in 0h20m with 0 errors on Sat Jan  7 16:17:00 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        hs2-1                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f5c71912-b888-11e5-9e53-525400aaccee  ONLINE       0     0     0
            gptid/f67318b1-b888-11e5-9e53-525400aaccee  ONLINE       0     0     0

errors: No known data errors
[root@freenas ~]#


The top entry that was cut off is the freenas-boot pool; that one is fine.

Bob
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
Started from scratch; I am attempting to import the existing pool.
Thanks
B
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Well it's weird that it's choking on a jail dataset. If I were you I would delete that dataset and try to import again.
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
Agreed, it does not seem to make sense, hence my utter confusion.
To delete the dataset, do I simply delete the directory at /mnt/hs2-1/owncloud.... or is there a ZFS command I need to use in this case, i.e. zfs destroy, since the dataset appears mounted but is invisible outside of the CLI?
Thanks
B
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
Weird, zfs destroy fails as well:
Code:
  pool: hs2-1
 state: ONLINE
  scan: scrub repaired 0 in 0h20m with 0 errors on Sat Jan  7 16:17:00 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        hs2-1                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f5c71912-b888-11e5-9e53-525400aaccee  ONLINE       0     0     0
            gptid/f67318b1-b888-11e5-9e53-525400aaccee  ONLINE       0     0     0

errors: No known data errors
[root@freenas /mnt/hs2-1/jails]# zfs destroy /mnt/hs2-1/jails
cannot open '/mnt/hs2-1/jails': invalid dataset name
[root@freenas /mnt/hs2-1/jails]# zfs destroy /mnt/hs2-1/jails/owncloud_1
cannot open '/mnt/hs2-1/jails/owncloud_1': invalid dataset name
[root@freenas /mnt/hs2-1/jails]#


Bob
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
I'm not sure where to go from here. Can you add a -f to force it? I'm not really familiar with the zfs destroy command.
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
Using -f was not successful. My Google-fu thus far has not revealed the answer, but I will continue looking. Any other advice is appreciated. I may have to simply copy the data off from the command line and then completely wipe and reconfigure this pool, unless anyone else can point me in the right direction or Google comes through.
B
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
The zfs destroy command needs the name of the dataset relative to the pool, not the mountpoint.
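For example (a sketch; the exact dataset name is an assumption based on the mountpoint you showed):
Code:
# Find the real dataset names first
zfs list -r hs2-1
# Then destroy by dataset name, without the /mnt prefix
zfs destroy hs2-1/jails/owncloud_1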
 

KODB

Dabbler
Joined
Dec 13, 2015
Messages
12
OK, I was able to destroy the offending dataset from the old jail, but I am still unable to import. I am moving the data off the accessible mount points and will start the pool from scratch with a clean install of 9.10.2.
Thanks all for the help!
B
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
It expects a filesystem name on my system:
Code:
root@T20:~# zfs destroy -n /pool0/windows
cannot open '/pool0/windows': invalid dataset name
root@T20:~# zfs destroy -n pool0/windows
cannot destroy 'pool0/windows': filesystem has children
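Note that -n only does a dry run; since the filesystem has children, -r (recursive) would be needed, and -v shows what would be destroyed:
Code:
root@T20:~# zfs destroy -n -v -r pool0/windows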
 