SOLVED Zpool imported but files missing

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
I was trying to upgrade my 9.10 installation to 11. The guide shows that I could do it by specifying a new boot environment, which to me means a separate USB drive, so that I could hang on to the old 9.10 boot drive if anything went south. However, the installer didn't offer this option, and I realized it was upgrading onto the existing 9.10 boot drive. I stopped the PC, but the 9.10 boot drive now panics whenever it is booted.

To salvage the situation I have now re-installed 9.10 on a new USB drive.

My existing zpool was then imported using the GUI's Volume -> Import Volume. The Storage page shows the zpool correctly with 14 TiB as before, but in the CLI the mount point is empty.

I checked the size of the root file system using the command "du -hs /" and found it is only 18 GB in total.

Have I imported the volume successfully?

I would appreciate any pointers on getting access to the files inside the zpool again.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Have I imported the volume successfully?
Open a secure shell and type in the command zpool status.
If it reads out something similar to the example below, then yes, it has...

Code:
[root@freenas] ~# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Jun  9 03:45:37 2017
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada0p2      ONLINE       0     0     0

errors: No known data errors

  pool: plumber
 state: ONLINE
  scan: scrub repaired 0 in 2h2m with 0 errors on Sun Jun 18 03:03:40 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        plumber                                         ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/9562a750-c341-11e6-a20a-90e2ba0045fc  ONLINE       0     0     0
            gptid/c0a640f6-c3df-11e6-9356-90e2ba0045fc  ONLINE       0     0     0
            gptid/18ed8eb6-f573-11e6-96fa-90e2ba0045fc  ONLINE       0     0     0
            gptid/91cb5f7c-32a1-11e6-a5d4-90e2ba0045fc  ONLINE       0     0     0
            gptid/1dc335b1-f551-11e6-acda-90e2ba0045fc  ONLINE       0     0     0
            gptid/f7acbdd0-f5ab-11e6-9ee6-90e2ba0045fc  ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~#
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
Done that already. No errors, just no files.

Code:
[root@freenas ~]# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors

  pool: i7-3770K
 state: ONLINE
  scan: scrub repaired 0 in 8h26m with 0 errors on Sun Jun  4 00:27:00 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        i7-3770K                                        ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/0866c0a3-e89d-11e6-84a6-3085a995a374  ONLINE       0     0     0
            gptid/092c3eef-e89d-11e6-84a6-3085a995a374  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/09f3c8b7-e89d-11e6-84a6-3085a995a374  ONLINE       0     0     0
            gptid/0ab8969a-e89d-11e6-84a6-3085a995a374  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/0b8834e8-e89d-11e6-84a6-3085a995a374  ONLINE       0     0     0
            gptid/0ce30578-e89d-11e6-84a6-3085a995a374  ONLINE       0     0     0

errors: No known data errors
[root@freenas ~]#




Code:
[root@freenas ~]# ls /mnt/i7-3770K/											 
Summerhill	  jails_2		 jails_4										 
jails		   jails_3		 saikee										 
[root@freenas ~]# ls /mnt/i7-3770K/Summerhill/								 
[root@freenas ~]# ls /mnt/i7-3770K/saikee/									 
.cshrc		  .login_conf	 .mailrc		 .rhosts		 .windows		
.login		  .mail_aliases   .profile		.shrc						   
[root@freenas ~]#	

The mount point is empty!
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
Volume Manager indicates my zpool has 14.0 TiB of data.

The size of my whole file system is only 18 GB when I check it in the CLI.

It appears the actual files/data have not been mounted.
Code:
[root@freenas ~]# du -hs /																										 
18G	/																														   
[root@freenas ~]#
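
A quick cross-check would be to ask ZFS itself which datasets it thinks are mounted, rather than relying on du (a sketch using my pool name; mounted and mountpoint are standard dataset properties):

Code:
zfs list -o name,used,mounted,mountpoint -r i7-3770K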
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
Does anybody have any ideas for investigating further before I give up on the 14 TiB of data in my zpool?

I have tried booting the PC with several Linux distros, including Mint, in order to mount the zpool, but all failed, reporting
Code:
ata14.00: status : {DRDY}


Thus Linux doesn't like my zpool, which is a 2x3x8 TB mirror set.

So far, with the original FreeNAS 9.10 reinstalled, I could import the volume in the GUI and see the system report without any error; the zpool still shows 66% used with 14 TiB. In the CLI the mount point is empty and has no files except the jails folders. In fact the entire file system is only 18 GB.

Is there any other way to access my zpool?
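
One avenue I have not tried yet is exporting the pool and re-importing it read-only from the console, so nothing can write to it while I investigate (a sketch only; I have not run this):

Code:
zpool export i7-3770K
zpool import -o readonly=on -R /mnt i7-3770K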
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
The zfs list output below shows 14.0 TiB used in the zpool i7-3770K, but in the CLI no files can be found and the mount point holds only 17 GB of data. Apart from the jails, both the Summerhill and saikee folders are empty.

Code:
[root@Summerhill ~]# zfs list -r i7-3770K																						  
NAME														USED  AVAIL  REFER  MOUNTPOINT										
i7-3770K												   14.0T  7.11T  13.9T  /mnt/i7-3770K									  
i7-3770K/.system											435M  7.11T   402M  legacy											
i7-3770K/.system/configs-66311c036e824820af44b2dbf4c55f10	88K  7.11T	88K  legacy											
i7-3770K/.system/configs-ea02119b0df4495ba64ec1dbdd61ed06  12.9M  7.11T  12.9M  legacy											
i7-3770K/.system/cores									  844K  7.11T   844K  legacy											
i7-3770K/.system/rrd-66311c036e824820af44b2dbf4c55f10	  10.7M  7.11T  10.7M  legacy											
i7-3770K/.system/rrd-ea02119b0df4495ba64ec1dbdd61ed06		88K  7.11T	88K  legacy											
i7-3770K/.system/samba4									 392K  7.11T   392K  legacy											
i7-3770K/.system/syslog-66311c036e824820af44b2dbf4c55f10	648K  7.11T   648K  legacy											
i7-3770K/.system/syslog-ea02119b0df4495ba64ec1dbdd61ed06   7.17M  7.11T  7.17M  legacy											
i7-3770K/Summerhill										  88K  7.11T	88K  /mnt/i7-3770K/Summerhill							
i7-3770K/jails											 13.5G  7.11T   120K  /mnt/i7-3770K/jails								
i7-3770K/jails/.warden-template-pluginjail				  472M  7.11T   469M  /mnt/i7-3770K/jails/.warden-template-pluginjail	
i7-3770K/jails/owncloud_1								   498M  7.11T   963M  /mnt/i7-3770K/jails/owncloud_1					
i7-3770K/jails/plexmediaserver_1						   12.6G  7.11T  13.1G  /mnt/i7-3770K/jails/plexmediaserver_1			  
i7-3770K/jails_2											834M  7.11T   116K  /mnt/i7-3770K/jails_2							  
i7-3770K/jails_2/.warden-template-pluginjail				541M  7.11T   539M  /mnt/i7-3770K/jails_2/.warden-template-pluginjail  
i7-3770K/jails_2/plexmediaserver_1						  293M  7.11T   829M  /mnt/i7-3770K/jails_2/plexmediaserver_1			
i7-3770K/jails_3											834M  7.11T   112K  /mnt/i7-3770K/jails_3							  
i7-3770K/jails_3/.warden-template-pluginjail				541M  7.11T   539M  /mnt/i7-3770K/jails_3/.warden-template-pluginjail  
i7-3770K/jails_3/plexmediaserver_1						  293M  7.11T   829M  /mnt/i7-3770K/jails_3/plexmediaserver_1			
i7-3770K/jails_4											 88K  7.11T	88K  /mnt/i7-3770K/jails_4							  
i7-3770K/jails_5											 88K  7.11T	88K  /mnt/i7-3770K/jails_5							  
i7-3770K/saikee											 124K  7.11T   124K  /mnt/i7-3770K/saikee								
[root@Summerhill ~]# du -hs /mnt																									
17G	/mnt																														
[root@Summerhill ~]# du -hs /mnt/i7-3770K/																						
17G	/mnt/i7-3770K/
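
Since the 13.9T is referenced by the root dataset itself, I suppose the next thing to check (a sketch; these are standard ZFS properties) is whether that root dataset is really mounted at /mnt/i7-3770K:

Code:
zfs get mounted,mountpoint,canmount i7-3770K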
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
I am new to zpools, but the information in the CLI suggests the zpool is healthy and 64% allocated with 14 TB of data; I just couldn't find any of my files.

Code:
[root@Summerhill ~]# zpool get all i7-3770K																						
NAME	  PROPERTY					   VALUE						  SOURCE													
i7-3770K  size						   21.8T						  -														 
i7-3770K  capacity					   64%							-														 
i7-3770K  altroot						/mnt						   local													 
i7-3770K  health						 ONLINE						 -														 
i7-3770K  guid						   10230194165920616685		   default													
i7-3770K  version						-							  default													
i7-3770K  bootfs						 -							  default													
i7-3770K  delegation					 on							 default													
i7-3770K  autoreplace					off							default													
i7-3770K  cachefile					  /data/zfs/zpool.cache		  local													 
i7-3770K  failmode					   continue					   local													 
i7-3770K  listsnapshots				  off							default													
i7-3770K  autoexpand					 on							 local													 
i7-3770K  dedupditto					 0							  default													
i7-3770K  dedupratio					 1.00x						  -														 
i7-3770K  free						   7.79T						  -														 
i7-3770K  allocated					  14.0T						  -														 
i7-3770K  readonly					   off							-														 
i7-3770K  comment						-							  default													
i7-3770K  expandsize					 -							  -														 
i7-3770K  freeing						0							  default													
i7-3770K  fragmentation				  11%							-														 
i7-3770K  leaked						 0							  default													
i7-3770K  feature@async_destroy		  enabled						local													 
i7-3770K  feature@empty_bpobj			active						 local													 
i7-3770K  feature@lz4_compress		   active						 local													 
i7-3770K  feature@multi_vdev_crash_dump  enabled						local													 
i7-3770K  feature@spacemap_histogram	 active						 local													 
i7-3770K  feature@enabled_txg			active						 local													 
i7-3770K  feature@hole_birth			 active						 local													 
i7-3770K  feature@extensible_dataset	 enabled						local													 
i7-3770K  feature@embedded_data		  active						 local													 
i7-3770K  feature@bookmarks			  enabled						local													 
i7-3770K  feature@filesystem_limits	  enabled						local													 
i7-3770K  feature@large_blocks		   enabled						local	 
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
As a point of interest, I also run another FreeNAS on a smaller storage system. Its zpool New_films shows up as 11 TB at the mount point. Since my data is only 5.73 TB, I suppose the file sizes were double-counted because /mnt includes the jails.

My question is: have I lost my i7-3770K data even though it has been imported without error?

The only difference between my two zpools (on two different computers) is that I damaged the boot drive in the i7-3770K PC and had to reinstall FreeNAS 9.10. The i7-3770K zpool worked faultlessly for over 90 days and has not been touched except for being imported into the re-installed FreeNAS. The New_films zpool is a pilot scheme for trying out FreeNAS with just striped vdevs.

Up to now my experience with zpool reliability has been flawless. I still think ZFS is the best file system for a NAS, but I am coming to the steep section of my learning curve.

Code:
[root@freenas ~]# zpool get all  New_films																						 
NAME	   PROPERTY					   VALUE						  SOURCE													 
New_films  size						   7.25T						  -														 
New_films  capacity					   79%							-														 
New_films  altroot						/mnt						   local													 
New_films  health						 ONLINE						 -														 
New_films  guid						   13445812282486344209		   default													
New_films  version						-							  default													
New_films  bootfs						 -							  default													
New_films  delegation					 on							 default													
New_films  autoreplace					off							default													
New_films  cachefile					  /data/zfs/zpool.cache		  local													 
New_films  failmode					   continue					   local													 
New_films  listsnapshots				  off							default													
New_films  autoexpand					 on							 local													 
New_films  dedupditto					 0							  default													
New_films  dedupratio					 1.00x						  -														 
New_films  free						   1.52T						  -														 
New_films  allocated					  5.73T						  -														 
New_films  readonly					   off							-														 
New_films  comment						-							  default													
New_films  expandsize					 -							  -														 
New_films  freeing						0							  default													
New_films  fragmentation				  43%							-														 
New_films  leaked						 0							  default													
New_films  feature@async_destroy		  enabled						local													 
New_films  feature@empty_bpobj			active						 local													 
New_films  feature@lz4_compress		   active						 local													 
New_films  feature@multi_vdev_crash_dump  enabled						local													 
New_films  feature@spacemap_histogram	 active						 local													 
New_films  feature@enabled_txg			active						 local													 
New_films  feature@hole_birth			 active						 local													 
New_films  feature@extensible_dataset	 enabled						local													 
New_films  feature@embedded_data		  active						 local													 
New_films  feature@bookmarks			  enabled						local													 
New_films  feature@filesystem_limits	  enabled						local													 
New_films  feature@large_blocks		   enabled						local													 
[root@freenas ~]# du -hs /mnt																									   
11T	/mnt																														
[root@freenas ~]#
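
To avoid du double-counting the datasets mounted under /mnt, ZFS's own space accounting can be used instead (a sketch; "zfs list -o space" is a standard shorthand for the space-usage columns):

Code:
zfs list -o space -r New_films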
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
What's the output of zfs list -t snapshot | grep i7-3770K/Summerhill? Also, what happens if you try zfs unmount i7-3770K/Summerhill followed by ls /mnt/i7-3770K/Summerhill?
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
I tried the suggestion twice.
Code:
[root@Summerhill ~]# zfs list -t snapshot | grep i7-3770K/Summerhill																
[root@Summerhill ~]# zfs umount i7-3770K/Summerhill																				
[root@Summerhill ~]# ls /mnt/i7-3770K/Summerhill/																				  
[root@Summerhill ~]#																												
[root@Summerhill ~]#																												
[root@Summerhill ~]# zfs list -t snapshot | grep i7-3770K/Summerhill																
[root@Summerhill ~]# zfs umount i7-377K/Summerhill																				
cannot open 'i7-377K/Summerhill': dataset does not exist																			
[root@Summerhill ~]# ls /mnt/i7-3770K/Summerhill/


The snapshot list appears to be empty. The subdirectory /mnt/i7-3770K/Summerhill is always empty, so mounting or unmounting does nothing to it.

Note that the 2nd umount failed because I mistyped the pool name (i7-377K instead of i7-3770K), so its "dataset does not exist" error doesn't actually show that anything was unmounted by the 1st command.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
OK. I was checking for two things there--one being space used by snapshots (there are no snapshots in that dataset, so no space is being used by them). The other is something that happens occasionally: there will be contents in a directory, and subsequently a dataset is created with the same name as that directory. The dataset is then (automatically) mounted at that same location, hiding the contents of the directory. But apparently that's not what's going on here either.

Go ahead and re-mount Summerhill: zfs mount i7-3770K/Summerhill. Let's see if there are any snapshots at all: zfs list -t snapshot.
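
For reference, the check for that hidden-directory scenario looks like this (a sketch with a hypothetical dataset tank/data):

Code:
zfs unmount tank/data      # unmount the dataset
ls -alh /mnt/tank/data     # anything listed now lives in the directory hidden underneath
zfs mount tank/data        # put the dataset back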
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
OK. There seems to be something there, but I don't know how to get into it.
Code:
[root@Summerhill ~]# zfs mount i7-3770K/Summerhill																				
[root@Summerhill ~]# zfs list -t snapshot																						  
NAME												 USED  AVAIL  REFER  MOUNTPOINT												
freenas-boot/ROOT/default@2017-06-24-15:23:39	   3.11M	  -   635M  -														
i7-3770K@manual-20170625							 176K	  -  13.9T  -														
i7-3770K/jails/.warden-template-pluginjail@clean	2.97M	  -   469M  -														
i7-3770K/jails_2/.warden-template-pluginjail@clean  2.35M	  -   539M  -														
i7-3770K/jails_3/.warden-template-pluginjail@clean  2.35M	  -   539M  -														
[root@Summerhill ~]#


Using another drive in the same PC I also installed FreeBSD 11.0, but its zfs commands reported no zpools and no datasets found.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
Yeah, there's something there, but (if I'm reading the snapshot size correctly, which is far from certain) there isn't much delta at all since that snapshot was taken. What happens with ls -alh /mnt/i7-3770K/?
 

saikee

Explorer
Joined
Feb 7, 2017
Messages
77
I am afraid the snapshot was taken after I couldn't find the files, so it may have little value.
Code:
[root@Summerhill ~]# ls -alh /mnt/i7-3770K/																						
total 96																															
drwxrwxr-x  9 root	wheel	  9B Jun 27 02:09 .																				
drwxr-xr-x  3 root	wheel	128B Jun 27 05:17 ..																				
drwxrwxr-x  2 root	wheel	  2B Jun 22 15:29 Summerhill																		
drwxr-xr-x  9 root	wheel	  9B Jun 24 17:12 jails																			
drwxr-xr-x  7 root	wheel	  8B Jun 24 17:12 jails_2																			
drwxr-xr-x  7 root	wheel	  7B Jun 24 17:12 jails_3																			
drwxr-xr-x  2 root	wheel	  2B Jun 23 04:19 jails_4																			
drwxr-xr-x  2 root	wheel	  2B Jun 27 02:09 jails_5																			
drwxrwxr-x+ 2 saikee  saikee	11B Jun 23 04:08 saikee																			
[root@Summerhill ~]#

It looks increasingly like my 14 TB of data is gone. I would like to know what mistake I made, apart from losing the boot drive.

Is there any utility to interrogate the contents of a zpool?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
I think I'm stumped here. I don't think your data is gone--something's using that space on your pool. But it seems to be hidden for some reason, and I think I'm out of ideas why. @Ericloewe? @rs225?

Edit: Actually, I have one other idea--try reporting a bug using the Support button in your web GUI. That will attach a debug file, which might help the devs figure it out.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
You can try browsing the snapshots directly, e.g.
ls -alh /mnt/i7-3770K/.zfs/snapshot/manual-20170625/Summerhill
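
The .zfs directory is hidden by default, so it won't show up in a plain ls, but you can still cd into it by full path as above. If you want it listed, snapdir is a standard ZFS property (a sketch):

Code:
zfs get snapdir i7-3770K
zfs set snapdir=visible i7-3770K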
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
I haven't been keeping up with this one, have we checked for folder/dataset collisions? Both "data is in a folder but the dataset is mounted" and vice-versa?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
have we checked for folder/dataset collisions? Both "data is in a folder but the dataset is mounted"
This one, at least, is addressed above. And since the dataset was mounted previously, I think the second possibility was also addressed.
 