Trouble with importing zfs volume after update to 9.10


Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
I hope I'm not creating an unnecessary thread, but the search was only moderately useful. Also, my FreeNAS experience is somewhat limited.

I replaced my old Intel board with an X8SIL-F today, so I also updated FreeNAS from 8.3.1 to 9.10.2.

Importing a smaller pool worked flawlessly, but then I tried to import my 6 x 3TB Z2 pool. I should mention that this pool was filled nearly to the brim; there was maybe 250-300GB of space left on it.

At first it went smoothly and the pool appeared under Storage > Volumes, though I was unable to set permissions. When I tried to create an SMB share I noticed that the pool appeared under /mnt/RAIDZ2-6x3TB/, but where normally two folders should be (/mnt/RAIDZ2-6x3TB/Storage01 and /mnt/RAIDZ2-6x3TB/Storage02) there was just a single file named Storage. Needless to say, the SMB share was not successful.

Next I noticed the red warning light suggesting that I upgrade my ZFS pools, which I did.

I tried to detach the ZFS pool so I could import it again, with little success. When I selected the pool to be imported, nothing happened. I checked whether the disks still appeared under View Disks, and they were still there. Next I tried to import the pool via the CLI by its ID, which didn't work either, though it gave me the error "failed to create mount point" for Storage01 and Storage02. That error at least gave me something to feed the search with. I tried "zpool import -o readonly=on -R /mnt RAIDZ2-6x3TB" and the like, but it didn't work.
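For reference, the commands I tried were roughly along these lines (reconstructed from memory, so the exact flags may have differed):

Code:
# list pools that are available for import, with their numeric IDs
zpool import

# import by numeric ID with an alternate root under /mnt
# (<pool-id> is a placeholder for the ID shown by the previous command)
zpool import -R /mnt <pool-id>

# read-only attempt by name
zpool import -o readonly=on -R /mnt RAIDZ2-6x3TB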

What perplexes me is that it now reports
"cannot import 'RAIDZ2-6x3TB': a pool with that name is already created/imported,
and no additional pools with that name were found"

Can anyone point me in the right direction as to what can be done to fix this?

I appreciate any help you can provide.
 

Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
Code:
[root@freenas ~]# zpool import RAIDZ2-6x3TB									 
cannot import 'RAIDZ2-6x3TB': a pool with that name is already created/imported,
and no additional pools with that name were found							   
[root@freenas ~]#	 




Code:
[root@freenas ~]# zpool status																									 
  pool: PersDATA																													
state: ONLINE																													 
  scan: scrub repaired 0 in 1h16m with 0 errors on Sun Dec 18 01:16:40 2016														 
config:																															 
																																	
		NAME										  STATE	 READ WRITE CKSUM													
		PersDATA									  ONLINE	   0	 0	 0													
		  gptid/3f8f5ab2-e84c-11e5-a86a-7071bcb0da5b  ONLINE	   0	 0	 0													
																																	
errors: No known data errors																										
																																	
  pool: RAIDZ2-6x3TB																												
state: ONLINE																													 
  scan: scrub repaired 0 in 18h28m with 0 errors on Sun Dec 18 18:28:05 2016														
config:																															 
																																	
		NAME											STATE	 READ WRITE CKSUM												 
		RAIDZ2-6x3TB									ONLINE	   0	 0	 0												 
		  raidz2-0									  ONLINE	   0	 0	 0												 
			gptid/98a64d23-561c-11e3-ba32-7071bcb0da5b  ONLINE	   0	 0	 0												 
			gptid/99761f0a-561c-11e3-ba32-7071bcb0da5b  ONLINE	   0	 0	 0												 
			gptid/9a4d9162-561c-11e3-ba32-7071bcb0da5b  ONLINE	   0	 0	 0												 
			gptid/9b1e862f-561c-11e3-ba32-7071bcb0da5b  ONLINE	   0	 0	 0												 
			gptid/9bef56dd-561c-11e3-ba32-7071bcb0da5b  ONLINE	   0	 0	 0												 
			gptid/9cc3627c-561c-11e3-ba32-7071bcb0da5b  ONLINE	   0	 0	 0												 
																																	
errors: No known data errors																										
																																	
  pool: freenas-boot																												
state: ONLINE																													 
  scan: none requested																											 
config:																															 
																																	
		NAME		STATE	 READ WRITE CKSUM																					 
		freenas-boot  ONLINE	   0	 0	 0																					
		  da0p2	 ONLINE	   0	 0	 0																					 
																																	
errors: No known data errors																										
[root@freenas ~]#
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
First of all, please slow down. You are going to get into trouble and lose your data if you are not careful. Second, you shouldn't have upgraded your pool; you need to understand what you are doing before you leap. Now you cannot go back to any previous version of FreeNAS, you are stuck with 9.10.2.

Provide the output of "zfs list" please.

EDIT: Thanks for using code tags!
 

Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
Yeah you're right, I shouldn't have acted so rashly.

Code:
[root@freenas ~]# zpool list																										
NAME		   SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT												 
PersDATA	   460G   315G   145G		 -	 4%	68%  1.00x  ONLINE  /mnt													
RAIDZ2-6x3TB  16.2T  15.9T   360G		 -	 0%	97%  1.00x  ONLINE  -														
freenas-boot  14.9G   646M  14.2G		 -	  -	 4%  1.00x  ONLINE  -														
[root@freenas ~]#	



Code:
													 
[root@freenas ~]# zfs list																										
NAME														USED  AVAIL  REFER  MOUNTPOINT										
PersDATA													315G   131G   128K  /mnt/PersDATA									  
PersDATA/.system										   25.2M   131G	96K  legacy											
PersDATA/.system/configs-eab18b758b91471d95803a91d80bfcda	88K   131G	88K  legacy											
PersDATA/.system/cores									 18.1M   131G  18.1M  legacy											
PersDATA/.system/rrd-eab18b758b91471d95803a91d80bfcda		88K   131G	88K  legacy											
PersDATA/.system/samba4									4.36M   131G  4.36M  legacy											
PersDATA/.system/syslog-eab18b758b91471d95803a91d80bfcda   2.53M   131G  2.53M  legacy											
PersDATA/Deliverence										315G   131G   315G  /mnt/PersDATA/Deliverence						  
PersDATA/jails											   88K   131G	88K  /mnt/PersDATA/jails								
RAIDZ2-6x3TB											   10.6T	  0   296K  /RAIDZ2-6x3TB									  
RAIDZ2-6x3TB/Storage01									 10.5T	  0  10.5T  /RAIDZ2-6x3TB/Storage01							
RAIDZ2-6x3TB/Storage02									 64.2G	  0  64.2G  /RAIDZ2-6x3TB/Storage02							
freenas-boot												646M  13.8G	64K  none												
freenas-boot/ROOT										   639M  13.8G	29K  none												
freenas-boot/ROOT/Initial-Install							 1K  13.8G   635M  legacy											
freenas-boot/ROOT/default								   639M  13.8G   636M  legacy											
freenas-boot/grub										  6.34M  13.8G  6.34M  legacy											
[root@freenas ~]#	
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I see your data but you have "0" available space.

You should be able to set up a share without issue.

In an SSH window type "cd /RAIDZ2-6x3TB/Storage01"
Next type "ls" and you should see all your files, 10.5TB worth.

As you already know, your pool is running very slowly, and this will continue until you free up enough space to get back above roughly 10% free. I highly recommend that you clean up your files.
 

Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
I logged in via SSH.

Code:
FreeBSD 10.3-STABLE (FreeNAS.amd64) #0 r295946+47645f1(9.10.2-STABLE): Mon Dec 19 08:30:01 UTC 2016

		FreeNAS (c) 2009-2016, The FreeNAS Development Team
		All rights reserved.
		FreeNAS is released under the modified BSD license.

		For more information, documentation, help or support, go here:
		http://freenas.org
Welcome to FreeNAS
[root@freenas] ~# 
[root@freenas] ~# cd /RAIDZ2-6x3TB/
[root@freenas] /RAIDZ2-6x3TB# ls
./	   ../	  Storage
[root@freenas] /RAIDZ2-6x3TB# ls -la
total 27
drwxr-xr-x   2 writeuser  user_group	 3 Dec 30 17:36 ./
drwxr-xr-x  21 root	   wheel		 28 Dec 30 20:03 ../
-rw-rw-r--   1 root	   user_group  1801 May 18  2014 Storage
[root@freenas] /RAIDZ2-6x3TB#



As you can see, there is only the Storage file. I opened it with nano and found that it contains a text listing of the folders that would have been inside the Storage01 folder.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Yea but "Storage" isn't a listed folder. So what's under "Storage" ?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Arg, didn't notice "Storage" wasn't a directory.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The only thing I can think of trying is to roll back to 9.10.1-U4 and hope you can still import your pool, though I thought the updated zpool features were introduced in 9.10.2, in which case an older release may refuse to import it. It's worth a shot.

If that doesn't work then someone else will need to offer a helping hand. I don't know if you will be able to get your data back.

And I guess my last bit of Q&A is: are the hard drives thrashing around? I'm curious if they are just really busy trying to mount your pool properly, since there is no free space on it, and that too could be causing an issue.

You could also look into booting a Live CD of an OS that recognizes ZFS (like FreeBSD 10) and trying to mount the pool from there. Ubuntu also recognizes ZFS, but I'd shoot for FreeBSD 10 first. Of course, if there is no important data on the pool then you might consider destroying it.
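If you go the FreeBSD live route, the rough idea from the live shell would be something like this (standard ZFS commands, but treat it as a sketch rather than a tested recipe):

Code:
# see which pools the live system can find on the attached disks
zpool import

# import read-only under an alternate root so nothing is written to the nearly full pool
zpool import -f -o readonly=on -R /mnt RAIDZ2-6x3TB

# confirm the datasets are there, then copy your data off
zfs list -r RAIDZ2-6x3TB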
 

Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
Thanks for the idea. I'll try a fresh install of 9.10.1-U4 and see what happens; I'll keep this thread updated with the results.

How can I find out if my drives are thrashing around? By the sound they make, or is there a command I can use?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
If the GUI is working you can look in the Reporting section at the Disk I/O graphs. If it's thrashing around, maybe give it a day or so to see if it settles out.

I'm sure there is some command, I just don't know it.
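EDIT: Thinking about it, gstat or zpool iostat from an SSH session should show live disk activity. Something like this (flags from memory, so double-check the man pages):

Code:
# live per-disk activity, physical providers only
gstat -p

# or ZFS-level I/O statistics per vdev, refreshing every second
zpool iostat -v 1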
 

Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
image.png (attached screenshot of the Disk I/O reporting graphs)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Not much going on with the hard drives.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I'd be curious to know if there are any interesting error messages in /var/log/messages.
 

Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
I made a fresh install with 9.10.1-U4 but it didn't help; same output as before.

Strangely enough, this morning when I tried to import the pool again to see if there were any changes in the log messages, the RAIDZ2-6x3TB pool wasn't even listed anymore. The disks are still visible though.

Here's the content of /var/log/messages
 

Attachments

  • messages.txt
    58.9 KB

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It would appear the datasets aren't mounted. A combination of zfs list -t all -r and df -k should give a better idea of whether the datasets are mounted.

ZFS adds some dataset attributes that many file systems don't include, for example the intended mount point. But a ZFS listing doesn't necessarily mean a dataset is actually mounted, hence also using the OS command to see what is mounted.
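For example, something like this would show the ZFS view and the OS view side by side (the mounted property is the key one, and the commands only work if the pool has been imported at all):

Code:
# ZFS's view: is each dataset flagged as mounted, and where should it mount?
zfs get -r mounted,mountpoint RAIDZ2-6x3TB

# the OS's view: what is actually mounted right now?
mount | grep RAIDZ2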
 

Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
Code:
[root@freenas ~]# zfs list -t all -r																								
NAME														USED  AVAIL  REFER  MOUNTPOINT										 
PersDATA													315G   131G   112K  /mnt/PersDATA									   
PersDATA/.system										   26.0M   131G	96K  legacy											 
PersDATA/.system/configs-810048d7feed436fae88f4409435135f	96K   131G	96K  legacy											 
PersDATA/.system/configs-eab18b758b91471d95803a91d80bfcda	88K   131G	88K  legacy											 
PersDATA/.system/cores									 18.0M   131G  18.0M  legacy											 
PersDATA/.system/rrd-810048d7feed436fae88f4409435135f		96K   131G	96K  legacy											 
PersDATA/.system/rrd-eab18b758b91471d95803a91d80bfcda		88K   131G	88K  legacy											 
PersDATA/.system/samba4									4.36M   131G  4.36M  legacy											 
PersDATA/.system/syslog-810048d7feed436fae88f4409435135f	400K   131G   400K  legacy											 
PersDATA/.system/syslog-eab18b758b91471d95803a91d80bfcda   2.79M   131G  2.79M  legacy											 
PersDATA/Deliverence										315G   131G   315G  /mnt/PersDATA/Deliverence						   
PersDATA/jails											   88K   131G	88K  /mnt/PersDATA/jails								 
PersDATA/jails_2											 96K   131G	96K  /mnt/PersDATA/jails_2							   
freenas-boot												632M  13.8G	31K  none												
freenas-boot/ROOT										   625M  13.8G	25K  none												
freenas-boot/ROOT/Initial-Install							 1K  13.8G   622M  legacy											 
freenas-boot/ROOT/default								   625M  13.8G   622M  legacy											 
freenas-boot/ROOT/default@2016-12-30-13:12:09			  2.80M	  -   622M  -												   
freenas-boot/grub										  6.33M  13.8G  6.33M  legacy											 
[root@freenas ~]#	 



Code:
[root@freenas ~]# df -k																											 
Filesystem												1024-blocks	  Used	 Avail Capacity  Mounted on					 
freenas-boot/ROOT/default									15100409	637309  14463100	 4%	/							   
devfs															   1		 1		 0   100%	/dev							
tmpfs														   32768	  8424	 24344	26%	/etc							
tmpfs															4096		 8	  4088	 0%	/mnt							
tmpfs														 5579636	 94816   5484820	 2%	/var							
freenas-boot/grub											14469580	  6480  14463100	 0%	/boot/grub					 
fdescfs															 1		 1		 0   100%	/dev/fd						 
PersDATA													137078564	   112 137078452	 0%	/mnt/PersDATA				   
PersDATA/Deliverence										467242544 330164092 137078452	71%	/mnt/PersDATA/Deliverence	   
PersDATA/jails											  137078540		88 137078452	 0%	/mnt/PersDATA/jails			 
PersDATA/.system											137078548		96 137078452	 0%	/var/db/system				 
PersDATA/.system/cores									  137096852	 18400 137078452	 0%	/var/db/system/cores			
PersDATA/.system/samba4									 137082920	  4468 137078452	 0%	/var/db/system/samba4		   
PersDATA/.system/syslog-810048d7feed436fae88f4409435135f    137078856       404 137078452     0%    /var/db/system/syslog-810048d7feed436fae88f4409435135f
PersDATA/.system/rrd-810048d7feed436fae88f4409435135f        137078548        96 137078452     0%    /var/db/system/rrd-810048d7feed436fae88f4409435135f
PersDATA/.system/configs-810048d7feed436fae88f4409435135f    137078548        96 137078452     0%    /var/db/system/configs-810048d7feed436fae88f4409435135f
linprocfs														   4		 4		 0   100%	/compat/linux/proc			 
PersDATA/jails_2											137078548		96 137078452	 0%	/mnt/PersDATA/jails_2		   
[root@freenas ~]#
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I think you need to try a Live CD. Also, have you tried to go back to 8.3? I'm certain that will not work since you said you upgraded your pool, but it's worth a shot.
 

Kolkrabi

Dabbler
Joined
Sep 15, 2013
Messages
11
Is a live CD simply the FreeNAS .iso burned to a CD, or are you referring to something else?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
A Live CD would be a FreeNAS 10 Live CD or an Ubuntu 16.x Live CD; both can import ZFS pools. You will need to do a search on how to use them to mount your pool, assuming it can be mounted at all. If you are able to mount your pool then you can copy off your important data. If you are a Windows person then this will likely take you a bit of time to work through.
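On an Ubuntu 16.x live session the rough steps would be something like the following; package names are from memory, and whether the Linux ZFS version understands your upgraded pool's feature flags is another question, so treat it as a sketch:

Code:
# install the ZFS userland tools in the live session
sudo apt install zfsutils-linux

# import the pool read-only under /mnt so nothing is written to it
sudo zpool import -f -o readonly=on -R /mnt RAIDZ2-6x3TB

# verify the datasets, then copy your important data off
sudo zfs list -r RAIDZ2-6x3TB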

Well I need to get running, need to go bitch at the Safelite AutoGlass people. The guy did a crap job.
 