dataset mounting on /


neils

Dabbler
Joined
Oct 29, 2012
Messages
46
FreeNAS 10.3 STABLE on a 45Drives Storage Pod.
It has been running since 2014 with the current configuration of 4 pools, each with one dataset.

A noob administrator unmounted 'vdev2/dataset2' by hand, like a plain UNIX filesystem: 'umount ..'.

I poked at it a bit trying to fix it without rebooting and have apparently demoted myself to noob as well, because somehow I now have the vdev2/dataset2 dataset mounting on top of the OS root, '/'. That is, a normal boot proceeds fine until the ZFS pools are imported; as soon as this pool is imported, no new processes can start and, of course, no commands are found at a /bin/sh shell prompt.

The other 3 pools and their datasets mount without problem if I manage to import vdev2 manually to an altroot different from the default /mnt, though the import throws a dataset error saying it can't create the mountpoint. I can then 'zfs set mountpoint=/somesuch vdev2/dataset2' and get to the data, roughly as sketched below.
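Roughly, the manual workaround so far looks like this (the altroot and mountpoint paths are just examples):
Code:
# import manually under an altroot other than the default /mnt;
# this currently complains that it cannot create the dataset's mountpoint
zpool import -o altroot=/altmount vdev2
# pointing the dataset at an explicit mountpoint makes the data reachable again
zfs set mountpoint=/somesuch vdev2/dataset2
zfs mount vdev2/dataset2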

For starters, how do I get to the best working environment? Is there a boot flag to start the ZFS service but disable importing existing known and unknown pools?

If I get there, what pool/dataset properties do I set or reset, and how, to get things back to where they originally were?

Any suggestions are most welcome.
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
BTW, this has certainly made me curious about the ZFS service start procedure: when/where/how is the pool info cached, and how does that mesh with the FreeBSD OS boot process? Booting to single user doesn't make available the cached files normally found and used by zpool during a full OS boot, right?
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
Oh, and the FreeNAS version is 9.10, not 10.3 (10.3 is the FreeBSD base version).
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
Anyone out there have recommendations on booting FreeNAS without auto-loading existing pools? I assume I need a full boot, not a single-user environment, to reset the pool/dataset to default import parameters, no?
Are the (/etc/zfs/) cache files used anymore on FreeNAS 9.10?
 

dlavigne

Guest
None of what you said makes sense....

What exactly happens when you try to boot the system? Does it hang during boot or complete the boot process? If it completes the boot process, what is the output of zpool status?
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
Thanks for the reply. Perhaps it is as difficult to understand as it is to describe.
Here it is again, perhaps more succinctly:
Boot proceeds normally until the notice that the ZFS pools are beginning to be imported. The boot text continues through the successful import of several of the 4 available pools, but then stops after announcing the mentioned pool as the next import candidate, with 'zpool: not found'. That is, the contents of /bin, /usr/bin, /sbin, etc. are no longer available to any shell. Thus I assume the pool and/or its dataset have been mounted on top of the root OS mount point, '/'.
 

dlavigne

Guest
Is it able to complete the boot process? If so, please post the output of zpool status.
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
Thanks DL. It does not complete the boot.
I have now gotten a full boot by removing /boot/zfs/zpool.cache and /data/zfs/zpool.cache, then importing each of the other 4 pools that don't cause a problem, roughly as sketched below.
(Correction to the original postings: there are 5 pools, one of which appears to mount over the '/' OS mount point.)
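In outline, the workaround was something like this (exact commands approximate; I removed the cache files, shown here as a rename so they could be put back):
Code:
# with no zpool.cache, the boot-time auto-import is skipped
mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bad
mv /data/zfs/zpool.cache /data/zfs/zpool.cache.bad
reboot
# after the clean boot, bring back the pools that behave, under the usual /mnt altroot
zpool import -o altroot=/mnt vdev1
zpool import -o altroot=/mnt vdev3
zpool import -o altroot=/mnt vdev4
zpool import -o altroot=/mnt vdev5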

From there I can produce a zpool status for those 4 pools:
Code:
# zpool status
  pool: freenas-boot
state: ONLINE
  scan: none requested
config:

	NAME		STATE	 READ WRITE CKSUM
	freenas-boot  ONLINE	   0	 0	 0
	  ada45p2   ONLINE	   0	 0	 0

errors: No known data errors

  pool: vdev1
state: ONLINE
  scan: scrub repaired 0 in 16h21m with 0 errors on Mon May  1 16:21:07 2017
config:

	NAME											STATE	 READ WRITE CKSUM
	vdev1										   ONLINE	   0	 0	 0
	  raidz2-0									  ONLINE	   0	 0	 0
		ada32p1									 ONLINE	   0	 0	 0
		ada22p1									 ONLINE	   0	 0	 0
		ada7p1									  ONLINE	   0	 0	 0
		ada37p1									 ONLINE	   0	 0	 0
		gptid/ae5a825b-131e-11e7-afc8-002590d5251f  ONLINE	   0	 0	 0
		ada12p1									 ONLINE	   0	 0	 0
	spares
	  gptid/7c043bb6-ff34-b044-927c-930623eeb5f7	AVAIL   

errors: No known data errors

  pool: vdev3
state: ONLINE
  scan: resilvered 33.6M in 2h1m with 0 errors on Thu Apr  6 10:41:29 2017
config:

	NAME										  STATE	 READ WRITE CKSUM
	vdev3										 ONLINE	   0	 0	 0
	  raidz2-0									ONLINE	   0	 0	 0
		ada1p1									ONLINE	   0	 0	 0
		ada31p1								   ONLINE	   0	 0	 0
		ada21p1								   ONLINE	   0	 0	 0
		ada6p1									ONLINE	   0	 0	 0
		ada36p1								   ONLINE	   0	 0	 0
		ada26p1								   ONLINE	   0	 0	 0
		ada11p1								   ONLINE	   0	 0	 0
		ada41p1								   ONLINE	   0	 0	 0
		ada17p1								   ONLINE	   0	 0	 0
		ada2p1									ONLINE	   0	 0	 0
	spares
	  gptid/7c043bb6-ff34-b044-927c-930623eeb5f7  AVAIL   

errors: No known data errors

  pool: vdev4
state: ONLINE
  scan: scrub repaired 0 in 13h24m with 0 errors on Sun May  7 13:24:28 2017
config:

	NAME										  STATE	 READ WRITE CKSUM
	vdev4										 ONLINE	   0	 0	 0
	  raidz2-0									ONLINE	   0	 0	 0
		ada3p1									ONLINE	   0	 0	 0
		ada33p1								   ONLINE	   0	 0	 0
		ada23p1								   ONLINE	   0	 0	 0
		ada8p1									ONLINE	   0	 0	 0
	  ada42p1									 ONLINE	   0	 0	 0
	  ada28p1									 ONLINE	   0	 0	 0
	  ada13p1									 ONLINE	   0	 0	 0
	  ada43p1									 ONLINE	   0	 0	 0
	spares
	  gptid/7c043bb6-ff34-b044-927c-930623eeb5f7  AVAIL   

errors: No known data errors

  pool: vdev5
state: ONLINE
  scan: scrub in progress since Tue May  9 00:00:01 2017
		13.8T scanned out of 27.3T at 404M/s, 9h44m to go
		0 repaired, 50.55% done
config:

	NAME											STATE	 READ WRITE CKSUM
	vdev5										   ONLINE	   0	 0	 0
	  raidz2-0									  ONLINE	   0	 0	 0
		gptid/4422ee7d-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0
		gptid/449060ab-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0
		gptid/44f86c99-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0
		gptid/4563e771-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0
		gptid/45cd99f8-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0
		gptid/463ad068-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0
		gptid/46a20da5-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0
		gptid/470d16e0-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0
		gptid/4774c64b-21a6-11e5-9041-002590d5251f  ONLINE	   0	 0	 0

errors: No known data errors
csrpb# zpool list
NAME		   SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
freenas-boot   232G   664M   231G		 -	  -	 0%  1.00x  ONLINE  -
vdev1		 21.8T  20.5T  1.21T	 16.0E	10%	94%  1.00x  ONLINE  /mnt
vdev3		 36.2T  30.7T  5.59T		 -	16%	84%  1.00x  ONLINE  /mnt
vdev4		   29T  22.5T  6.51T		 -	 6%	77%  1.00x  ONLINE  /mnt
vdev5		 32.5T  27.3T  5.20T		 -	 8%	84%  1.00x  ONLINE  /mnt
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
The offending pool and dataset are: vdev2/csrpb2
How best to import vdev2, forcing the default mountpoints that existed at creation time, similar to the other pools? For example, the relevant 'df' output for the existing pools:
Code:
vdev1													 361G	679K	361G	 0%	/mnt/vdev1
vdev1/csrpb1											   14T	 14T	361G	97%	/mnt/vdev1/csrpb1
vdev3													 3.4T	329K	3.4T	 0%	/mnt/vdev3
vdev3/csrpb3											   27T	 23T	3.4T	87%	/mnt/vdev3/csrpb3
vdev4													 4.1T	220K	4.1T	 0%	/mnt/vdev4
vdev4/csrpb4											   21T	 17T	4.1T	80%	/mnt/vdev4/csrpb4
vdev5													 3.2T	348K	3.2T	 0%	/mnt/vdev5
vdev5/csrpb5											   24T	 21T	3.2T	87%	/mnt/vdev5/csrpb5
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
The pool import path can be forced by using altroot, and FreeNAS should do that when importing a pool via the UI. After import you should be able to check/unset the mountpoints for all datasets so they don't mount in unexpected places.
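For example, something along these lines (pool/dataset names taken from this thread; whether the stray mountpoint sits on the pool root or on the child dataset is an assumption to verify with the zfs get):
Code:
zpool import -R /mnt vdev2              # -R sets an altroot, so nothing can land on / during the import
zfs get -r mountpoint vdev2             # look for anything locally set to / or another unexpected path
zfs inherit mountpoint vdev2/csrpb2     # clear a locally-set mountpoint back to the inherited default
zfs mount -a                            # datasets should then appear under /mnt/vdev2/...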
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Is there a boot flag or something for FreeNAS to have it boot... but not attempt to auto-mount pools?

It's common enough that a corrupt pool causes a kernel panic that I would expect this to be an option, if it's not already?
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Have you been creating pools manually too?!

It looks like your pool 'vdev4' has a raidz2, with additional drives added as a stripe.
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
Sorry for the delay in responding.
m0nkey, I actually don't recall creating vdev4 and am curious about that pool, too. Hope it isn't significant to this issue.
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
All, I am wondering whether it is significant that when I manually import vdev2, regardless of the altroot specification, 'df' after the import shows it at 100% capacity, while the other pools show theirs under 100%. Is this related to the dataset error I see when importing?

Code:
# zpool import -o altroot=/altmount vdev2
cannot mount '/altmount/vdev2/csrpb2': failed to create mountpoint
# df -h /altmount/vdev2
Filesystem    Size    Used   Avail Capacity  Mounted on
vdev2         238K    238K      0B     100%  /altmount/vdev2

Can the mountpoint not be created because the vdev is full?

Earlier I had success getting to the 'csrpb2' dataset by setting an alternate mountpoint for it, exporting, then importing the pool again. I removed about 1GB of data with the 'echo "" > <filename>' method, so the dataset shouldn't be completely full.
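One thing that might be worth checking is whether the parent dataset itself has any free space, since the child's mountpoint directory has to be created inside it:
Code:
zfs list -o name,used,avail,refer vdev2 vdev2/csrpb2   # AVAIL of 0 on vdev2 would make mkdir of the child mountpoint fail
zfs list -t snapshot -r vdev2                          # snapshots can keep freed space pinned after files are truncated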
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
df just does not work properly for ZFS. Use `zfs list` and `zpool list` instead.
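For example, something like:
Code:
zfs list -o space -r vdev2     # per-dataset breakdown: snapshots, reservations, children
zpool list -v vdev2            # raw allocator view, per vdev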
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
Now this I don't understand. This is a raidz2 (double parity) volume of 10 x 4TB Seagate ST4000DM000 (really 3.6TB) drives.
Why do the zfs and zpool listings differ? How can a raidz2 pool be larger than 8 x 4TB?

Code:
zfs list
NAME													 USED  AVAIL  REFER  MOUNTPOINT
vdev2												   27.0T	  0   238K  /altmount/vdev2
vdev2/csrpb2											27.0T	  0  27.0T  /altmount/csrpb2

zpool list
NAME		   SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
vdev2		 36.2T  35.4T   863G		 -	13%	97%  1.00x  ONLINE  /altmount

Code:
csrpb# zpool status vdev2
  pool: vdev2
state: ONLINE
  scan: resilvered 16K in 0h0m with 0 errors on Thu Apr  6 08:40:07 2017
config:

	NAME                                          STATE     READ WRITE CKSUM
	vdev2                                         ONLINE       0     0     0
	  raidz2-0                                    ONLINE       0     0     0
	    ada15p1                                   ONLINE       0     0     0
	    ada0p1                                    ONLINE       0     0     0
	    ada30p1                                   ONLINE       0     0     0
	    ada20p1                                   ONLINE       0     0     0
	    ada5p1                                    ONLINE       0     0     0
	    ada35p1                                   ONLINE       0     0     0
	    ada25p1                                   ONLINE       0     0     0
	    ada10p1                                   ONLINE       0     0     0
	    ada40p1                                   ONLINE       0     0     0
	    ada16p1                                   ONLINE       0     0     0
	spares
	  gptid/7c043bb6-ff34-b044-927c-930623eeb5f7  AVAIL

errors: No known data errors

 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Why do the zfs and zpool listing differ? How can a raidz6 pool be larger than 8 x 4TB?
Because they show different things. `zfs list` shows space as it is visible to the user, taking into account various overheads, quotas, reservations, etc., while `zpool list` shows space as it is visible to the pool's space allocator, which does not bother with any of that and cares only about raw disk space.
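As a rough worked example with the vdev2 numbers above (approximate, binary units):
Code:
10 x 4 TB drives ~= 10 x 3.64 TiB ~= 36.4 TiB raw         -> zpool list SIZE shows 36.2T
raidz2 parity     = 2 drives' worth, so ~8/10 is usable   -> ~29 TiB before overhead
minus allocation/metadata overhead and the data written   -> zfs list shows 27.0T USED, 0 AVAIL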
 