Interpret zpool status


ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
The question is: is the L2ARC drive removed or not? And how come it was removed automatically?

Code:
 pool: tank
 state: ONLINE
status: One or more devices has been removed by the administrator.
		Sufficient replicas exist for the pool to continue functioning in a
		degraded state.
action: Online the device using 'zpool online' or replace the device with
		'zpool replace'.
  scan: none requested
config:

		NAME											STATE	 READ WRITE CKSUM
		tank											ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/615363da-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/623c2d72-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/631434e6-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/63fe194e-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/64f2846e-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/65cf0470-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
		cache
		  16126124850395889202						  REMOVED	  0	 0	 0  was /dev/gptid/c287982a-681f-11e7-aa4c-000743114150

errors: No known data errors


Code:
tank									21.8T   114G  21.6T		 -	 0%	 0%  1.00x  ONLINE  /mnt
  raidz2								21.8T   114G  21.6T		 -	 0%	 0%
	gptid/615363da-6810-11e7-aa4c-000743114150	  -	  -	  -		 -	  -	  -
	gptid/623c2d72-6810-11e7-aa4c-000743114150	  -	  -	  -		 -	  -	  -
	gptid/631434e6-6810-11e7-aa4c-000743114150	  -	  -	  -		 -	  -	  -
	gptid/63fe194e-6810-11e7-aa4c-000743114150	  -	  -	  -		 -	  -	  -
	gptid/64f2846e-6810-11e7-aa4c-000743114150	  -	  -	  -		 -	  -	  -
	gptid/65cf0470-6810-11e7-aa4c-000743114150	  -	  -	  -		 -	  -	  -
cache									   -	  -	  -		 -	  -	  -
  16126124850395889202				   373G	  0   373G		 -	 0%	 0%


Code:
2017-07-14.01:05:58 zpool add -f tank cache /dev/gptid/c287982a-681f-11e7-aa4c-000743114150

root@freenas:~ # 
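
If the device really is gone, I suppose the cleanup would be something like this (just a sketch, using the GUID from the status output above; the gptid would be whatever the re-attached SSD shows up as):

Code:
# Drop the stale cache entry by its GUID, then re-add the SSD by its gptid.
zpool remove tank 16126124850395889202
zpool add tank cache /dev/gptid/c287982a-681f-11e7-aa4c-000743114150
zpool status tank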
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
The same thing happens now: I add an L2ARC and FreeNAS keeps removing it, but it doesn't show up in the history!
I add a 1.8" Intel DC S3700 400GB as cache in the FreeNAS GUI, wait for it to complete, check zpool history and it has been added; a short while later, a minute or two, it's gone!

Can I debug this somehow? Is it a problem with the SATA connectors and cables perhaps? A glitch?
I will try to make a normal zvol of the SSD and monitor it (the commands I plan to run are after the status output below).


Code:
2017-07-14.01:05:58 zpool add -f tank cache /dev/gptid/c287982a-681f-11e7-aa4c-000743114150
2017-07-15.14:03:46 zpool add -f tank cache /dev/gptid/95562d24-6955-11e7-aa4c-000743114150

root@freenas:~ # zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices has been removed by the administrator.
		Sufficient replicas exist for the pool to continue functioning in a
		degraded state.
action: Online the device using 'zpool online' or replace the device with
		'zpool replace'.
  scan: none requested
config:

		NAME											STATE	 READ WRITE CKSUM
		tank											ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/615363da-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/623c2d72-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/631434e6-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/63fe194e-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/64f2846e-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
			gptid/65cf0470-6810-11e7-aa4c-000743114150  ONLINE	   0	 0	 0
		cache
		  16126124850395889202						  REMOVED	  0	 0	 0  was /dev/gptid/c287982a-681f-11e7-aa4c-000743114150
		  15992806694137838262						  REMOVED	  0	 0	 0  was /dev/gptid/95562d24-6955-11e7-aa4c-000743114150

errors: No known data errors
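
Here is what I plan to run while watching the L2ARC disappear (ada4 is an assumption for the SSD's device name; adjust to whatever camcontrol devlist reports):

Code:
# SMART health, error log and wear counters for the cache SSD (ada4 is an example name).
smartctl -a /dev/ada4
# Watch the kernel log for CAM/GEOM errors or detach events while the pool is under load.
tail -f /var/log/messages
dmesg | grep -i ada4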

 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
If ZFS keeps pushing out the device, then there is something wrong with it. ZFS will push a device out of a pool if it has detected errors it cannot recover from.
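
A quick way to test that (a sketch, using the GUID from the status output above): try to online the device again, as the status action suggests, and watch whether ZFS kicks it straight back out.

Code:
# Try to bring the removed cache device back online, then watch its state and error counters.
zpool online tank 16126124850395889202
zpool status -v tank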
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
Odd thing: there is still lots of activity on the disk, as if it's still there and functioning as an L2ARC. Look at this gstat output (and the commands I'm using to keep watching it, after the iostat below):

Code:
dT: 1.008s  w: 1.000s
L(q)  ops/s	r/s   kBps   ms/r	w/s   kBps   ms/w   %busy Name
	0	120	 72   9208	5.2	 44	528	0.1   28.5| ada0
	0	121	 72   9208	4.9	 45	532	0.1   28.5| ada1
	0	218	171  21774	5.4	 44	532	0.1   44.1| ada2
	0	  0	  0	  0	0.0	  0	  0	0.0	0.0| ada0p1
	0	145	 98  12574   10.1	 43	425	2.6   41.2| ada3
	0	  0	  0	  0	0.0	  0	  0	0.0	0.0| ada4
	0	  0	  0	  0	0.0	  0	  0	0.0	0.0| ada5
	0	146	 98  12574   10.6	 44	429	1.7   45.7| ada6
	0	120	 72   9208	5.2	 44	528	0.1   28.5| ada0p2
	0	218	171  21782	5.2	 44	421	0.7   41.8| ada7
	0	120	 72   9208	5.2	 44	528	0.1   28.5| gptid/615363da-6810-11e7-aa4c-000743114150



and iostat
Code:
root@freenas:~ # zpool iostat -v
										   capacity	 operations	bandwidth
pool									alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Film2								   1.44T   382G	  0	  0	 14	 15
  gptid/89bc0040-ed69-11e5-8183-000c2975a9ee  1.44T   382G	  0	  0	 14	 15
--------------------------------------  -----  -----  -----  -----  -----  -----
Storage								 1.36T  1.36T	  0	  3  31.0K   156K
  gptid/5d300082-eeb2-11e5-ae77-000c2975a9ee  1.36T  1.36T	  0	  3  31.0K   156K
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot							1.45G  12.9G	  0	  0  4.14K  5.57K
  da0p2								 1.45G  12.9G	  0	  0  4.14K  5.57K
--------------------------------------  -----  -----  -----  -----  -----  -----
tank									3.69T  18.1T	 11	 92  5.57M  15.6M
  raidz2								3.69T  18.1T	 11	 92  5.57M  15.6M
	gptid/615363da-6810-11e7-aa4c-000743114150	  -	  -	  7	 42   955K  3.96M
	gptid/623c2d72-6810-11e7-aa4c-000743114150	  -	  -	  7	 42   950K  3.96M
	gptid/631434e6-6810-11e7-aa4c-000743114150	  -	  -	  7	 42   942K  3.96M
	gptid/63fe194e-6810-11e7-aa4c-000743114150	  -	  -	  7	 42   943K  3.96M
	gptid/64f2846e-6810-11e7-aa4c-000743114150	  -	  -	  7	 43   953K  3.96M
	gptid/65cf0470-6810-11e7-aa4c-000743114150	  -	  -	  7	 42   957K  3.96M
cache									   -	  -	  -	  -	  -	  -
  16126124850395889202					  0   373G	  0	  0	  0	 55
  15992806694137838262				  6.26G   366G	  0	  0	780   213K
--------------------------------------  -----  -----  -----  -----  -----  -----
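
To keep watching whether the cache vdevs are actually in use, I'm running something like this (the 5-second interval is arbitrary):

Code:
# Refresh per-vdev I/O statistics every 5 seconds, including the cache devices.
zpool iostat -v tank 5
# Per-disk view of the same activity at the GEOM level (physical providers only).
gstat -p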

 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
zpool explicitly says that it has been removed, but there is no such entry in the history, and there is still activity on the disk itself!

"status: One or more devices has been removed by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state."
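
If the removal was triggered by ZFS itself rather than by a command, I suppose it would not show up in the plain history; maybe the internally logged events have it (worth a try):

Code:
# -i adds internally logged ZFS events to the user-initiated commands.
zpool history -i tank | tail -n 50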
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
Okay! I made a mirror out of the two Intel DC S3700 1.8" SSDs to get some more info, and yes, there are errors.

Code:
root@freenas:~ # zpool status -v ssdpool
  pool: ssdpool
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://illumos.org/msg/ZFS-8000-JQ
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon Jul 17 12:36:31 2017
config:

        NAME                      STATE    READ WRITE CKSUM
        ssdpool                   UNAVAIL     0 1.73K     0
          mirror-0                UNAVAIL     0   204     0
            15156771224732220850  REMOVED     0     0     0  was /dev/gptid/f68a5827-6acf-11e7-88bd-000743114150
            17737423659024182551  REMOVED     0     0     0  was /dev/gptid/f6c834d3-6acf-11e7-88bd-000743114150

errors: Permanent errors have been detected in the following files:

        ssdpool:<0x0>
        ssdpool:<0x65d>
        ssdpool:<0x666>
        ssdpool:<0x66c>
        ssdpool:<0x66f>
        ssdpool:<0x672>
        ssdpool:<0x675>
        ssdpool:<0x678>
        ssdpool:<0x67b>
        ssdpool:<0x67e>

So ZFS is removing the drives due to errors, both of them! That is unfortunate. Now the question is whether it's the adapters/cables/connectors or the drives themselves.

So I'll have to buy a new adapter and test, but what else can I do?
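
Before buying anything, I can at least run long SMART self-tests on both SSDs and try them on different ports/cables (ada4/ada5 are example device names; check camcontrol devlist for the real ones):

Code:
# Kick off long self-tests on both SSDs.
smartctl -t long /dev/ada4
smartctl -t long /dev/ada5
# After the tests finish, review the self-test and error logs.
smartctl -a /dev/ada4
smartctl -a /dev/ada5
# Once the drives are reseated (new cable/port), try clearing the pool errors.
zpool clear ssdpool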
 