Unable to import degraded pool?

Status
Not open for further replies.

Wiltony

Cadet
Joined
Feb 15, 2015
Messages
6
Hi, fairly novice user here. I've had a rock-solid 4-disk RAIDZ1 (ZFS's RAID 5 equivalent) system running for quite a few years, but a drive finally failed on me a few days ago. The failure also brought down the web interface (currently the only way I can connect to the system), I'm not sure why, so I swapped out the bad drive and the interface came back up. However, the volume status showed as UNKNOWN and zpool status reported "no pools available," so after doing some research, it looked like I needed to detach and re-import the degraded volume, then bring the replacement drive online.

My knowledge of the import function is limited, however -- all I can tell you is that the auto-import function in the GUI fails with the error message "Error: The volume "NAS9TB" failed to import, for further details check pool status." (But again, zpool status reports "no pools available.")

From the shell, I went ahead and ran zpool import and pasted the results below. Your guidance and suggestions as to my next steps to get this volume imported and back online are much appreciated, thank you!
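
(Side note in case it helps anyone following along: as far as I can tell, zpool status only lists pools that are already imported, while zpool import scans the disks for pools that could be imported, so the two commands disagreeing isn't actually a contradiction. Roughly:)

Code:
zpool status           # lists only pools that are currently imported
zpool import           # scans devices for importable pools and shows their state
zpool import NAS9TB    # attempts to actually import the pool by name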


I'm running FreeNAS-8.3.1-RELEASE-p2-x64 (r12686+b770da6_dirty)


Code:
[root@tewnas ~]# zpool import
   pool: NAS9TB
     id: 3543562285458676801
  state: DEGRADED
 status: One or more devices are missing from the system.
 action: The pool can be imported despite missing or damaged devices.  The
         fault tolerance of the pool may be compromised if imported.
    see: http://www.sun.com/msg/ZFS-8000-2Q
 config:

        NAS9TB                                          DEGRADED
          raidz1-0                                      DEGRADED
            gptid/1cb54f08-d870-11e7-a07a-001ec934bde9  ONLINE
            2172384104731514323                         UNAVAIL  cannot open
            gptid/1dc4b736-d870-11e7-a07a-001ec934bde9  ONLINE
            gptid/1e4c15e3-d870-11e7-a07a-001ec934bde9  ONLINE


Update: I ran import with -f. Here is the result:

Code:
[root@tewnas ~]# zpool import -f NAS9TB
cannot import 'NAS9TB': I/O error
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of Tue Jun  5 22:25:31 2018
        should correct the problem.  Approximately 30 seconds of data
        must be discarded, irreversibly.  After rewind, at least
        one persistent user-data error will remain.  Recovery can be attempted
        by executing 'zpool import -F NAS9TB'.  A scrub of the pool
        is strongly recommended after recovery.
[root@tewnas ~]#
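
(If I'm reading the man page right, -f just forces the import, while -F rewinds the pool to an earlier transaction group; adding -n should make it a dry run that reports whether the rewind would succeed without changing anything on disk. Something like:)

Code:
zpool import -F -n NAS9TB    # dry run: check whether rewind recovery would succeed
zpool import -F NAS9TB       # actually perform the rewind recovery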


Update 2: I ran import with -fF. It seemed to work. zpool status reports as below. What now?

Code:
  state: DEGRADED
 status: One or more devices are faulted in response to IO failures.
 action: Make sure the affected devices are connected, then run 'zpool clear'.
    see: http://www.sun.com/msg/ZFS-8000-JQ
   scan: scrub repaired 0 in 6h38m with 0 errors on Sun May 20 06:38:22 2018
 config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS9TB                                          DEGRADED     0     0     1
          raidz1-0                                      DEGRADED     0     0     6
            gptid/1cb54f08-d870-11e7-a07a-001ec934bde9  ONLINE       0     0     0
            2172384104731514323                         UNAVAIL      0     0     0  was /dev/gptid/1d36da7d-d870-11e7-a07a-001ec934bde9
            gptid/1dc4b736-d870-11e7-a07a-001ec934bde9  ONLINE       0     0     0
            gptid/1e4c15e3-d870-11e7-a07a-001ec934bde9  ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list
[root@tewnas ~]#


Update 3: ran zpool clear. Seemed to hang. Waited 10 min, rebooted system. Will report back!

Update 4: It seems like I'm back at square one. zpool status reports no pools available. I just ran zpool import (and then zpool import -f) again; results below:

Code:
[root@tewnas ~]# zpool status
no pools available
[root@tewnas ~]# zpool import
   pool: NAS9TB
     id: 3543562285458676801
  state: DEGRADED
 status: One or more devices are missing from the system.
 action: The pool can be imported despite missing or damaged devices.  The
         fault tolerance of the pool may be compromised if imported.
    see: http://www.sun.com/msg/ZFS-8000-2Q
 config:

        NAS9TB                                          DEGRADED
          raidz1-0                                      DEGRADED
            gptid/1cb54f08-d870-11e7-a07a-001ec934bde9  ONLINE
            2172384104731514323                         UNAVAIL  cannot open
            gptid/1dc4b736-d870-11e7-a07a-001ec934bde9  ONLINE
            gptid/1e4c15e3-d870-11e7-a07a-001ec934bde9  ONLINE
[root@tewnas ~]# zpool import -f NAS9TB
cannot mount '/NAS9TB': failed to create mountpoint
cannot mount '/NAS9TB/NASPlugins': failed to create mountpoint
cannot mount '/NAS9TB/NASPlugins/Jail': failed to create mountpoint
cannot mount '/NAS9TB/NASPlugins/Software': failed to create mountpoint
[root@tewnas ~]#


My zpool status -v reports:

Code:
errors: Permanent errors have been detected in the following files:

        <metadata>:<0x1d>


Tried zpool clear again and it returned to a cleared shell screen. zpool status shows the error is still there, and the volume does not show up in my GUI. Should I try again? Any other ideas? All suggestions are appreciated, thank you!
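
(One idea I've seen suggested for pools with permanent metadata errors is a read-only import, just to copy the data off before trying anything riskier; I haven't attempted it yet. Something like:)

Code:
zpool import -o readonly=on -f NAS9TB    # import without allowing any writes to the pool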

Tried zpool clear -nFX NAS9TB, but it does nothing: it just sits there with no output, then the command line comes back after a bit.

Tried zpool scrub NAS9TB, but it says:

Code:
[root@tewnas ~]# zpool scrub NAS9TB											
cannot scrub NAS9TB: pool I/O is currently suspended							
[root@tewnas ~]#



HHHHEEEELLLLPPPPP LOL


Update 5: OOOOKKKK I think I'm missing the part where I replace the old disk in the pool! For some reason I missed that the pool doesn't automatically pick up the new physical drive that was swapped in. Now how do I figure out what the device name of the new drive is....
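
(For anyone else stuck at this step: on FreeBSD/FreeNAS these commands should list the attached disks and their labels, which is how I'm trying to work out which device is the new drive.)

Code:
camcontrol devlist    # list every disk the kernel sees
glabel status         # map gptid/... labels back to adaX device names
gpart show            # show partition tables (a brand-new disk has none)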

Update 6: Okay it's ada0. But now my problem is:

Code:
[root@tewnas ~]# zpool replace NAS9TB 2172384104731514323 /dev/ada0
cannot replace 2172384104731514323 with /dev/ada0: pool I/O is currently suspended
[root@tewnas ~]# zpool offline NAS9TB 2172384104731514323
cannot offline 2172384104731514323: no valid replicas
[root@tewnas ~]#
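
(Noting for later: from what I've read, FreeNAS doesn't normally give ZFS the raw adaX device; the GUI replacement workflow partitions the new disk with a swap slice and a ZFS slice, then replaces using the gptid of the ZFS partition. Once the pool is imported and I/O is no longer suspended, the manual equivalent would be roughly the following, assuming ada0 really is the blank replacement disk and using the 2 GB swap size FreeNAS defaults to:)

Code:
gpart create -s gpt ada0                 # write a new GPT partition table to the blank disk
gpart add -t freebsd-swap -s 2g ada0     # swap slice, matching the FreeNAS default
gpart add -t freebsd-zfs ada0            # ZFS slice using the rest of the disk
glabel status                            # look up the new gptid/... label of ada0p2
zpool replace NAS9TB 2172384104731514323 gptid/<gptid-of-ada0p2>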
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Please stop executing commands more or less blindly and wait for help. You can easily go from a recoverable pool to a total loss of the pool if you run a command you shouldn't run.
 

Wiltony

Cadet
Joined
Feb 15, 2015
Messages
6
I'd love to try to recover, but I'm losing hope quickly. I suspect a drive went bad on me a while ago and FreeNAS marked it offline, but for some reason I never got an alert, so I didn't know I had already been running in a degraded state; then a second drive went out, resulting in this. Does anyone know a way to confirm that? I just need to know whether to cut my losses and move on, or keep working at trying to recover this pool.
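
(I'm guessing the SMART data on each disk would show when errors started accumulating, which should confirm whether one drive had already failed quietly before the second one went. If smartctl is available, something like this for each of the four disks:)

Code:
smartctl -a /dev/ada0    # full SMART report: check Reallocated_Sector_Ct,
smartctl -a /dev/ada1    # Current_Pending_Sector, and the timestamps in the
smartctl -a /dev/ada2    # SMART error log to see when each drive went bad
smartctl -a /dev/ada3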
 