Unable to replace failed drive - [MiddlewareError: freebsd-zfs partition could not be found]

Status
Not open for further replies.

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Hi,

I'm attempting to replace a failed disk on my 4-disk NAS running FreeNAS 11-MASTER-201706300526 (4ef764a).

However, when I click the UNAVAIL drive, hit Replace, and select my replacement drive (with Force, as it's not empty), I get the error:

[screenshot: the Replace dialog failing with "MiddlewareError: freebsd-zfs partition could not be found"]


The full traceback is:
Code:
Environment:

Software Version: FreeNAS-11-MASTER-201706300526 (4ef764a)
Request Method: POST
Request URL: http://192.168.5.38/storage/zpool-datastore/disk/replace/5473229154515832404/


Traceback:
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
  42.			 response = get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _legacy_get_response
  249.			 response = self._get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
  178.			 response = middleware_method(request, callback, callback_args, callback_kwargs)
File "./freenasUI/freeadmin/middleware.py" in process_view
  162.		 return login_required(view_func)(request, *view_args, **view_kwargs)
File "/usr/local/lib/python3.6/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
  23.				 return view_func(request, *args, **kwargs)
File "./freenasUI/storage/views.py" in zpool_disk_replace
  893.			 if form.done():
File "./freenasUI/storage/forms.py" in done
  2067.			 passphrase=passfile
File "./freenasUI/middleware/notifier.py" in zfs_replace_disk
  1043.			 raise MiddlewareError('freebsd-zfs partition could not be found')

Exception Type: MiddlewareError at /storage/zpool-datastore/disk/replace/5473229154515832404/
Exception Value: [MiddlewareError: freebsd-zfs partition could not be found]
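The traceback bottoms out at notifier.py line 1043, which raises when no freebsd-zfs partition can be found on the freshly partitioned replacement disk. A quick way to check from the console whether such a partition actually exists on the disk (the device name ada2 below is a hypothetical placeholder, not taken from this system):

```shell
# Succeeds (exit 0) if the piped `gpart show` output lists a freebsd-zfs
# partition. On the NAS console you would run something like:
#   gpart show ada2 | has_zfs_part    # ada2 is a hypothetical disk name
has_zfs_part() {
  grep -q 'freebsd-zfs'
}

# Demo against a sample gpart-style line:
printf '%s\n' '4194432 7809842696 2 freebsd-zfs (3.7T)' | has_zfs_part && echo "found"
```

If nothing matches, the disk never got the freebsd-zfs partition the middleware expects, which is exactly the condition the error reports.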


Any thoughts on what's going on?

Thanks,
Victor
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Sure, `zpool status` is here:
Code:
  pool: datastore
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
	the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 33h45m with 0 errors on Mon Jul  3 09:45:29 2017
config:

	NAME                                            STATE     READ WRITE CKSUM
	datastore                                       DEGRADED     0     0     0
	  raidz1-0                                      DEGRADED     0     0     0
	    gptid/1b019b58-5db5-11e6-92fe-10604b92dc14  ONLINE       0     0     0
	    5473229154515832404                         UNAVAIL      0     0     0  was /dev/gptid/1bd01f4e-5db5-11e6-92fe-10604b92dc14
	    gptid/1b586c61-5db5-11e6-92fe-10604b92dc14  ONLINE       0     0     0
	    gptid/1c918aec-5db5-11e6-92fe-10604b92dc14  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

	NAME          STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  da0p2       ONLINE       0     0     0

errors: No known data errors
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Interesting.

Has the "new" disk actually been used for something else in the past? If so, try wiping all traces of its old partitions before retrying the replace.
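A sketch of clearing leftover partitions from the FreeBSD console, along the lines suggested above (the device name ada2 is a hypothetical placeholder; these commands irreversibly destroy all data on that disk, so double-check the name first):

```shell
# ada2 is a hypothetical placeholder -- substitute your replacement disk.
# WARNING: these commands irreversibly destroy all data on that disk.

# Remove any existing GPT/MBR partition table:
#   gpart destroy -F ada2

# ZFS keeps label copies at both the start and the END of the disk, so
# zero the first and last few MiB too. Given the disk size in bytes
# (from `diskinfo ada2`), compute the 1-MiB-block offset of the final 4 MiB:
wipe_tail_offset() {
  echo $(( $1 / 1048576 - 4 ))   # size in bytes -> MiB offset
}

# Then:
#   dd if=/dev/zero of=/dev/ada2 bs=1m count=4
#   dd if=/dev/zero of=/dev/ada2 bs=1m oseek=$(wipe_tail_offset SIZE_BYTES)

wipe_tail_offset 8589934592   # e.g. an 8 GiB disk -> prints 8188
```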
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Just on a hunch, I upgraded my FreeNAS to the latest 11 nightly (build from 2017-07-23), and the replacement proceeded this time without the crash.

It's currently resilvering, fingers crossed. It's taking a long time, around 80 hours for 8 TB (these are Seagate SMR drives, which I assume is related?).
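For what it's worth, 8 TB in roughly 80 hours works out to about 27 MB/s sustained, which is slow but not implausible for a resilver onto an SMR drive. A quick back-of-the-envelope check (integer arithmetic, decimal units: 1 TB = 10^12 bytes):

```shell
# Average resilver throughput in whole MB/s, given size in TB and elapsed hours.
resilver_mb_s() {
  echo $(( $1 * 1000000 / ($2 * 3600) ))
}

resilver_mb_s 8 80   # prints 27
```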

So could this just be a bug that was fixed very recently?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is there a reason you're running nightlies rather than the stable train? Wanting stability and running nightlies isn't a very good combination.
 