ZFS pool status UNKNOWN and unable to import


minihui

Cadet
Joined
Jul 30, 2018
Messages
2
Hello everybody,

I have a zpool whose status shows as UNKNOWN and which I cannot import:

Code:
cannot import 'PERC_RAID': I/O error
        Destroy and re-create the pool from
        a backup source.


Situation: I have been running 3x Dell R510 with H700 controllers and 12x 4TB SATA drives each, for backup purposes only, for years without a single problem until now. The new system is a Dell R720xd with an H710p and 24x 4TB SATA drives configured as a single RAID60 volume. The first stripe had some errors and the volume is in a Partially Degraded state. Ever since then, FreeNAS has thrown an error on every boot saying the zpool has errors that can be cleared. I cleared the error after every boot until yesterday, when I forgot to clear it, did some reads and writes, and then shut the system down. Now the zpool won't import.
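(By "clearing the error" I just mean the standard clear from a shell, something like the one-liner below with my pool name:)

Code:
# Reset the pool's error counters after the controller reports the stripe problem.
zpool clear PERC_RAID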

So my question is: is there any fix for this? It's very unlikely the logical drive has hardware bad blocks, since it's still protected by the H710p RAID60, so I guess the on-disk data is somehow corrupted.

And here comes another question: some of my data was migrated from another server whose snapshots are set to a retention of only two weeks, meaning the old snapshots are deleted automatically. Is there any possibility of bringing those back?
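If the pool ever comes back, the first thing I would do is list which snapshots actually still exist after the two-week retention; the dataset name below is only a placeholder for my real layout:

Code:
# List every surviving snapshot under the dataset, oldest first.
# "PERC_RAID/backups" is a placeholder dataset name, not my real layout.
zfs list -r -t snapshot -o name,creation -s creation PERC_RAID/backups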

Thank you all in advance.

--------

Update 0: I am currently running FreeNAS 11.1-U5

Update 1: if I do zpool import -F PERC_RAID, it shows:

Code:
cannot import 'PERC_RAID': one or more devices is currently unavailable


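Things I am planning to try next, based on what I have read; treat this as a sketch, I have not run these yet and I am not sure the extreme-rewind flag is even advisable on 11.1-U5:

Code:
# Dry run: report whether rewinding the last few transactions would make the pool importable.
zpool import -F -n PERC_RAID

# Forced read-only import, so nothing more gets written while I copy data off.
zpool import -o readonly=on -f PERC_RAID

# There is reportedly an extreme-rewind variant (-X, used together with -F); only noting it,
# since I am not sure it is safe here.
# zpool import -F -X PERC_RAID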
Update 2: if I do zdb -l /dev/da0p2 (the device where the ZFS volume is located), it shows:

Code:
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'PERC_RAID'
    state: 0
    txg: 487298
    pool_guid: 12288433215471904550
    hostid: 3438666489
    hostname: ''
    top_guid: 13006281068906994673
    guid: 13006281068906994673
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 13006281068906994673
        path: '/dev/gptid/ca91d306-695f-11e8-8be0-f8bc1246aad2'
        whole_disk: 1
        metaslab_array: 38
        metaslab_shift: 39
        ashift: 12
        asize: 88002801172480
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 5000
    name: 'PERC_RAID'
    state: 0
    txg: 487298
    pool_guid: 12288433215471904550
    hostid: 3438666489
    hostname: ''
    top_guid: 13006281068906994673
    guid: 13006281068906994673
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 13006281068906994673
        path: '/dev/gptid/ca91d306-695f-11e8-8be0-f8bc1246aad2'
        whole_disk: 1
        metaslab_array: 38
        metaslab_shift: 39
        ashift: 12
        asize: 88002801172480
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 5000
    name: 'PERC_RAID'
    state: 0
    txg: 487298
    pool_guid: 12288433215471904550
    hostid: 3438666489
    hostname: ''
    top_guid: 13006281068906994673
    guid: 13006281068906994673
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 13006281068906994673
        path: '/dev/gptid/ca91d306-695f-11e8-8be0-f8bc1246aad2'
        whole_disk: 1
        metaslab_array: 38
        metaslab_shift: 39
        ashift: 12
        asize: 88002801172480
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 5000
    name: 'PERC_RAID'
    state: 0
    txg: 487298
    pool_guid: 12288433215471904550
    hostid: 3438666489
    hostname: ''
    top_guid: 13006281068906994673
    guid: 13006281068906994673
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 13006281068906994673
        path: '/dev/gptid/ca91d306-695f-11e8-8be0-f8bc1246aad2'
        whole_disk: 1
        metaslab_array: 38
        metaslab_shift: 39
        ashift: 12
        asize: 88002801172480
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
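All four labels agree on the same txg and guid and point at the gptid device node, so one more thing I may try is telling the import exactly where to search, again read-only; the gptid path is the one shown in the label above:

Code:
# Point the import at the gptid device directory referenced by the label, read-only and forced.
zpool import -d /dev/gptid -o readonly=on -f PERC_RAID

# If the gptid link were missing, fall back to searching /dev directly.
zpool import -d /dev -o readonly=on -f PERC_RAID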
 

minihui

Cadet
Joined
Jul 30, 2018
Messages
2
Best of luck.

Kids... don't do hardware RAID with ZFS.

Yeah, I know, I know, I learned it the hard way. I didn't lose too much precious data on this one, so I'll wait a few days to see if there is a solution. If not, I'll just move this box to an HBA and then migrate all of my storage to HBAs...
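For moving the remaining backup pools onto HBAs, the rough plan would be a recursive snapshot plus a full send/receive into a fresh pool; the pool and snapshot names below are just placeholders:

Code:
# Take a recursive snapshot of everything on the source pool ("oldpool" is a placeholder).
zfs snapshot -r oldpool@migrate

# Replicate all datasets and their properties into a new pool created on the HBA.
# "tank" is a placeholder name for the new HBA-backed pool.
zfs send -R oldpool@migrate | zfs recv -F tank/migrated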
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The only things I can think of are:
  1. Verify your hardware RAID LUN(s) are still good, via the controller BIOS or another management tool (such as a bootable CD/DVD with the management software), and fix them if needed; a rough sketch of checking this from inside FreeNAS follows below.
  2. Gracefully power everything down, then boot up and watch for errors or other issues.
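Here is a minimal sketch of what I mean in item 1, assuming the H710p attaches via the mfi(4) driver (if it shows up through mrsas instead, you would need the vendor's CLI tool):

Code:
# Show the controller, its logical volumes, and the physical drives behind them.
mfiutil show adapter
mfiutil show volumes
mfiutil show drives

# Recent controller events often explain a "Partially Degraded" volume.
mfiutil show events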
 