Hello everybody,
I have a zpool that is in an unknown state and cannot be imported:
Code:
cannot import 'PERC_RAID': I/O error
Destroy and re-create the pool from a backup source.
Situation: I have been using 3x Dell R510 servers with H700 controllers and 12x 4TB SATA drives each, for backup purposes only, for years, and never had a problem until now. The new system is a Dell R720xd with an H710p and 24x 4TB SATA drives in a single RAID60 volume. The first stripe had some errors, and the volume is in a Partially Degraded state. Since then, FreeNAS has thrown an error on every boot saying the zpool has errors that can be cleared. I cleared the error every time, until yesterday, when I forgot to clear it, did some reads and writes, and then shut the system down. Now the zpool won't import.
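For reference, the clearing I did on each boot was essentially the following (exact invocations from memory, so treat this as a sketch; the pool name matches the import attempts in the updates):

Code:
# Show pool health and any logged errors
zpool status -v PERC_RAID

# Clear the logged error counts for the pool (what I ran on every boot)
zpool clear PERC_RAID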
So my question is: is there any fix for this? It is very unlikely that the logical drive has hardware bad blocks, since it is still protected by the H710p's RAID60. So I guess the data on it is somehow corrupted.
And here is another question: some of my data was migrated from another server whose snapshots are set to be kept for only two weeks, meaning older snapshots are deleted automatically. Is there any possibility of bringing those back?
Thank you all in advance.
--------
Update 0: I am currently running FreeNAS 11.1-U5
Update 1: if I run zpool import -F PERC_RAID, it shows:
Code:
cannot import 'PERC_RAID': one or more devices is currently unavailable
Update 2: if I run zdb -l /dev/da0p2 (the partition where the ZFS volume is located), it shows:
Code:
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'PERC_RAID'
    state: 0
    txg: 487298
    pool_guid: 12288433215471904550
    hostid: 3438666489
    hostname: ''
    top_guid: 13006281068906994673
    guid: 13006281068906994673
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 13006281068906994673
        path: '/dev/gptid/ca91d306-695f-11e8-8be0-f8bc1246aad2'
        whole_disk: 1
        metaslab_array: 38
        metaslab_shift: 39
        ashift: 12
        asize: 88002801172480
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
(LABEL 1, LABEL 2, and LABEL 3 are identical to LABEL 0.)