Previous Post - Here
Previously had some issues with either a bad cable, bad HBA, or bad IOM on my shelf.
Dell R720xd
2x E5-2670
384 GB RAM
LSI SAS-9207-8e
NetApp DS4246 (2x)
FreeNAS 11.1-U7 AND 11.2-U7
36-disk pool: 3x RAIDZ3 vdevs (12 disks each)
I was (and still am) seeing numerous errors about corrupted GPTs.
After reading up on multipath configurations (not something I'd ever set up before), I realized FreeNAS had started doing it automatically when I swapped in my new HBA. Worse, it was creating multipath devices that combined multiple physical disks into one. I ran gmultipath destroy on all the multipath devices that showed up, and lo and behold, a pool was visible to import!
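For anyone hitting the same thing, the cleanup looked roughly like this. This is a sketch, not my exact session: the device names (disk1, disk2, da5, da6) are hypothetical examples, and the parsing may need adjusting to match your gmultipath status output.

```shell
# Show the automatically created multipath devices and their members
gmultipath status

# Hypothetical example output:
#   Name             Status  Components
#   multipath/disk1  OPTIMAL da5 (ACTIVE)
#   multipath/disk2  OPTIMAL da6 (ACTIVE)

# Destroy each multipath device so the raw da* disks reappear.
# The awk/sed pipeline skips the header line and strips the "multipath/" prefix.
for name in $(gmultipath status | awk 'NR>1 {print $1}' | sed 's|multipath/||'); do
    gmultipath destroy "$name"
done
```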
I ran
zpool import -f tank
and zpool status then showed:
Code:
root@freenas[/dev]# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 0 days 11:18:07 with 0 errors on Sun Sep 29 08:18:17 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            DEGRADED     0     0     0
          raidz3-0                                      DEGRADED     0     0     0
            gptid/e6f85169-5136-11e8-b376-bc305bf48148  ONLINE       0     0     0
            731087118901044337                          UNAVAIL      0     0     0  was /dev/gptid/f2bd171b-5136-11e8-b376-bc305bf48148
            gptid/181f4f1a-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/24da255f-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/325a8e49-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/4a4df2c1-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/6a52ee8d-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/7bc6a747-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/8d306d6b-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/a82f0ce2-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/bb5df1ae-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
            gptid/db7bb774-5137-11e8-b376-bc305bf48148  ONLINE       0     0     0
          raidz3-1                                      DEGRADED     0     0     0
            gptid/70a9ffa7-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/7485f02d-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/787153df-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/7c4f172c-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/8148a83c-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            17309881899809254171                        UNAVAIL      0     0     0  was /dev/gptid/864aa9e8-96d0-11e8-a512-bc305bf48148
            gptid/8a5bc22b-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/8ed0335e-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/92c028a5-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/9793ea3b-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/9c71d36c-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
            gptid/a1766b2e-96d0-11e8-a512-bc305bf48148  ONLINE       0     0     0
          raidz3-2                                      ONLINE       0     0     0
            gptid/bc7a19f9-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/bd626022-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/bef543fe-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c0e17dc6-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c27fc9e2-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c365fdbb-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c46b5d52-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c60d3684-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c7a6bc30-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/c9416b43-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/ca343c33-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0
            gptid/cb3f2f2c-c4cb-11e8-b538-bc305bf48148  ONLINE       0     0     0

errors: No known data errors
My volume isn't mounted and the GUI shows no pools/volumes (some of my old snapshots do show up, though). I have a spare disk for raidz3-0 and can get another (or use a larger disk) for raidz3-1.
Questions:
1) Do I simply run
zpool replace tank da33 da<newdisk>
and wait? The manual has steps for replacing a failed disk in the GUI, but I don't have a pool in my GUI.
2) Can I mount the volume while the pool is in its current degraded state?
3) If I replace both failed disks, anything else I need to do?
4) Can I disable multipathing? At one point, while deleting multipaths, I did a disk rescan and it recreated the multipaths.
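To make question 1 concrete, here's what I'm picturing, sketched from the zpool(8) man page rather than tested on this pool. The guids come from the zpool status output above; the replacement device names (da36, da37) are hypothetical placeholders for whatever the new disks enumerate as.

```shell
# Pull the guids of the UNAVAIL members straight out of zpool status
zpool status tank | awk '$2 == "UNAVAIL" {print $1}'
# -> 731087118901044337
#    17309881899809254171

# Then replace each failed member with a new disk (names hypothetical);
# resilvering starts automatically and progress shows in 'zpool status':
# zpool replace tank 731087118901044337 /dev/da36
# zpool replace tank 17309881899809254171 /dev/da37
```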
I intend to wipe this pool after getting it online; I'm treating this as a big learning opportunity, since I've never had to deal with a failed disk on FreeNAS. I'm still not entirely certain why I had problems in the first place, but my guess is something with my old HBA, shelf, IOM, or a cable. And then, when I installed the new HBA, it somehow decided to use multipath.
Cheers and thanks for the help!