I have a FreeNAS box, which I use as a "Time Machine" for the Mac in the house, as well as for video storage, general backups etc. It boots from an 8GB USB stick and has four 750GB HDDs configured as a RAIDZ1 pool.
I have more HDDs available, but limited SATA ports. So I bought an 8-port SATA card (dirt cheap from eBay) and decided to switch over to using it, with the intention of changing the PC case and adding extra drives later.
Oh dear.
I shouldn't have done that.
I *should* have tested the new controller with spare drives first, but I didn't; I used my existing pool drives.
I don't know what the controller has done to them, but now they aren't recognised as the pool any more :(
System:
Build FreeNAS-9.1.0-RELEASE-x64 (dff7d13)
Platform AMD Athlon(tm) II X4 640 Processor
Memory 7915MB
After a while sulking I went searching, and found http://serverfault.com/questions/297029/zfs-on-freebsd-recovery-from-data-corruption which looks similar enough to give me some optimism.
Code:
[root@FreeNAS] ~# zdb
Data:
    version: 5000
    name: 'Data'
    state: 0
    txg: 4956163
    pool_guid: 15585826249507765244
    hostid: 2429217988
    hostname: ''
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 15585826249507765244
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 5708209934565116186
            nparity: 1
            metaslab_array: 34
            metaslab_shift: 34
            ashift: 12
            asize: 2244012146688
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 6161345816377449791
                path: '/dev/gptid/64f8ba3a-1029-11e3-8a56-14dae9eaddf1'
                phys_path: '/dev/gptid/64f8ba3a-1029-11e3-8a56-14dae9eaddf1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 8929158312757003696
                path: '/dev/gptid/6560d59b-1029-11e3-8a56-14dae9eaddf1'
                phys_path: '/dev/gptid/6560d59b-1029-11e3-8a56-14dae9eaddf1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 2814370025582819070
                path: '/dev/gptid/65c95d49-1029-11e3-8a56-14dae9eaddf1'
                phys_path: '/dev/gptid/65c95d49-1029-11e3-8a56-14dae9eaddf1'
                whole_disk: 1
                create_txg: 4
    features_for_read:
Disturbingly, this only lists 3 HDDs, despite all four having been OK prior to the reboot. I also wonder if this is reading the ZFS cache file rather than the actual disks...
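To rule that out, one thing I plan to try is pointing zdb at the cache file explicitly. I *think* FreeNAS keeps it at /data/zfs/zpool.cache, but that path is an assumption on my part:

```shell
#!/bin/sh
# Dump the pool config straight from the cache file; if this matches the
# plain `zdb` output, that output may just be echoing the cache rather
# than reading the disks. (Cache path is an assumption on FreeNAS 9.1.)
CACHE=/data/zfs/zpool.cache
if [ -e "$CACHE" ]; then
    zdb -C -U "$CACHE" Data
else
    echo "missing: $CACHE"
fi
```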
Code:
[root@FreeNAS] ~# zdb -lll /dev/ada0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
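Before giving up on the labels entirely, I also want to try reading them from the GPT data partitions rather than the raw devices. If I understand the FreeNAS disk layout, ZFS lives on partition 2 of each disk (p1 is swap), so `zdb -l` on the bare ada0 could plausibly fail even on a healthy disk. A rough sketch of what I had in mind (guessing at the device names):

```shell
#!/bin/sh
# Read ZFS labels from each disk's data partition instead of the raw
# device. Assumes the FreeNAS layout of swap on p1 and ZFS on p2 of
# each GPT disk, so labels won't be found at offset 0 of the bare disk.
for disk in ada0 ada1 ada2 ada3; do
    dev="/dev/${disk}p2"
    echo "=== ${dev} ==="
    if [ -e "${dev}" ]; then
        zdb -l "${dev}"
    else
        echo "missing: ${dev}"
    fi
done
```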
That doesn't look good either...
Is there any likelihood of recovering this mess?
Any pointers as to the right direction to take from here? Starting to get completely disillusioned with computers again :(