HDD without partition scheme after failed Proxmox migration

padi
Cadet · Joined Sep 20, 2020 · Messages: 3
Hello,

I have a small setup with two Proxmox nodes and one FreeNAS storage box (ZFS over iSCSI). Today I tried to live-migrate a VM from one node to the other, but the migration failed and the main HDD failed with it. By "failed" I mean it is no longer recognized as a Linux disk. I booted CentOS in troubleshooting mode and selected "Repair CentOS Installation", but got the message "You don't have any Linux partitions". I then tried many approaches (fsck, testdisk, ddrescue, dumpe2fs -h /dev/sdb), and I also tried backing up the partition metadata from a healthy HDD and restoring it onto the damaged one (since it's the same installation), but without success.
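(For reference, that kind of partition-table copy can be done with sfdisk; sfdisk and the device names here are just one way to do it, assuming both disks really share the same layout, with /dev/sda as the healthy disk and /dev/sdb as the damaged one:)

$ sfdisk -d /dev/sda > sda-parts.dump   # dump the healthy disk's partition table to a file
$ sfdisk /dev/sdb < sda-parts.dump      # write that layout onto the damaged disk (destructive!)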

I can't explain what happened. I just tried to migrate the VM, but the migration failed; after that CentOS threw an error (I don't remember it exactly, but it was something about the filesystem) and I rebooted the VM. After the reboot the VM was stuck at GRUB. It's as if the HDD were empty.

Is there any chance to get my files back? :)

Thanks,
Adrian
 

padi
I checked the log files and the 'error' output was 'iscsiadm: No session found.'
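In case it helps anyone else: 'No session found' means the initiator had no active iSCSI login at that moment. Roughly, the session can be checked and re-established like this (the portal IP and target IQN below are placeholders, not from my setup):

$ iscsiadm -m session                                   # list active sessions (this is what failed)
$ iscsiadm -m discovery -t sendtargets -p 192.168.1.10  # rediscover targets on the storage portal
$ iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore -p 192.168.1.10 --login   # log back in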
 

padi
Some outputs:

$ xfs_repair /dev/sdb
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
..........found candidate secondary superblock..........

$ dumpe2fs /dev/sdb
dumpe2fs 1.42.9 (28-Dec-2013)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb
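Note that both commands above point at the whole disk (/dev/sdb) rather than a partition, so a bad magic number is expected even if the filesystem is still intact at some offset. One way to probe for a filesystem hiding behind the lost partition table is a read-only loop device; the 1 MiB offset below is only a guess at the usual first-partition start (testdisk can search for the real one):

$ losetup --find --show --read-only --offset $((2048 * 512)) /dev/sdb   # map the disk from 1 MiB on; prints e.g. /dev/loop0
$ dumpe2fs -h /dev/loop0    # retry the ext2/3/4 superblock check at that offset
$ xfs_repair -n /dev/loop0  # or the XFS check; -n inspects only, no writes
$ losetup -d /dev/loop0     # detach when done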
 