SOLVED: VMFS partitions gone

Status
Not open for further replies.

godfather007

Dabbler
Joined
Mar 1, 2018
Messages
12
Hi,

I'm new to FreeNAS (11.1-U1) and I'm having a terrible experience at the moment.

I just migrated from OmniOS (Fibre Channel, with napp-it) to FreeNAS (iSCSI).

I created a few MPIO paths (vlan168: 192.168.168.128, vlan169: 192.168.169.128, vlan170: 192.168.170.128, vlan171: 192.168.171.128) to get round-robin.

The storage became visible and I migrated everything away from OmniOS.

At a certain point I realised the LUN IDs were set to "auto", so I gave them static LUN IDs to avoid the auto-magic.
After that the problems appeared and the (ESXi) datastores became unavailable.

Now I am at the point where the LUNs are detected as devices but not as datastores. After a rescan an event appears saying "datastore unknown".

I found an article, https://kb.vmware.com/s/article/2046610, on recovering partition tables, but vmkernel.log shows that only devices with 512- or 4096-byte block sizes can be mounted.
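(The KB starts from reading the current partition table of the device; roughly like this, with a placeholder device name, before recreating the VMFS partition from the reported geometry:

partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
)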

Is there anything known that I could try? This is a home setup.

Thanks in advance.

Martijn
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That *is* a terrible situation. It is never a good idea to try things like this with a filer that has production data on it. The best advice is to start rewinding your steps until things become visible again, assuming no permanent changes have been written and that it is actually recoverable. Trying to force your way forward carries more chance of permanent damage.

In the future: evacuate the datastores to other storage, destroy the pool, create everything fresh in FreeNAS using a tested configuration, and do not try to change the OS underneath it or anything like that. SAN-style configurations are extremely delicate, and while "big changes" should work in theory, real-world experience is somewhat different. There are issues at multiple levels that are not always trivial to resolve, so it's better to just create the configuration you want and stick with it.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Sounds like you may need to resignature your LUNs, as the LUN ID likely no longer matches what's recorded in the VMFS metadata. Follow this VMware KB for help.
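Roughly, from the ESXi shell it looks like this (the volume label below is just a placeholder; check the KB before touching anything with real data on it):

# list VMFS volumes the host treats as unresolved snapshots/replicas
esxcli storage vmfs snapshot list

# write a new signature so the volume can be mounted again
# (the datastore gets renamed to something like snap-xxxxxxxx-<label>)
esxcli storage vmfs snapshot resignature -l "YourDatastoreLabel"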
 

godfather007

Dabbler
Joined
Mar 1, 2018
Messages
12
Thanks all for your replies.

I found a message about:
The Physical block size “16384” reported by the device naa.6589cfc000000efd6a7a9ca3873eb79a is not supported. The only supported physical blocksizes are 512 and 4096
https://blog.robpatton.com/category/virtualization-vmware-freenas/

And another one that led me to the fix:
LVM: 11136: Device naa.6589cfc000000efd6a7a9ca3873eb79a:1 detected to be a snapshot:

http://www.virten.net/2016/11/usb-devices-as-vmfs-datastore-in-vsphere-esxi-6-5/

I have access again! :smile:
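For anyone else who runs into this, the two fixes from those articles boil down to roughly this: tick "Disable physical block size reporting" on the extent in FreeNAS, rescan, and then on the ESXi side mount the volume that was flagged as a snapshot (the label below is a placeholder):

# show the volume ESXi flagged as a snapshot
esxcli storage vmfs snapshot list

# mount it while keeping the existing signature (only safe when the
# original copy of the volume is not also presented to the same host)
esxcli storage vmfs snapshot mount -l "YourDatastoreLabel"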

Does anyone know if this is an iSCSI thing?
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Does anyone know if this is an iSCSI thing?
What do you mean, an iSCSI thing? ESXi only supports 512 (512n) or 512e bytes-per-sector disks for iSCSI storage at this time; that is why you have to disable physical block size reporting. If you don't, FreeNAS reports back the block size you set on your zvol (for device-based extents). As for your "detected to be a snapshot" error message, that's exactly what the article I linked earlier talks about, and it shows you how to correct it. That was a mistake on your side, not necessarily a problem with iSCSI.
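If you want to double-check what the host actually sees after changing that setting, a rescan plus the capacity listing should show the logical and physical block size per device (this is from memory on ESXi 6.5, so the exact subcommand may differ):

# rescan the storage adapters after changing the extent settings
esxcli storage core adapter rescan --all

# list the logical/physical block sizes as reported to ESXi
esxcli storage core device capacity list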
 

godfather007

Dabbler
Joined
Mar 1, 2018
Messages
12
Yes, of course it was an operational error on my side, although it's one that's very easy to run into.

When you create a zvol and an iSCSI extent with the defaults, these are the settings you get.
Apparently FreeNAS reports the ZFS block size to the host. At first ESXi just swallows those parameters, but as soon as you mess with the LUN IDs, like I did, VMware spits the LUN out in an inaccessible state.
Painful when you have just moved 10 TB onto it.

The second link fixed that, thanks again. It makes me think back to my Fibre Channel setup with napp-it: I never ran into problems this easily with the Solaris iSCSI target.
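For completeness: on the FreeNAS side, both of those knobs end up in the auto-generated /etc/ctl.conf. If I read the generated file correctly, the relevant part looks roughly like this (target/pool names are placeholders, and the middleware rewrites this file, so it's only useful for checking, not for editing by hand):

target iqn.2005-10.org.freenas.ctl:esx {
        portal-group pg1
        # the number after "lun" is the LUN ID set in the GUI
        lun 0 {
                path /dev/zvol/tank/esx-vol
                blocksize 512
                # added when "Disable physical block size reporting" is ticked
                option pblocksize 0
        }
}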
 