ESXi 5.1u1 iSCSI LUN connection problem

Status
Not open for further replies.

Tweexter

Cadet
Joined
Jun 26, 2013
Messages
9
I think there's a slight chance I'm going crazy here, with the amount I've searched for an answer to this.
I've made a ZFS RAID volume on FreeNAS (tried both file and device extents) based on various guides, tutorials, etc. from the web. I've made the iSCSI target, mapped the extent to the target, all that fun stuff. The ESXi host can see the target and even reports the size of the LUN, but when I try to create a VMFS datastore on the LUN I get an error. Shown here:
[Attached screenshot: vmware-error.jpg]



Now, the best I can gather from researching this error online is that VMware cannot make heads or tails of the file system and therefore will not create a VMFS on it. I'm just stumped as to how to format the file system on the FreeNAS box into something that VMware can actually use, and thus move on with this project. None of the YouTube videos I've watched on using VMware with FreeNAS and iSCSI show anyone having this problem, and I'm pretty certain I'm following the directions correctly.

Before anyone asks:
This is not a main production environment - we have Dell EQL SANs for that.
I've gone over the guide, scouring for any VMware or iSCSI pointers.

Many thanks for any assistance
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you SSH into the VMware box, go to the datastore directory, and try to create a file or directory, does it actually get created? That would at least prove you have the appropriate permissions to do what you are wanting to do.
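A sketch of that write test from the ESXi shell (the datastore name here is a placeholder; substitute whatever the datastore would be called on your host):

```shell
# Run on the ESXi host (console or SSH). "freenas-lun" is a hypothetical
# datastore name - use your own from /vmfs/volumes/.
cd /vmfs/volumes/freenas-lun
touch writetest.tmp && echo "write OK" || echo "write FAILED"
rm -f writetest.tmp
```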
If that works, then I have to think that the issue is with ESXi somehow.

If it doesn't work, then it's a permissions issue. Mind you, the permissions issue could be a setting in FreeNAS, or ESXi mounting the remote iSCSI device as read-only, or something similar.
 

Tweexter

Cadet
Joined
Jun 26, 2013
Messages
9
I don't believe I'm even getting that far. When I SSH into the host, go to the volumes directory, and run a listing, that datastore is not there. This makes sense, since I cannot create the VMFS datastore on the LUN when I go to Storage > Add Storage...
I looked around in FreeNAS for any permissions issues. The target does have 'read-write', and I even went as far as checking R-W-E on all of the volume's permissions. Still no dice.
 

Tweexter

Cadet
Joined
Jun 26, 2013
Messages
9
In an effort to try and narrow down the problem I've tried a few other things...
I can connect to the iSCSI target from a Win7 PC using the built-in initiator, initialize the disk, create a volume, all that good stuff. (Still can't create the VMFS volume from ESXi, though.)
I also tried to create the VMFS volume from a production ESXi host in our vCenter cluster, to see if this test host was just corrupt or something, and the production host was unable to as well - same error.

Very confusing...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Sounds like the issue is on ESXi's side of the house. I don't have enough experience to tell you what you could try to fix the issue though.
 

Drew

Cadet
Joined
Jul 15, 2013
Messages
2
I'm kind of in the same boat... I can get the initiator to dynamically discover the FreeNAS target, however, I cannot seem to get vCenter to actually bring the storage up after re-scanning. No problem whatsoever connecting from a Windows 7 machine or a Server 20xx platform.
 

Tweexter

Cadet
Joined
Jun 26, 2013
Messages
9
Well, it's not the solution I wanted, but I did find it nonetheless. Turns out there's a problem with jumbo frames somewhere on the network. I reduced the iSCSI VMkernel port to an MTU of 1500 and was immediately able to create the datastore on the FreeNAS box.
Looks like a battle for another day though... it's a Cisco 3750X switch with both NICs on the iSCSI VLAN, set to 9000 MTU. The NIC on the FreeNAS box is set to 9000 as well. //shrug I give up
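For anyone following along, that sort of MTU change can also be made from the ESXi shell with esxcli (the vmk interface name below is a guess; check the output of the list command for the one bound to your iSCSI port group):

```shell
# Show VMkernel interfaces and their current MTU.
esxcli network ip interface list
# Drop the iSCSI VMkernel port (assumed here to be vmk1) back to standard frames.
esxcli network ip interface set --interface-name=vmk1 --mtu=1500
```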
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
There's a reason why I criticized jumbo frames here, here, here, here, and here. I'm sure there were plenty more places I said it's a PITA, but you get the point. Support amongst hardware and software is fractured, at best. If even one device on your network doesn't support the proper packet size (note that I did not say jumbo frames) then you will have nothing but pain and problems. One company defines "jumbo frames" as 4096 bytes, another 9000 bytes, another 9014 bytes. So who is correct? Everyone is. Everything over 1500 bytes (1514 bytes if you include the frame header) is a "jumbo frame". Unless you know exactly how many bytes every network port on every computer, switch, and router is set to, that they are all consistent, AND you took into account whether the manufacturer's size includes the frame header, it won't necessarily work right. Intel generally uses 4096 and 9014 bytes, but Marvell or Realtek (I forget which) uses 9000 bytes. That means if you have an Intel NIC and one of those others, you will NEVER get them to work together no matter how hard you try.
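The arithmetic behind those numbers is just the 14-byte Ethernet header (6-byte destination MAC + 6-byte source MAC + 2-byte EtherType) sitting on top of the MTU, which counts payload only:

```shell
# MTU counts the IP payload; the frame on the wire adds the Ethernet header.
ETH_HEADER=14          # dest MAC (6) + src MAC (6) + EtherType (2)
STANDARD_MTU=1500
JUMBO_MTU=9000

STANDARD_FRAME=$((STANDARD_MTU + ETH_HEADER))   # 1514 bytes on the wire
JUMBO_FRAME=$((JUMBO_MTU + ETH_HEADER))         # 9014 bytes on the wire

echo "standard frame: $STANDARD_FRAME bytes, jumbo frame: $JUMBO_FRAME bytes"
```

So a vendor quoting "9014" is counting the header and a vendor quoting "9000" is not; mixing those two conventions on one segment is exactly where the mismatches come from.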

People need to just give up on the jumbo frames already. It sounds so great, but is far from trivial to use without issues.

For me, my network printer doesn't support anything over 1514 bytes so for as long as I own my network printer I have no choice but to continue to use standard frame size.

* - Yes, I do realize that some very expensive networking hardware can do some kind of conversion to split a network based on packet size, but even that hasn't always worked 100%.
 

Tweexter

Cadet
Joined
Jun 26, 2013
Messages
9
I was starting to wonder if 'something' was asking for 9014 instead. However, in our main production environment all of the ESX hosts are at 9000 on the vSwitch and 9000 on the iSCSI VMkernels, the VLAN is at 9000, and the EqualLogic SAN is at 9000 across the board; everything is happy. Same switch model too, just a big stack of them over there and a single one over here. I'm still thinking the common issue is the switch, because there's a QNAP 10TB NAS there as well, with one NIC on iSCSI at 9000 MTU (selected from a dropdown menu in the NAS), and from the ESX shell I could not ping the NAS's NIC with 9000, only with the default.
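One way to test a jumbo path end to end from the ESXi shell is a don't-fragment ping sized to exactly fill a 9000-byte MTU: 9000 minus 20 bytes of IP header and 8 bytes of ICMP header leaves an 8972-byte payload. (The target IP below is a placeholder for your storage NIC.)

```shell
# -d sets don't-fragment, -s is the ICMP payload size.
# 9000 (MTU) - 20 (IP header) - 8 (ICMP header) = 8972
vmkping -d -s 8972 192.168.1.50
```

If this fails while a default-size vmkping succeeds, some hop in between is not actually passing 9000-byte frames.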
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
make sure everything is powercycled after setting jumbo. not just rebooted. physically de-energized. occasionally something stupid happens where some bit of all this new wondergear doesn't actually "take" a setting because of something like wol power.
 