ESXi 6.7 iSCSI datastore creation fails: An error occurred during host configuration

kahlid74

Dabbler
Joined
Jun 27, 2019
Messages
14
I'm exhausted trying to troubleshoot this, and it appears to be a FreeNAS-only problem. I have QNAP, Synology, TCAS and Nexenta iSCSI datastores with no problems. I transitioned from a custom-built server to a Dell R710 to remove any challenges/concerns around whiteboxes and VMware. I have taken this Dell R710 with six disks, a controller and a 10GbE NIC, loaded Nexenta, and VMware attaches without issue. With FreeNAS it's the same every time, no matter how many settings I change. Has anyone successfully connected a FreeNAS server to VMware 6.7 with iSCSI and gotten it to work with VMFS 6.0? VMware tells me nothing aside from "An error occurred during host configuration".

Any ideas? Is there any additional data I can pull to spelunk what might be happening? Apologies for being overly dramatic, but I'm just straight-up done trying to make this work, and would love for someone to point out something stupid I missed so I can move on.
 

veldthui

Dabbler
Joined
Nov 28, 2019
Messages
47
I have it set up and working perfectly. It was easier than setting up iSCSI on the Synology. Using FreeNAS 11.3-RC2 and ESXi 6.7U3.

Created a zvol for the storage, then went to Services -> iSCSI. Set the Target Global Config, then created a portal. Next created an initiator and a target. Finally an extent and an associated target.
Once the FreeNAS side was done, I went to the Configure menu for the host, selected Storage Adapters, and selected the iSCSI adapter. Added a new Dynamic Discovery entry with the IP of the FreeNAS box, rescanned the storage, and it popped up. Then simply added it using Add Storage and it was working. On my other hosts it just showed up after adding the Dynamic Discovery entry and doing a rescan.
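For reference, the ESXi side of that (dynamic discovery plus rescan) can also be done from the host shell. This is just a sketch: the adapter name vmhba64 and the 192.168.1.50 address are placeholders for your own environment.

```shell
# List iSCSI adapters to find the software initiator's name (e.g. vmhba64)
esxcli iscsi adapter list

# Add the FreeNAS portal as a dynamic discovery (Send Targets) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.50:3260

# Rescan so the new LUN shows up under storage devices
esxcli storage core adapter rescan --adapter=vmhba64

# Confirm the device appeared (FreeNAS extents report "FreeNAS" as the vendor)
esxcli storage core device list | grep -i -A 2 freenas
```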
 

ms308680

Cadet
Joined
Jan 26, 2020
Messages
5
Um... something I am thinking of off the top of my head, but FreeNAS does warn you about it: if you want FreeNAS to handle the disks, the RAID controller needs to be flashed with IT firmware (at least for the LSI adapters). Also make sure you enable pass-through on the host and add the RAID card as a PCI device to the FreeNAS VM. You did not give many details, so I am going through some basics, I guess. Also, when you add the PCI device to the VM for pass-through, you need to reserve all of the VM's RAM or it will not let you save the config or boot the VM. Everything else that veldthui said above is straightforward.

Put it like this, a really simple iSCSI config, minimum:

Create a Volume in Storage
Create a zvol

then:
go to Sharing -> Block (iSCSI) (most of the defaults here will work fine)

you can leave Target Global Config alone
in Portals, create one (if you use Authorized Access you will need to choose it here)
go to Initiators and create one
you can skip Authorized Access if you want, but for security set it up here
add a Target (if you use Authorized Access you will need to choose it here); leave Delete unchecked
go to Extents and create one
go to Associated Targets, create one and choose the Target, leave the LUN ID as-is, choose the Extent, and save
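If you'd rather script the storage prep from the steps above, the volume/zvol part can be done from the FreeNAS shell too. A minimal sketch, where the pool name tank, the dataset name vmfs01, and the 500G size are made-up examples:

```shell
# Create a sparse (-s) zvol to back the iSCSI extent.
# "tank", "vmfs01" and 500G are example values -- use your own pool and sizing.
zfs create -s -V 500G tank/vmfs01

# Double-check the volume size and block size that will be reported
zfs get volsize,volblocksize tank/vmfs01
```

The extent, target and portal pieces themselves are easier to do in the UI as described above.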

then you can go to the VMware hypervisor, go to Storage Adapters to configure the iSCSI adapter there, and use veldthui's info from above to finish.


hope this helps, good luck
 

kahlid74

Dabbler
Joined
Jun 27, 2019
Messages
14
Thanks for the replies.

@veldthui - That is exactly what I did, and at the stage where I click to create the datastore, it bombs out in VMware with "An error occurred during host configuration."

@ms308680 - Nexenta is the same way, utilizing ZFS, and it works as expected there. Additionally, I have flashed my controller with the IT firmware. It's all gravy train there.

Any other thoughts?
 

ms308680

Cadet
Joined
Jan 26, 2020
Messages
5
Are you using Authorized Access? If so, check your Discovery Auth Method in Portals and the Auth Method in the Target. If you are, try CHAP/CHAP instead of one of each, and also try None/None as a test. On the VMware side, choose "Do not use CHAP unless required by target" on both.

Have you limited the network on the Initiators page? (Try changing it to ALL/ALL as a test.)

I have also disabled iSCSI in VMware, rebooted the host, reconfigured iSCSI in VMware, and tried again; maybe a change happened that requires a reboot on the VMware host.
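That disable/re-enable step can also be done from the host shell, assuming you're using the software iSCSI initiator; a sketch:

```shell
# Turn the software iSCSI initiator off, then back on (reboot in between if needed)
esxcli iscsi software set --enabled=false
esxcli iscsi software set --enabled=true

# Verify its state
esxcli iscsi software get
```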

If none of the above helps, the only thing left is below.

Post some screenshots of the storage config (just the Volumes page is fine), all the iSCSI settings, and the iSCSI configuration in VMware. If you are using Authorized Access, you can just screenshot the list screen; you don't have to edit it, and you can blank out the user name and peer name in that screenshot if you would like. That is all I can think of to help without seeing it. I am sorry.
 

ms308680

Cadet
Joined
Jan 26, 2020
Messages
5
Oh, you actually see the iSCSI device under Devices? Sounds like you might be using Authorized Access and it is discovering the device but cannot write to it. Double-check the target settings on FreeNAS; if you have chosen an auth method, be sure to choose an Authentication Group number.
 

ms308680

Cadet
Joined
Jan 26, 2020
Messages
5
Here is a link to someone on YouTube who does an OK job of showing how this is done. I do a few things differently; he does the bare minimum and runs into a few problems, but what he does does work and is 95% of what I was trying to show you. There are a few things I do extra since the FreeNAS is being hosted on itself, and I can try to find the documentation I went through. I will be setting up another server in a few days, and if you are still having problems, I would not mind you watching what I do to set up a new FreeNAS/VMware server from scratch.

Let me know via direct message and I will set something up with you. By all means, I am not an expert in this; I have learned it through walkthroughs and trial and error.

I do hope the above information helped you out. I look forward to hearing from you. Best of luck

Here is the video I was speaking about:

 

kahlid74

Dabbler
Joined
Jun 27, 2019
Messages
14
Thanks for the replies, @ms308680. Because the disk is seen in VMware, attaches, and creates part of the partition, it is technically set up correctly and good to go. Something else is happening where the entire thing breaks down. I'll see if I can get some screenshots.
 

kahlid74

Dabbler
Joined
Jun 27, 2019
Messages
14
Went to the ESXi host itself and found more detail. Then went into vmkernel.log and got even more data:

Failed to create VMFS datastore SAN238 - Operation failed, diagnostics report: Unable to create Filesystem, please see VMkernel log for more details: Failed to check device /dev/disks/naa.6589cfc000000ee82b54d74ad50d8cb4:1 capable of ATS

Drilling deeper into ATS, it turns out that VMware can't write and then access the partition table. This is because the block sizing is out of whack.

So I blew away the iSCSI extent, the zvol, and then the pool. After that, I recreated them with the default block size for the pool, zvol and extent, but chose to disable physical block size reporting. The VMFS datastore failed right away, but in vmkernel.log it was a different error this time:

2020-01-31T20:00:57.926Z cpu2:2097833)WARNING: ScsiPath: 4786: The Physical block size "131072" reported by the path vmhba64:C0:T2:L1 is not supported. The only supported physical blocksizes are 512 and 4096
2020-01-31T20:00:57.926Z cpu5:2100000)WARNING: ScsiDeviceIO: 6678: The Physical block size "131072" reported by the device naa.6589cfc000000fe48243f269f02aadc5 is not supported. The only supported physical blocksizes are 512 and 4096

So I blew it all away and redid it again, and this time I used default block sizes but allowed it to report the block size, and BOOM, VMware was happy as all get out.

As an experiment, I tried 4096, which was what I originally did, and it failed again. Going back to a 512 extent block size seems to be the happy place for the latest VMware 6.7.
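For anyone who hits this later: the quickest check is grepping vmkernel.log for that physical block size warning. A sketch, using the two log lines pasted above as a stand-in for the real log file:

```shell
# Pull the reported physical block size out of vmkernel.log-style warnings.
# On a live ESXi host you would grep /var/log/vmkernel.log instead of this here-doc.
grep -o 'Physical block size "[0-9]*"' <<'EOF' | sort -u
2020-01-31T20:00:57.926Z cpu2:2097833)WARNING: ScsiPath: 4786: The Physical block size "131072" reported by the path vmhba64:C0:T2:L1 is not supported. The only supported physical blocksizes are 512 and 4096
2020-01-31T20:00:57.926Z cpu5:2100000)WARNING: ScsiDeviceIO: 6678: The Physical block size "131072" reported by the device naa.6589cfc000000fe48243f269f02aadc5 is not supported. The only supported physical blocksizes are 512 and 4096
EOF
```

Anything other than 512 or 4096 in the output means ESXi will refuse to format the device, which is exactly what was happening here.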

What a pain in the ass, but alas it's working, yay!!
 