SOLVED zVol iSCSI to ESXi 6.5 - Unable to create datastore (ATS error?)

Status
Not open for further replies.

Isuress

Dabbler
Joined
Oct 11, 2016
Messages
14
Hello FreeNAS forums; it's been some time since I've had to post here. I guess in most cases that would be a good thing, lol.

So I'm having an issue with my FreeNAS SSD storage zpool that is shared over iSCSI to my ESXi 6.5 hosts.
I have a zpool of three 256 GB SSDs that are striped (RAID0). I have 3 zvols in this pool, each with a different block size. (I was going to test VM performance at each block size to see which was the best fit - Google research suggests 64K blocks are the best overall fit, but we'll see if my results are the same.)
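For reference, a rough sketch of creating test zvols like these from the shell; the pool and zvol names are just placeholders, and on FreeNAS the GUI is the normal way to do this so the middleware knows about the volumes.

Code:
# Rough sketch: create three sparse test zvols with different volblocksize
# values, similar to the setup described above. Pool and zvol names are
# placeholders; the FreeNAS GUI is the normal way to create these.
import subprocess

POOL = "ssdpool"  # placeholder pool name
for blocksize in ("16K", "64K", "128K"):
    subprocess.run(
        ["zfs", "create", "-s", "-V", "100G",
         "-o", f"volblocksize={blocksize}",
         f"{POOL}/iscsi-test-{blocksize.lower()}"],
        check=True,
    )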

I have 3 hosts. I attempted iSCSI on host #1 and host #2.
Host #1 is able to find FreeNAS and can see the LUNs, but it times out when connecting to them; I am unable to make a datastore.
Host #2 is able to find FreeNAS and is even able to connect. The issue arises when I go to make a datastore.
Weird things start to show up in the wizard. For example, I'm able to see the 3 LUNs I've presented for iSCSI. They show up with the 100 GB capacity I've set for them, but when I continue the wizard to the partition step, it only shows 50 GB out of the 100 GB. Completing the wizard then ends in an error regarding ATS.
What's weird is that if I redo the datastore wizard on a LUN the wizard already failed on, it tells me that a partition already exists and that it will reformat the disk.
That means Host #2 is able to format the LUN, but it then fails with an ATS error when it tries to add the datastore to the list of available storage.
I've done a lot of Googling on and off over the last week or so, but I haven't found anything conclusive about the ATS error.

Please click on the hyperlinks to see pictures of the errors, etc.

What other information can I give that can help diagnose the issue?
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
What version of FreeNAS are you running? Be sure to disable physical block size reporting on your FreeNAS extent and also be sure that the logical block size is set to 512. This is the only config ESXi currently supports (4K physical blocks are supported on ESXi 6.5 but I wouldn't use that size on FreeNAS).
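A quick way to double-check what the extent actually reports after making those changes: attach the LUN to any Linux box with an iSCSI initiator and read the block sizes the kernel sees. A rough sketch, assuming the LUN shows up as a /dev/sd* device:

Code:
# Rough sketch: print the logical/physical block sizes the Linux kernel sees
# for each SCSI disk (e.g. after logging in to the FreeNAS target with
# open-iscsi). Handy to confirm the extent really reports 512-byte logical
# blocks after changing the settings above.
from pathlib import Path

for queue in sorted(Path("/sys/block").glob("sd*/queue")):
    logical = (queue / "logical_block_size").read_text().strip()
    physical = (queue / "physical_block_size").read_text().strip()
    print(f"{queue.parent.name}: logical={logical} physical={physical}")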
 

Isuress

Dabbler
Joined
Oct 11, 2016
Messages
14
What version of FreeNAS are you running? Be sure to disable physical block size reporting on your FreeNAS extent and also be sure that the logical block size is set to 512. This is the only config ESXi currently supports (4K physical blocks are supported on ESXi 6.5 but I wouldn't use that size on FreeNAS).

Hello there,

I'm currently running "FreeNAS-9.10.2-U1 (86c7ef5)", which is the latest version.
These are my extent settings for each of the 3 extents.
Part of my testing actually involved using a 1024 logical block size instead of the typical 512. (This was to compare the 1 MB, 2 MB, 4 MB, etc. block sizes provided by ESXi - larger block sizes let your VM HDDs be bigger.)
That was because ESXi 6.5 supports blocks larger than 512. Though you're saying ESXi 6.5 will only support EXACTLY 512 or 4096, and nothing in between?

Also, why do you recommend disabling "physical block size reporting"? I'll do it; I'm just curious about the reasoning.
Is it not supported at all, and will it ONLY work if it's disabled? If that's the case, any reason why?

So I'll change the 1024 logical block extents to 512 and 4096 and maybe test those, as well as disabling physical block size reporting.
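As a side note, the 1024-byte logical block size may also explain the wizard showing 50 GB of a 100 GB LUN: if any layer counts the reported blocks but assumes 512-byte sectors, the result comes out to exactly half. A quick back-of-the-envelope check (just arithmetic, not taken from the ESXi code):

Code:
# Back-of-the-envelope check: a 100 GiB extent exported with 1024-byte logical
# blocks, misread by a layer that assumes 512-byte sectors, shows up as 50 GiB.
GIB = 1024 ** 3
lun_bytes = 100 * GIB
block_count = lun_bytes // 1024      # blocks the target reports at 1024 bytes each
misread_bytes = block_count * 512    # what a 512-byte assumption computes
print(misread_bytes / GIB)           # -> 50.0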
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
These settings have nothing to do with the VMDKs of your VMs. This is about what the iSCSI initiator and ESXi storage core support. Just set your extent to 512 logical and disable physical block size reporting if you want it to work correctly.
 

ricardonadao

Cadet
Joined
Nov 10, 2015
Messages
8
@bigphil is right.

If you check the ESXi VMkernel log, you will see it complain about the block size (there are two supported sizes in 6.5; I remember 4K, but I can't recall the other and am too lazy to go through the logs).

You have 2 options:
- Do what @bigphil mentions and hide the physical block size of the zvol when setting up the iSCSI extent
or
- Set up the zvol with a 4K block, leave 512 in the iSCSI extent, and keep "physical block size reporting" enabled

I used the 2nd option and it seems fine, but both options should be OK to use.

Hope this helps.
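If you do want to dig through vmkernel.log without reading the whole thing, a rough helper like this pulls out the lines that look relevant; the keyword list is a guess, not the exact wording ESXi uses.

Code:
# Rough helper: scan a saved copy of an ESXi vmkernel.log for lines that look
# related to ATS or unsupported block sizes. The keywords are a guess, not the
# exact wording ESXi uses -- adjust them after a first pass through the log.
import sys

with open(sys.argv[1], errors="replace") as log:
    for line in log:
        low = line.lower()
        if "ATS" in line or "block size" in low or "blocksize" in low:
            print(line.rstrip())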
 

Isuress

Dabbler
Joined
Oct 11, 2016
Messages
14
These settings have nothing to do with the VMDKs of your VMs. This is about what the iSCSI initiator and ESXi storage core support. Just set your extent to 512 logical and disable physical block size reporting if you want it to work correctly.

The extent block size could matter, though, depending on whether it changes the block size you're allowed to pick when formatting the datastore. Depending on whether you use a 1, 2, 4, or 8 MB block size, the maximum size of your VMDKs can change. Anyway, I took your advice and made the changes. It works at the moment, though I don't know why.
Thank you though.
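For context on that block size / VMDK size relationship: as far as I recall those limits only applied to the old VMFS-3 block sizes, and VMFS-5/6 (the default on ESXi 6.5) uses a unified 1 MB block with a roughly 62 TB file limit, so it shouldn't matter much here. The old numbers, from memory:

Code:
# From-memory lookup of the old VMFS-3 block size -> max file (VMDK) size
# relationship. VMFS-5/6 (default on ESXi 6.5) uses a unified 1 MB block with
# a ~62 TB file limit, so these numbers are mostly historical.
VMFS3_MAX_VMDK_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size in MB -> max GB

for block_mb, max_gb in VMFS3_MAX_VMDK_GB.items():
    print(f"{block_mb} MB block -> max VMDK ~{max_gb} GB")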

@bigphil is right.

If you check the ESXi VMkernel log, you will see it complain about the block size (there are two supported sizes in 6.5; I remember 4K, but I can't recall the other and am too lazy to go through the logs).

You have 2 options:
- Do what @bigphil mentions and hide the physical block size of the zvol when setting up the iSCSI extent
or
- Set up the zvol with a 4K block, leave 512 in the iSCSI extent, and keep "physical block size reporting" enabled

I used the 2nd option and it seems fine, but both options should be OK to use.

Hope this helps.

I really need to finish setting up my syslog server so I can look at all these logs without having to do any digging.
Thank you for elaborating on my options. It's interesting to know that I CAN have "block size reporting" enabled.
I'll potentially test these settings as well to see if they make a difference. Thanks dude!
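On the syslog note, a bare-bones UDP listener is only a few lines if you just want to dump messages while testing; binding to port 514 needs root, and this is no substitute for a real syslog daemon.

Code:
# Bare-bones UDP syslog listener for testing: prints every datagram it receives.
# Binding to port 514 needs root; use a higher port and point FreeNAS/ESXi at
# that instead if you prefer. Not a replacement for a real syslog daemon.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))
print("listening on udp/514 ...")
while True:
    data, addr = sock.recvfrom(8192)
    print(addr[0], data.decode(errors="replace").rstrip())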
 