iSCSI (fibre channel) reported block size and ESXi 6.7

Status
Not open for further replies.

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I just ran across an odd issue while playing with block sizes and testing UNMAP with vSphere 6.7. I tried to create a zvol-backed LUN with a volblocksize of 128K, as shown below. When setting up the LUN I left physical block size reporting on and was unable to format the LUN with VMFS. I checked the host logs and found the message below. I thought, OK, but wait a second: my main LUN, zvol data/vmblock, also has physical block size reporting on and was (still is) fine, and that's a volblocksize of 16K! So I'm a little lost on this one and looking for any insight someone may have. On the same subject, are there any recommended options for the best performance with VMFS 6?

vmkwarning.log error:
Code:
The Physical block size "131072" reported by the path vmhba1:C0:T0:L3 is not supported. The only supported physical  blocksizes are 512 and 4096

zvol block sizes:
Code:
FreeNAS# zfs get volblocksize data/vmblock
NAME		  PROPERTY	  VALUE	 SOURCE
data/vmblock  volblocksize  16K	   -
FreeNAS# zfs get volblocksize data/UNMAP_TEST
NAME			 PROPERTY	  VALUE	 SOURCE
data/UNMAP_TEST  volblocksize  128K	  -
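For reference, volblocksize can only be set when a zvol is created; it's immutable afterwards. A sketch of how a zvol like the one above would be made (the 1T size is a placeholder, not the actual size used here):

```shell
# -s makes the zvol sparse (thin provisioned), which the later
# UNMAP testing relies on; volblocksize must be set at create time.
zfs create -s -V 1T -o volblocksize=128K data/UNMAP_TEST

# Confirm the property took effect
zfs get volblocksize data/UNMAP_TEST
```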

Not working:
Disable Physical Block Size Reporting ENABLED
Code:
lun "UNMAP_TEST" {
		ctl-lun 3
		path "/dev/zvol/data/UNMAP_TEST"
		blocksize 512
		serial "782bcb42394603"
		device-id "iSCSI Disk	  782bcb42394603				 "
		option vendor "FreeNAS"
		option product "iSCSI Disk"
		option revision "0123"
		option naa 0x6589cfc000000c3b3ff114fb27684794
		option insecure_tpc on
		option pool-avail-threshold 9565751161651
		option rpm 10000
}


Working:
Disable Physical Block Size Reporting DISABLED
Code:
lun "UNMAP_TEST" {
		ctl-lun 3
		path "/dev/zvol/data/UNMAP_TEST"
		blocksize 512
		option pblocksize 0
		serial "782bcb42394603"
		device-id "iSCSI Disk	  782bcb42394603				 "
		option vendor "FreeNAS"
		option product "iSCSI Disk"
		option revision "0123"
		option naa 0x6589cfc000000c3b3ff114fb27684794
		option insecure_tpc on
		option pool-avail-threshold 9565751161651
		option rpm 10000
}
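On the ESXi side, the block sizes the host actually sees for a device can be checked from the shell. If I remember right, 6.5+ has a capacity sub-command for exactly this:

```shell
# Shows logical and physical block size per device as ESXi sees them.
# With "Disable Physical Block Size Reporting" set (pblocksize 0), the
# physical size should fall back to matching the 512-byte logical size.
esxcli storage core device capacity list
```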

 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
ctl.conf vmblock section:
Code:
lun "vmblock" {
		ctl-lun 0
		path "/dev/zvol/data/vmblock"
		blocksize 512
		serial "90e2ba7cedbc00"
		device-id "iSCSI Disk	  90e2ba7cedbc00				 "
		option vendor "FreeNAS"
		option product "iSCSI Disk"
		option revision "0123"
		option naa 0x6589cfc0000000aa77133050ff96e085
		option insecure_tpc on
		option avail-threshold 1759218604441
		option pool-avail-threshold 9565751161651
		option rpm 1
}

 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
On a side note, I have verified that UNMAP works at the VMFS level with sparse zvols. Now I need to test from a guest OS and with sparse file-backed LUNs.
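For the guest-side test, the usual approach (standard tools, not something specific to this setup) is to delete data inside the guest, issue a trim, and then watch the zvol's space usage on the FreeNAS side:

```shell
# Inside a Linux guest (the virtual disk must be thin provisioned and
# the virtual SCSI controller must pass UNMAP through):
fstrim -v /

# On the FreeNAS side, check whether the sparse zvol actually shrank:
zfs get used,referenced data/UNMAP_TEST
```

On a Windows guest the equivalent would be Optimize-Volume with -ReTrim in PowerShell.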
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I hate double negatives.

Are you editing the ctl.conf by hand for use with FC LUNs or using the GUI? Setting option pblocksize 0 is the same as enabling the "Disable Physical Block Size Reporting" option in the GUI, so your examples in the first post should be flipped, no?

Although that doesn't explain why the vmblock LUN, which is reporting its physical block size of 16K, is working. Unless your SSDs are somehow reporting their physical block size (of 4K, or 8K?) through ...
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I hate double negatives.
Me too. I had to re-read that 5 times before I hit post.
Are you editing the ctl.conf by hand for use with FC LUNs or using the GUI?
This is all done in the GUI.
Although that doesn't explain why the vmblock LUN, which is reporting its physical block size of 16K, is working.
This is why I'm puzzled.
Unless your SSDs are somehow reporting their physical block size (of 4K, or 8K?) through
This is all backed by the same pool of 8 3TB 7200 RPM disks arranged as striped mirrors.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
This is all backed by the same pool of 8 3TB 7200 RPM disks arranged as striped mirrors.

This line right here:

Code:
option rpm 1


Is telling the initiator that the "vmblock" LUN is backed by SSD (0=don't report, 1=SSD, >1000=actual spindle RPM) - I know that FreeNAS used to present all LUNs as being SSDs to avoid Windows trying to defragment them.

Perhaps somehow VMware is more tolerant of what it thinks is an SSD with a weird physical block size. Try setting option rpm 1 on "UNMAP_TEST" and see if it suddenly works.
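If it helps confirm the theory, ESXi exposes what it decided about the rotation rate; a LUN presented with option rpm 1 should show up as SSD (naa ID below is the one from the vmblock LUN):

```shell
# "Is SSD: true" in the output means ESXi believed the reported RPM of 1
esxcli storage core device list -d naa.6589cfc0000000aa77133050ff96e085 | grep -i ssd
```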
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
This line right here:

Code:
option rpm 1


Is telling the initiator that the "vmblock" LUN is backed by SSD (0=don't report, 1=SSD, >1000=actual spindle RPM) - I know that FreeNAS used to present all LUNs as being SSDs to avoid Windows trying to defragment them.

Perhaps somehow VMware is more tolerant of what it thinks is an SSD with a weird physical block size. Try setting option rpm 1 on "UNMAP_TEST" and see if it suddenly works.
Thanks, I'll give it a shot later today.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I wouldn't call 128GB of RAM and 12 physical cores per host small ;)
No, this is not fully nailed down. I'm now having issues with paths randomly dropping out (timeouts). I don't think it's the cards, as one LUN will be fine but another will only have one working path... I need to dig into the CTL logs and perhaps step up the logging. :(
 