Zvol block size + zpool sector size + iSCSI extent logical block size

Status
Not open for further replies.

jamesx

Cadet
Joined
Nov 18, 2018
Messages
9
I have created a RAIDZ1 zpool with 3x4TB drives on a 16GB RAM server. I am using HDDs but will present the LUN as an SSD to the VMware ESXi 6.5 initiator.

I am new to FreeNAS and would like to ask for your help building my server. I have been reading a lot about sector size and block size, and it's quite confusing to me. I know that a 4k zpool sector size is the better way to go, but I am not sure what I should set on the zvol and the iSCSI extent.

For a 4k drive, should I set...
zvol block size = 4k, or just use the default?
iSCSI extent = 512 or 4096?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I am using HDDs but will present the LUN as an SSD to the VMware ESXi 6.5 initiator.
Any reason for this?
I know that a 4k zpool sector size
Forget about the "zpool sector size". The main thing you get to tweak is the ZFS record/block size, and this has (almost) nothing to do with the underlying drives. If there were one "best" answer, it would already be the default. For the zvol, it depends entirely on the anticipated access patterns. Lots of small IO (think databases)? Use a similarly small block size. Large IO (think backups)? Use the biggest option (128k). Storing virtual machine OSes (think C: drives)? Use 32k or 64k. IO is read and written in units of the block size (this is a simplification). If you're working with small chunks of data, you don't want to have to read 128k just to get 4k of data. Conversely, you don't want to read 32 separate 4k chunks just to assemble 128k of data.
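
As a rough sketch of what that looks like in practice (the pool name tank, the zvol name, and the size below are placeholders, not taken from this thread), a 64k zvol for VM storage could be created from the FreeNAS shell like this:

    # Create a sparse 500G zvol with a 64k volume block size.
    # Note: volblocksize can only be set at creation time, not changed later.
    zfs create -s -V 500G -o volblocksize=64K tank/vmware-zvol

The same setting corresponds to the "Block size" field when creating a zvol in the FreeNAS GUI.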

iSCSI extent = 512 or 4096?
In VMware 6.5 this should be set to 512 for compatibility reasons. You could use 4096, aka 4k, but it won't make a difference.
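
If you want to confirm what ESXi actually ends up seeing once the extent is configured, the ESXi shell can list the block sizes per device; the command below exists in 6.x, though the exact output columns may vary by build:

    # Show the logical and physical block size reported for each device
    esxcli storage core device capacity list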
 

jamesx

Cadet
Joined
Nov 18, 2018
Messages
9
Any reason for this?

Sorry, I have a previous post about this, and you guys helped me decide which route to go: I am going with the SSD setting and disabling defragmentation in the Windows guest OS.

Forget about the "zpool sector size". The main thing you get to tweak is the ZFS record/block size, and this has (almost) nothing to do with the underlying drives. If there were one "best" answer, it would already be the default. For the zvol, it depends entirely on the anticipated access patterns. Lots of small IO (think databases)? Use a similarly small block size. Large IO (think backups)? Use the biggest option (128k). Storing virtual machine OSes (think C: drives)? Use 32k or 64k. IO is read and written in units of the block size (this is a simplification). If you're working with small chunks of data, you don't want to have to read 128k just to get 4k of data. Conversely, you don't want to read 32 separate 4k chunks just to assemble 128k of data.

I am using HGST HUS724040ALA640 disks. Unfortunately, FreeNAS detected my 3x4TB disks as 512 logical/physical, so I guess I will not get 4k performance. Can I manually set ashift=12 to fix this?
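
For reference, here is a hedged sketch of how to check this from the FreeNAS shell (tank is a placeholder pool name). Note that ashift is fixed per vdev when the pool is created, so raising the sysctl only affects pools or vdevs created afterwards:

    # Show the ashift of each vdev in an existing pool
    # (FreeNAS keeps its pool cache in a non-default location)
    zdb -U /data/zfs/zpool.cache -C tank | grep ashift

    # Make new vdevs use at least 4k sectors (ashift=12)
    sysctl vfs.zfs.min_auto_ashift=12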

I am looking for a performance-oriented setup and am going to use FreeNAS with a VM that will perform backups, plus VMs for some other things. What zvol block size would be okay?
 

jamesx

Cadet
Joined
Nov 18, 2018
Messages
9
I actually assumed that FreeNAS would detect my drives as 4k, but after running camcontrol identify just now, it turned out they are not.
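
For anyone checking their own drives, the relevant line can be pulled out like this (ada0 is a placeholder for whichever device you query; a 512n drive reports 512 for both values):

    # Query the drive's reported logical/physical sector sizes
    camcontrol identify ada0 | grep -i sector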
 

Attachments

  • camcontrol identify.jpg (58.6 KB)

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
FreeNAS only knows what the drive tells it.
 

jamesx

Cadet
Joined
Nov 18, 2018
Messages
9
Yes, you are right, kdragon75. After searching the internet for this drive, it seems it is not really 4k but 512 native instead.

I guess I have to accept it. Could I ask for help getting decent performance out of these 512n drives?


*I am setting up FreeNAS on a hosted server. The hardware is preconfigured, so I will not be able to replace the disks. I will probably get new hardware next year with 4k drives and then migrate the data to fix this issue.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Yes, you are right, kdragon75. After searching the internet for this drive, it seems it is not really 4k but 512 native instead.

I guess I have to accept it. Could I ask for help getting decent performance out of these 512n drives?


*I am setting up FreeNAS on a hosted server. The hardware is preconfigured, so I will not be able to replace the disks. I will probably get new hardware next year with 4k drives and then migrate the data to fix this issue.
4kn drives are about capacity, not performance. I can't help tune performance without knowing either exactly the type of performance you want or the exact workload. And no, "some VMs on ESXi" is not a description of a workload.
 

jamesx

Cadet
Joined
Nov 18, 2018
Messages
9
Hi, I have successfully attached FreeNAS to VMware over iSCSI, but I encountered some issues.

*Showing Normal - Degraded
- I understand this is about having only a single path between the host and the SAN.

*"The Physical block size "16384" reported by the path vmhba64:C0:T0:L0 is not supported. The only supported physical blocksizes are 512 and 4096" in the vmkernel log
- I disabled physical block size reporting in the iSCSI extent. The change did not do anything; it was still logging.
- I rebooted FreeNAS and waited until it had initialized, then clicked rescan iSCSI in VMware, but it did not detect FreeNAS. I checked VMware and found that the datastore I had created from the FreeNAS LUN had gone missing.
- I rebooted VMware and checked the connections between VMware and FreeNAS, but there was still no connection. So I rebooted again, and this time VMware and FreeNAS could see each other.

My question is: why would VMware not detect FreeNAS when FreeNAS was rebooted the first time? Is there a script I have to put in FreeNAS so it will connect to VMware automatically?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
*Showing Normal - Degraded
- I understand this is about having only a single path between the host and the SAN.
This is correct.
*"The Physical block size "16384" reported by the path vmhba64:C0:T0:L0 is not supported. The only supported physical blocksizes are 512 and 4096" in the vmkernel log
- I disabled physical block size reporting in the iSCSI extent. The change did not do anything; it was still logging.
Once a LUN is detected, VMware will not "detect" a "physical" block size change. There may be some esxcli magic to force a rescan of the LUNs, but a reboot works fine. Under normal circumstances, the physical block size of a LUN is not expected to change.
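
For completeness, the rescan itself can be forced from the ESXi shell with something like the following (vmhba64 is the adapter named in the log above; as noted, this still will not pick up a block size change on an already-claimed LUN):

    # Rescan one adapter for new devices and paths
    esxcli storage core adapter rescan --adapter vmhba64

    # Or rescan all adapters
    esxcli storage core adapter rescan --all
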
- I rebooted FreeNAS and waited until it had initialized, then clicked rescan iSCSI in VMware, but it did not detect FreeNAS. I checked VMware and found that the datastore I had created from the FreeNAS LUN had gone missing.
I would have to see your ESXi vmkernel log to know what happened. Perhaps you were in the middle of a timeout on the old connection? Just a guess.
- I rebooted VMware and checked the connections between VMware and FreeNAS, but there was still no connection. So I rebooted again, and this time VMware and FreeNAS could see each other.
Same as above.
My question is: why would VMware not detect FreeNAS when FreeNAS was rebooted the first time? Is there a script I have to put in FreeNAS so it will connect to VMware automatically?
FreeNAS does not connect to ESXi; ESXi connects to FreeNAS using the iSCSI protocol. You should look into the VMware documentation and troubleshooting guides. ESXi is complex, advanced software that requires an in-depth understanding of all related systems to troubleshoot.
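
A quick way to see that direction of connection is to list the sessions the ESXi initiator has opened (run from the ESXi shell; the output naturally depends on your target configuration):

    # List active iSCSI sessions from the ESXi initiator's side
    esxcli iscsi session list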
 

jamesx

Cadet
Joined
Nov 18, 2018
Messages
9
I would have to see your ESXi vmkernel log to know what happened. Perhaps you were in the middle of a timeout on the old connection? Just a guess.

Yes, it showed a timeout message, but the timeout persisted even after FreeNAS was up and running. I tried to detach FreeNAS, but VMware would not let me, so I restarted the server. I have attached the vmkernel log.

FreeNAS does not connect to ESXi; ESXi connects to FreeNAS using the iSCSI protocol.

Yes, I agree; VMware tried many times to connect to FreeNAS.
 

Attachments

  • vmkernel-logs.txt (64.1 KB)