[SOLVED] ZVOL reported size in ESXi


IceBoosteR

Hello guys,

I have a quick question, because I was not able to figure this out by myself.
I have an SSD-only pool just for datastores. Within it there is a ZVOL called Poseidon-SSD, with a compression ratio of 2.09x and about 55GB used. On the ESXi side I connect to that ZVOL on FreeNAS via iSCSI.
Now ESXi reports the ZVOL as 500GB capacity (that's correct) but 410GB used. That is wrong in two ways:
1. Only 55GB is occupied on the ZVOL (see the check below). How do I manage to push this information to ESXi as well?
2. Even allowing for the 2.09x compression, the used space should be about 120GB (55GB × 2.09), not 410GB. How could this happen?
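For reference, here is one way to confirm the ZVOL-side numbers from the FreeNAS shell (a sketch; the pool name "tank" is an assumption, substitute your own):

Code:
# volsize = provisioned size; used/referenced = space actually allocated
zfs get volsize,used,referenced,compressratio tank/Poseidon-SSD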

Does anyone have ideas or experience with this?
I don't want to overprovision the SSDs, as that is a really bad idea...

Rgds,
Ice.
 

Attachments

  • paradox.png (screenshot, 30 KB)

bigphil

What version of ESXi are you running? There are a few things to check. First, make sure you're using a device extent and not a file extent in FreeNAS for your LUN. Because it's block storage, FreeNAS doesn't know what happens inside the LUN, so it doesn't know when files are deleted and space is freed. You need to run space reclamation/UNMAP on the VMFS datastore, which sends SCSI UNMAP commands back to the storage to clear the space. There is also support for in-guest UNMAP, but there are requirements for that (OS support, ESXi host enabled for it, correct allocation unit size). In my testing of guest UNMAP support in Windows 2012+, the NTFS allocation unit size has to be 32K or higher in the guest. Support has been added to Linux as well, but you need to check which flavors and versions support it.
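For a quick manual test, reclamation can be triggered and the in-guest UNMAP setting checked from the ESXi shell (a sketch; the datastore label is an assumption, and on VMFS6 reclamation normally runs automatically in the background):

Code:
# Ask VMFS to issue SCSI UNMAP for free blocks on this datastore
esxcli storage vmfs unmap --volume-label=Poseidon-SSD
# Check whether the host honors in-guest UNMAP (1 = enabled)
esxcli system settings advanced list --option=/VMFS3/EnableBlockDelete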
 

IceBoosteR

Hi,

I am running the latest version of ESXi 6.5.
The volume I created as a datastore is using VMFS6.
But does UNMAP have to happen on the ESXi host? I think it's hard to achieve inside the virtual machines because of all the different OS types.
Stupid question: where do I choose whether FreeNAS uses a device extent or a file extent?

Thank you!
 

bigphil

There are only two ways to do it... via datastore space reclamation (automatic and low priority on ESXi 6.5) and via guest UNMAP. There was a bug fix a while back that fixed an issue with VMware Tools and unmap granularity, so any allocation unit size will now work, although the biggest savings are still seen with 32K or 64K. Make sure your ESXi host is fully patched with the latest updates. Check your VMFS6 datastore settings to make sure the space reclamation settings are OK. It's on by default, but the datastore has to be new... it can't be upgraded from a previous version of VMFS! Check this link for more info on guest UNMAP. Also, I think if you set up your extent with a zvol, it's device based no matter what... I think.
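To verify what a VMFS6 datastore is actually configured for, the reclamation settings can be queried from the ESXi shell (a sketch; the datastore label is an assumption):

Code:
# Shows the reclaim granularity and priority (default priority is "low")
esxcli storage vmfs reclaim config get --volume-label=Poseidon-SSD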
 

IceBoosteR

Hi, the datastores are new, freshly created with VMFS6, not upgraded. I did not change anything, so this is all default.
Should I maybe post the output from the ESXi datastore?
 

bigphil

You can check to see if you have any UNMAP errors from the ESXi host. SSH to the host and run the commands below:
Code:
esxtop
u        (switch to the disk device view)
f        (choose a and o, then press Enter)

upload_2017-10-28_12-21-53.png (screenshot of the esxtop VAAI fields)


You'll now see the VAAI stats for the disks. Pay attention to DELETE and DELETE_F. These are the unmap stats (DELETE_F counts failed unmaps). If you have a lot of DELETE_F, then something is wrong with your setup. So get this console up, put some test files on the datastore, and then delete them. You have to delete them from the vSphere web client (not from some other utility, as those may not send proper unmap commands to the ESXi host). If you have a guest that meets the requirements for in-guest UNMAP support and your host is configured for it, you can delete files from within the guest to test as well.
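If you test from a Windows guest, it is worth checking first whether the guest OS sends delete notifications (TRIM/UNMAP) at all (a sketch; run inside the guest in an elevated command prompt):

Code:
:: DisableDeleteNotify = 0 means delete notifications (TRIM/UNMAP) are enabled
fsutil behavior query DisableDeleteNotify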
 

IceBoosteR

Hello bigphil,

Now I've got some time to do this. Thanks for your answer.
I have attached the esxtop output; from that, everything looks fine.
esxi top.png

I have also attached a picture of the ESXi view, where you can see how much space is used.
volumes.png

If you have any other ideas, you're welcome to share :)

Edit: I moved over to the CLI and navigated into the datastore. With "du -sh *" I listed the sizes of the folders, and it seems like all virtual disks are listed at their full size, like:
Code:
20.0G  CentOS 7

even though the volumes were created thin-provisioned.
I do not understand how this could happen from the ESXi side.
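One way to see this per disk (a sketch; the path is illustrative): on the ESXi shell, ls reports the provisioned (logical) size of a -flat.vmdk, while du reports the blocks actually allocated on VMFS, so a healthy thin disk should show a smaller du value than ls:

Code:
ls -lh "/vmfs/volumes/Poseidon-SSD/CentOS 7/CentOS 7-flat.vmdk"
du -h "/vmfs/volumes/Poseidon-SSD/CentOS 7/CentOS 7-flat.vmdk"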
So I also looked into the VM's information.
As you can see in the image
vm 2.png

It says Thin provisioned: "No".
When I go into the VMs options:
vm 1.png

In English, it says roughly: provisioned with thin provisioning, eagerly zeroed.
But I am sure I created all of these as thin-provisioned.
Maybe this is the root of the problem here? :/

Edit 1000:
I moved this datastore's contents when I migrated from HDD to SSD. ESXi has the problem that when a datastore is moved like that, the disks are not created as thin disks; they are created THICK. So they all need to be re-created, following this helpful doc:
https://theitbros.com/convert-thick-provision-lazy-zeroed-disk-to-thin-vmware-esxi/
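The linked procedure essentially clones each disk to a thin copy with vmkfstools and swaps it in (a sketch; the paths are illustrative and the VM must be powered off first):

Code:
# Clone the thick vmdk to a thin-provisioned copy
vmkfstools -i "CentOS 7.vmdk" -d thin "CentOS 7-thin.vmdk"
# Then re-point the VM at the thin copy and delete the original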
 