Wrong free space reported in vSphere storage. iSCSI - ZFS

Status
Not open for further replies.

mauroreggio

Dabbler
Joined
Jun 30, 2014
Messages
13
Hi all and thank you for your attention.

My configuration is simple:
FreeNAS 9.2.1.5 serving as iSCSI storage for VMware
6 x SAS disks and a ZFS volume that is configured as the iSCSI target.

I have a problem with volume space: VMware shows 700 GB of free space on the 1.5 TB volume (the one I described in my configuration) ... but FreeNAS reports 97% used space and goes into an alert state.

I have done many operations on this storage volume: creating, moving, and deleting machines ... it's as if the deleted files are never actually deleted. But I don't see the deleted machines in "browse datastore" (and this is correct).

Is this a known issue?

Thanks,
Mauro.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, it's a known issue called PEBCAK.

First off, when you're using iSCSI, creating your iSCSI datastore essentially creates a single large file or volume filled with zeroes. That space is allocated. Whether or not VMware has decided to store something within that space is not something FreeNAS knows or cares about. You've asked for a 1.5TB volume, FreeNAS created it, and that space is used from the FreeNAS point of view. FreeNAS is not going to open up the file or zvol that you're using for VM storage, try to figure out the contents, and report some other "free" space number to you. It'd have to know how to cope with NTFS, HFS, ext3, GrinchFS, etc. And to what point? If you told it to make a volume that takes up most of your disk, then it is doing what you asked.
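As a rough illustration (the pool/zvol name below is a placeholder, not from this thread), you can see this from the FreeNAS shell -- the zvol backing the datastore carries its full size and reservation no matter what VMware has actually stored in it:

Code:
# "tank/vmstore" is a hypothetical zvol backing the iSCSI extent -- substitute your own name.
zfs get volsize,used,refreservation,compressratio tank/vmstore
# volsize/refreservation reflect the size you asked for; "used" grows as VMware
# writes blocks and never shrinks unless those blocks are freed on the ZFS side.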

But the real problem here is that you've filled a pool to 97%. For VM use, a pool should never be more than 60% full, because fragmentation will essentially take you on a performance joyride to suckyland. Read the link.
 

mauroreggio

Dabbler
Joined
Jun 30, 2014
Messages
13
Thanks, jgreco.
When you say "But the real problem here is that you've filled a pool to 97%", do you mean that when I created the ZVOL on the ZFS volume I used 97% of the free space?
Mauro
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
When you first create a ZVOL it reserves the specified space but does not really use it -- that is not a problem. However, every ZVOL block ever written by VMware is counted as used. If you have snapshots, you can easily end up with even more used blocks than the ZVOL size.

The right way to free space is to have VMware tell FreeNAS, via the UNMAP command over iSCSI, which ZVOL blocks are not in use at the moment. That is not done automatically, but it can be triggered manually. If those blocks are not held by snapshots, they will be freed and you should get your space back. The problem is that the default iSCSI target in FreeNAS does not support UNMAP. But if you update to 9.2.1.7 or the next 9.2.1.8 and switch to the experimental iSCSI target, that should be possible.
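For reference, this is roughly how the reclaim gets kicked off by hand from the ESXi shell (the datastore label is a placeholder) -- and it only helps once the FreeNAS side actually honors UNMAP, as described above:

Code:
# ESXi 5.5: ask VMFS to issue UNMAP for the free blocks of a datastore.
# "iscsi-ds1" is a hypothetical datastore label -- use your own.
esxcli storage vmfs unmap -l iscsi-ds1

# ESXi 5.0/5.1 equivalent: run from inside the datastore, reclaiming ~90% of its free space.
cd /vmfs/volumes/iscsi-ds1
vmkfstools -y 90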

A partial alternative solution, I think, could be to enable compression on the ZVOL (if not enabled yet) and have VMware create and zero-initialize a large thick-provisioned disk on the ZVOL, then delete it without using it. Blocks written with zeroes compress down to almost nothing, which should give a result close to UNMAP.
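A minimal sketch of that workaround, assuming a hypothetical zvol tank/vmstore and a datastore labelled iscsi-ds1:

Code:
# On FreeNAS: enable compression on the zvol backing the datastore (lz4 is cheap).
zfs set compression=lz4 tank/vmstore

# On the ESXi shell: write zeroes over the datastore's free space by creating a large
# eager-zeroed thick disk, then delete it. Size it to roughly fill the free space.
mkdir /vmfs/volumes/iscsi-ds1/zerofill
vmkfstools -c 500G -d eagerzeroedthick /vmfs/volumes/iscsi-ds1/zerofill/zero.vmdk
vmkfstools -U /vmfs/volumes/iscsi-ds1/zerofill/zero.vmdk
rmdir /vmfs/volumes/iscsi-ds1/zerofill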

But again, check that you do not have snapshots. You will not get your space back if snapshots hold the old data.
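To check (the dataset name again is a placeholder):

Code:
# List snapshots on the zvol; any of these pin blocks that VMware may have since freed.
zfs list -t snapshot -r tank/vmstore
# Destroying a snapshot you no longer need releases the space it was holding:
# zfs destroy tank/vmstore@some-old-snapshot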
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Please do not talk about releases that don't exist (and may never exist). It just confuses people into thinking they might already exist. This is why we have added the "latest version" to the top of the forums.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And none of that actually fixes the root issue which I outlined. There are numerous ways to try to mitigate the damage done by poor decisions, as mav@ suggests, but it is far better not to allocate too much space in the first place.

When you say "But the real problem here is that you've filled a pool to 97%", do you mean that when I created the ZVOL on the ZFS volume I used 97% of the free space?
Mauro

No. I mean that if you type "zpool list" and "CAP" reads more than 60%, you are saying "Fragmentation, please come and punish me, hurt me, I'm a masochistic type who loves taking a beating." And 60% is pretty aggressive; it might be as little as 10%-20%. To reduce fragmentation effects on a pool with many small block writes, you might actually need to have 15TB of space in order to provide 1.5TB of usable space that is resistant to fragmentation - that's what 10% means.
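For example, from the FreeNAS shell (the pool name is a placeholder):

Code:
# Check capacity at the ZFS layer -- the CAP column here, not VMware's datastore view,
# is the number that matters. "tank" is a hypothetical pool name.
zpool list tank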

At 97% you've not only blown past the 60% guideline I've given, but also the 80% past which a pool really shouldn't be filled, and then up past the 95% a pool SHOULD NEVER EVER EXCEED. Once you're getting up there, then ZFS is really going to fall apart for even basic plain file storage.

VM storage is among the most challenging of workloads for a filesystem and you really need to do it right or do something else entirely.
 

mauroreggio

Dabbler
Joined
Jun 30, 2014
Messages
13
No. I mean that if you type "zpool list" and "CAP" reads more than 60%, you are saying "Fragmentation, please come and punish me, hurt me, I'm a masochistic type who loves taking a beating."

:) ... maybe I like that!!! :)

I understand, jgreco, but as I said in my first post in this thread, I'm in this situation because I always watched the VMware storage status, and there I'm at 50% - 60% used space.

Thank you very much.
Mauro.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, not VMware's reported free space. I meant exactly what I said: the output from "zpool list". This is the layer at which you can find out the free space ZFS has to work with.

If you create a zvol and create a vmfs datastore on it using 95% of the pool, and the vmfs datastore is empty and reports 0% used, your pool is still catastrophically full. Disregard what VMware is telling you and pay attention to what the filer is telling you.
 