Space utilization on thin-provisioned device

Status
Not open for further replies.

qlx309

Dabbler
Joined
Nov 8, 2017
Messages
10
Hi All

New to the forum, so this is my first post :) I've used FreeBSD before, but this is my first time using FreeNAS.

I just started using FreeNAS 11 as iSCSI shared storage for my vSphere 6.5 cluster. I have the following volumes configured in FreeNAS:

RAID 10 comprised of 4 Samsung SM863 480GB SSDs
RAID 1 comprised of 2 Samsung Pro 850 512GB SSDs
Single Western Digital 4TB SATA drive

Since I want to use VAAI with vSphere, I created each of the above as a zvol and ticked the sparse volume option.
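
I used the FreeNAS GUI for this, but as I understand it the equivalent sparse zvol could be created from the shell with something like the following (the name and size match my sm863 setup, but treat it as a sketch):

Code:
# create a thin-provisioned (sparse) 1.5TB zvol for iSCSI
# -s skips the refreservation, so pool space is only consumed as blocks are written
zfs create -s -V 1.5T sm863/iscsi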

It seems to be working well except I get these warning messages in the vmkernel.log on each of the ESXi hosts:

2017-11-08T21:56:44.833Z cpu15:66376)WARNING: ScsiDeviceIO: 2728: Space utilization on thin-provisioned device naa.6589cfc0000006f22a5c1eb41598028b exceeded configured threshold

This error seems to occur every 15min.

I also get these every 5min:

2017-11-08T21:56:44.833Z cpu15:66376)NMP: nmp_ThrottleLogForDevice:3617: Cmd 0x2a (0x439501012ec0, 129131) to dev "naa.6589cfc0000006f22a5c1eb41598028b" on path "vmhba64:C0:T1:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x6 0x38 0x7. Act:NONE

If I run zfs list I have loads of free space:


Code:
NAME  USED  AVAIL  REFER  MOUNTPOINT
freenas-boot  1.13G  114G  176K  none
freenas-boot/.system  28.7M  114G  176K  legacy
freenas-boot/.system/configs-2f1f44bcdd194541bc55d43e518f686d  764K  114G  764K  legacy
freenas-boot/.system/cores  636K  114G  636K  legacy
freenas-boot/.system/rrd-2f1f44bcdd194541bc55d43e518f686d  26.2M  114G  26.2M  legacy
freenas-boot/.system/samba4  204K  114G  204K  legacy
freenas-boot/.system/syslog-2f1f44bcdd194541bc55d43e518f686d  756K  114G  756K  legacy
freenas-boot/ROOT  1.09G  114G  136K  none
freenas-boot/ROOT/Initial-Install  8K  114G  1.08G  legacy
freenas-boot/ROOT/default  1.09G  114G  1.08G  legacy
freenas-boot/grub  7.82M  114G  7.82M  legacy
pro512  155G  302G  88K  /mnt/pro512
pro512/iscsi  155G  302G  155G  -
sm863  254G  606G  88K  /mnt/sm863
sm863/iscsi  254G  606G  254G  -
sm863/jails  88K  606G  88K  /mnt/sm863/jails
wd4tb  524G  3.00T  88K  /mnt/wd4tb
wd4tb/iscsi  524G  3.00T  524G  -



I have tried changing the "Pool Available threshold" in the global configuration, and changing it on each extent to really low and really high values, as well as setting it to 50% or zero, but I still keep getting these warnings despite having lots of free disk space in each volume in FreeNAS.
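
In case it's useful, this is how I've been sanity-checking the provisioning from the shell (the dataset name is from my setup; volsize is what ESXi sees, while used/avail are the real pool numbers):

Code:
# compare the advertised zvol size against actual pool usage
zfs get volsize,refreservation,used,compressratio sm863/iscsi
zfs list -o name,used,avail sm863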

What am I missing here or doing wrong and how can I stop so many warnings being logged all the time?

Thanks for any help! :)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
First of all, Welcome to FreeNAS Forums.

You are lacking some information which is required by our forum rules, such as your system makeup and the configuration you are using. Since you are using ESXi, you also need to add additional information such as how your VM is configured.

You are using the terms RAID 10 and RAID 1, but ZFS does not use those layouts, so if you mean a ZFS layout, please use that terminology to ensure there is no confusion. By the way, I don't make many assumptions, as it's a bad way to troubleshoot a problem.

Based on the information you provided, you have created a thin-provisioned drive rather than pass-through (pass-through is the highly recommended way to go), and I'll take a stab here and say you likely over-provisioned it. In other words, you created a [for example] 200GB thin drive but only have 100GB of free space. Why do I say this? Because your error messages are from ESXi, not FreeNAS.
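
You can check for that kind of over-provisioning yourself; something like this should show each zvol's advertised size next to the pool's actual free space (I haven't run this against your setup, so treat it as a sketch):

Code:
# list every zvol with its advertised size, actual usage, and remaining pool space
zfs list -t volume -o name,volsize,used,avail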

Also, our forum rules state to take a minute and try an internet search for your issue. I took the following line from your error message, "Space utilization on thin-provisioned device exceeded configured threshold", and found an answer in the first hit (after the ad) on Google.

I'm not trying to be a jerk here, but rather to point out that you could have found the solution to your problem yesterday if you had only done a quick search before posting.

Additionally if you do plan to run iSCSI on FreeNAS, ensure you have lots of RAM for the FreeNAS VM and lots of storage space.

If you have further questions about FreeNAS or how to use FreeNAS on ESXi, please feel free to ask, but try a search first. I have a 19+ page thread here talking about my endeavours with ESXi and FreeNAS; lots of good info there, but it can be a long read.

Good Luck and hope things work out the way you desire.
 

qlx309

Dabbler
Joined
Nov 8, 2017
Messages
10
Hey and thanks for the welcome! Apologies for not providing enough info in my OP. I'll try harder now :)

Firstly my config:

FreeNAS 11 (I have this installed on bare metal - no VM):
  • Supermicro X10SL7-F
  • Xeon(R) CPU E3-1230 v3 @ 3.30GHz
  • 32GB DDR3 ECC RAM
  • 4 x Samsung Enterprise 480GB SATA SSDs
  • 2 x Samsung Pro 840/850 512GB SATA SSDs
  • 2 x Samsung Pro 840 120GB SATA SSDs (boot)
  • 500W Seasonic PSU
  • Mellanox ConnectX2 - dual port

ESXi 6.5 (two of these):
  • Supermicro 5028D-TN4T
  • 128GB RAM
  • Xeon D 1541 8 core
  • Mellanox ConnectX2 single port

So when I created my pools in FreeNAS, I used striped mirror vdevs for the 4 Samsung SM863 drives and a mirrored vdev for the Samsung 840/850 drives (a CLI equivalent is sketched after the output below):

Code:

  pool: pro512
 state: ONLINE
  scan: scrub repaired 0 in 0h11m with 0 errors on Sun Nov  5 15:18:38 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        pro512                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/1d2e496c-c1fe-11e7-b597-00259086cd5c  ONLINE       0     0     0
            gptid/1d937bbb-c1fe-11e7-b597-00259086cd5c  ONLINE       0     0     0

errors: No known data errors

  pool: sm863
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        sm863                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/7fdc5a05-c1ab-11e7-b578-00259086cd5c  ONLINE       0     0     0
            gptid/800f475b-c1ab-11e7-b578-00259086cd5c  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/9fbcab35-c1ab-11e7-b578-00259086cd5c  ONLINE       0     0     0
            gptid/9ff0ba2d-c1ab-11e7-b578-00259086cd5c  ONLINE       0     0     0


Code:
root@san:/var/log # zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
pro512  137G  320G  88K  /mnt/pro512
pro512/iscsi  137G  320G  137G  -
sm863  333G  527G  88K  /mnt/sm863
sm863/iscsi  333G  527G  333G  -
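
As mentioned above, here is roughly what those layouts correspond to if built from the CLI instead of the GUI (the device names are placeholders; I actually created the pools in the FreeNAS web interface):

Code:
# "RAID 10" equivalent: two mirrored pairs, striped
zpool create sm863 mirror da0 da1 mirror da2 da3
# "RAID 1" equivalent: a single mirrored pair
zpool create pro512 mirror da4 da5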


I should mention again that FreeNAS is installed on bare metal. Although I run vSphere 6.5 my storage is not virtualised.

So the SM863 pool I created has 860GB of usable space, but when I created the zvol I set the size to 1.5TB and ticked the sparse volume option, which I needed for VAAI support. Why did I over-provision like this? Because I get about 2.00x compression on my VMs. If I present an 860GB LUN to ESXi, I can only use that amount of space (I have 1TB of VMs in total), but with compression those VMs only take up about 500GB on the ZFS pool, so over-provisioning lets me run more VMs in the same amount of physical space.
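
To put numbers on that, this is the kind of check I do (the properties are standard ZFS ones; the figures in the comment are roughly mine):

Code:
# logicalused = space the VMs think they use; used = what they consume after lz4
# e.g. ~1T logical at ~2.00x compressratio comes out to ~500G actually used
zfs get logicalused,used,compressratio sm863/iscsi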

And yes, believe me, I Googled this for hours yesterday and did read the VMware KB, but it still wasn't quite adding up for me (hence this post). I am trying to find out where and/or how I can stop the excessive logging in the ESXi vmkernel.log. I thought the thresholds you configure in FreeNAS had something to do with this logging in ESXi? Or have I got this completely wrong? Or do I just need to ignore the warnings? I guess what I am trying to ask is: is this an actual issue, or just something I need to live with/ignore?

I think I have enough RAM in FreeNAS (32GB) considering I am not using dedupe (but I am using lz4 compression).

Thanks for the help. Hope I haven't rambled on for too long here and that I make a bit of sense! :cool: I'll hunt down your 19 page forum post...
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Thanks for posting the additional information and clarification.

I'm not going to be able to help you much with this problem, as I cannot replicate the issue with the hardware I have available to me, and I'm only speculating at this point on how ZFS sparse volumes work. I've read a few things about them, but ESXi being the one tossing you the error messages is throwing me for a loop.

Here is what I think I understand right now...

1) FreeNAS is on bare metal and you have a set of four SSDs over-provisioned.
2) FreeNAS is not giving you a failure message.
3) The FreeNAS share is being used by ESXi as an iSCSI share.
4) ESXi is throwing error messages that you are out of storage space on an over-provisioned drive.

So what I don't understand is this... The error messages from ESXi are specific to an over-provisioned VM drive on ESXi, not a thin drive on FreeNAS. Well that is what I think.

Question: What is this device? (look this up so you know exactly which device this is) naa.6589cfc0000006f22a5c1eb41598028b
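
If it helps, I believe something like this on the ESXi host should map that naa identifier to a datastore (I'm no ESXi expert, so double-check the output):

Code:
# show which VMFS datastore sits on which device
esxcli storage vmfs extent list
# details for the specific device from the log
esxcli storage core device list -d naa.6589cfc0000006f22a5c1eb41598028b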
 

qlx309

Dabbler
Joined
Nov 8, 2017
Messages
10
1) FreeNAS is on bare metal and you have a set of four SSDs over-provisioned.
2) FreeNAS is not giving you a failure message.
3) The FreeNAS share is being used by ESXi as an iSCSI share.
4) ESXi is throwing error messages that you are out of storage space on an over-provisioned drive.

So what I don't understand is this... The error messages from ESXi are specific to an over-provisioned VM drive on ESXi, not a thin drive on FreeNAS. Well that is what I think.

Question: What is this device? (look this up so you know exactly which device this is) naa.6589cfc0000006f22a5c1eb41598028b

Everything you have said (points 1 to 4) is correct. To expand: one pool is 860GB but I have set the zvol size to 1.5TB, and the other is 470GB but the zvol size is set to 1TB. I am getting about 1.90x compression on the sm863 pool and 1.66x on the pro512 pool.

I'm guessing that ESXi is being informed via VAAI/FreeNAS that there is a space issue due to the thin drive. If I'm decoding the sense data in the log correctly, 0x6 0x38 0x7 looks like a UNIT ATTENTION for "thin provisioning soft threshold reached", which would mean the warning originates on the target side. I'm just wondering how you control the warnings and the level of logging.

Device naa.6589cfc0000006f22a5c1eb41598028b is the sm863 pool, the one with the 4 Samsung 480GB SSDs set up as striped mirrors.

I should mention that the performance I am getting from this setup with FreeNAS is simply fantastic! The only concern/confusion I have is the volume of warnings being logged by ESXi in vmkernel.log regarding thin disk space utilisation.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
What I think might be happening (just pure speculation) is that your vdev is trying to expand. Look at it this way and maybe it will make sense: your drive is formatted to 1.5TB, and FreeNAS reports that much space. So ESXi starts to write data, and of course it would like to keep things contiguous, so it keeps writing, not reusing fragmented areas but rather nice new clean areas, and this causes the zvol to try to expand. But it can't, because it has likely done all the expanding it can, since you really only have 860GB.

I wish I had something wise to say to you, but I don't. At this point only ESXi is giving you error messages, not FreeNAS. I don't understand how you are using this iSCSI device within ESXi, and I'm thinking that could be the issue. One experiment you could do is create the same iSCSI share on a large hard drive, move your VMs there, and test it out both with the sparse parameter and without, if possible (see the sketch below). The goal is to figure out whether the error is FreeNAS or ESXi.
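
Something along these lines for the two test zvols, just to compare (the pool is your wd4tb one, but the dataset names are only examples):

Code:
# sparse (thin) test zvol -- should reproduce the warnings if they are thin-related
zfs create -s -V 3T wd4tb/iscsi_sparse
# fully reserved (thick) test zvol for comparison
zfs create -V 3T wd4tb/iscsi_thick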

I think you have some troubleshooting in your future or you will just live with the warning messages.

Good luck, and if you figure it out, please update this thread; I'd like to know what the issue was.
 

qlx309

Dabbler
Joined
Nov 8, 2017
Messages
10
Hey, thanks for the update.

I already tried creating a new iSCSI share on a large hard drive. I used a single 4TB drive and created a ZFS pool on it. When I created the zvol for iSCSI, I set its size to 4TB as well, although I have only tested with the sparse setting (as I want to make use of VAAI), and I still got the warnings in the ESXi log even when utilisation on the volume was under 20%!

Yeah, maybe I just need to live with the warning messages and ignore them. I just find it odd that there's no way to configure this.

Regarding the threshold settings you can set per extent, how do these work? Do they just give you a warning in the FreeNAS web GUI when the threshold is passed?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Regarding the threshold settings you can set per extent, how do these work? Do they just give you a warning in the FreeNAS web GUI when the threshold is passed?
I'm new to ESXi and have only used iSCSI once, so I can't really help in this respect.

I already tried creating a new iSCSI share on a large hard drive. I used a single 4TB drive and created a ZFS pool on it. When I created the zvol for iSCSI, I set its size to 4TB as well, although I have only tested with the sparse setting (as I want to make use of VAAI), and I still got the warnings in the ESXi log even when utilisation on the volume was under 20%!
Maybe there is something wrong in your ESXi setup.

In your ESXi VM, the virtual hard drive you have set up is thin-provisioned, based on the error message. What happens if you change it to thick provisioning?
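
If you want to try that without recreating the disk, I believe a thin VMDK can be inflated in place from the ESXi shell with vmkfstools, something like this (the path is just an example):

Code:
# inflate a thin-provisioned disk to eagerzeroedthick in place
vmkfstools --inflatedisk /vmfs/volumes/datastore1/myvm/myvm.vmdk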
 