ZFS zvol taking more space than its max size?


viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Hello guys... I have a zvol in one of my ZFS pools that serves my XenServer virtual machines; I'm using iSCSI for this.

The question is: why does it take up 8.41T when I only reserved 5.0T for the zvol? What am I missing?

Code:
zfs get volsize storagepool1/lvm1
NAME               PROPERTY  VALUE  SOURCE
storagepool1/lvm1  volsize   5T     local

storage# zfs list storagepool1/lvm1
NAME               USED   AVAIL  REFER  MOUNTPOINT
storagepool1/lvm1  8.41T  6.86T  8.41T  -

Thanks in advance,
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
The question is: why does it take up 8.41T when I only reserved 5.0T for the zvol? What am I missing?
Two questions:
  1. How did you "reserve" the space? Did you set reservation, refreservation, quota or refquota? Or some combination?
  2. Do you use snapshots?
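In case it helps, here is one way to check both at once, using the zvol name from your first post:

Code:
# Shows what is actually reserving/charging space on the zvol, and whether any snapshots exist
zfs get reservation,refreservation,usedbysnapshots,usedbydataset,usedbyrefreservation storagepool1/lvm1
zfs list -t snapshot -r storagepool1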
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
I think I've misused the term "reserve". I just created a zvol with 5.0TB, and I don't use the snapshot feature.

Thanks in advance,
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm not too sure exactly what is going on in your precise situation, but my guess is that your zvol block size has some bearing on it. Check out https://bugs.freenas.org/issues/2383 .

As you can see there, it's possible for 100GB of test data with a 512-byte block size to take up 2.31TB (yes, TB) of disk space. It has to do with how ZFS lays its blocks out on disk, and from what I've read about ZFS' on-disk structure it isn't a bug.
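Just to give a rough feel for where the overhead comes from -- the numbers below assume 4K-sector disks (ashift=12) and a RAID-Z2 vdev, which is an assumption on my part, not something I know about your pool:

Code:
# Rough arithmetic only -- assumes ashift=12 (4K sectors) and RAID-Z2; treat it as an illustration
#
#   one 8K zvol block   = 2 data sectors
#   RAID-Z2 parity      = 2 parity sectors
#   subtotal            = 4 sectors, padded up to a multiple of (parity + 1) = 3
#                       = 6 sectors
#
#   6 sectors x 4K = 24K allocated on disk for every 8K written,
#   i.e. roughly 3x the logical size instead of the nominal parity overhead alone.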
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Hmm... this sucks. Is there a way to reclaim this space? Because if that's the case, it would be better to have a simple RAID10-like layout (several mirror vdevs in one zpool) with ZFS instead of RAID-Z2.
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
I'm not too sure exactly what is going on in your precise situation, but my guess is that your zvol block size has some bearing on it. Check out https://bugs.freenas.org/issues/2383 .

As you can see there, it's possible for 100GB of test data with a 512-byte block size to take up 2.31TB (yes, TB) of disk space. It has to do with how ZFS lays its blocks out on disk, and from what I've read about ZFS' on-disk structure it isn't a bug.


Cyberjock, I can confirm, it's exactly the same situation...

Any idea or suggestion on how to fix this? We have 12TB free on the zpool, so moving 3TB of data is easy.

Should we abandon zvols where possible?
Recreate the zvol with another block size?

We have 10x 3TB Seagate SATA disks in RAID-Z2, and the zvol is exported over iSCSI and used as an LVM storage repository by XenServer.
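One thing I can already see in the data below (PS): usedbysnapshots is 0 and compression is off, so snapshots aren't eating the space; the overhead must be on the zvol blocks themselves. A rough ratio, just as a sanity check:

Code:
# Back-of-the-envelope from the zfs get output in the PS below (not a precise measurement):
#   used / volsize = 8.42T / 5T ~= 1.68x
# and since the volume isn't necessarily 100% written yet, the real per-block
# overhead must be at least that much.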

Thanks in advance,

PS: Some data...
Code:
storagepool1/lvm1  type                  volume                 -
storagepool1/lvm1  creation              Fri Aug 30 20:57 2013  -
storagepool1/lvm1  used                  8.42T                  -
storagepool1/lvm1  available             12.0T                  -
storagepool1/lvm1  referenced            8.42T                  -
storagepool1/lvm1  compressratio         1.00x                  -
storagepool1/lvm1  reservation           none                   default
storagepool1/lvm1  volsize               5T                     local
storagepool1/lvm1  volblocksize          8K                     -
storagepool1/lvm1  checksum              on                     default
storagepool1/lvm1  compression           off                    default
storagepool1/lvm1  readonly              off                    default
storagepool1/lvm1  copies                1                      default
storagepool1/lvm1  refreservation        5.16T                  local
storagepool1/lvm1  primarycache          all                    default
storagepool1/lvm1  secondarycache        all                    default
storagepool1/lvm1  usedbysnapshots       0                      -
storagepool1/lvm1  usedbydataset         8.42T                  -
storagepool1/lvm1  usedbychildren        0                      -
storagepool1/lvm1  usedbyrefreservation  0                      -
storagepool1/lvm1  logbias               latency                default
storagepool1/lvm1  dedup                 off                    inherited from storagepool1
storagepool1/lvm1  mlslabel                                     -
storagepool1/lvm1  sync                  always                 inherited from storagepool1
storagepool1/lvm1  refcompressratio      1.00x                  -
storagepool1/lvm1  written               8.42T                  -
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Cyberjock, I can confirm, it's exactly the same situation...

Any idea or suggestion on how to fix this? We have 12TB free on the zpool, so moving 3TB of data is easy.

Should we abandon zvols where possible?
Recreate the zvol with another block size?
Actually, nobody can give you that advice. What you should do is read up on all this stuff and figure out what's best for your situation. There are tradeoffs with block sizes in performance (both I/Os and throughput, and they don't both go up at the same time) versus total disk used.
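If you do go the recreate route, keep in mind that volblocksize can only be set when the zvol is created, so "changing" it really means building a new zvol and migrating the data. A rough sketch (storagepool1/lvm2 and the 64K value are just placeholders, not a recommendation):

Code:
# Create a new 5T zvol with a larger block size (64K is only an example value)
zfs create -o volblocksize=64K -V 5T storagepool1/lvm2
# Export it over iSCSI, move the VMs/LVM data across from the XenServer side,
# then destroy the old zvol once everything checks out:
# zfs destroy storagepool1/lvm1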
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Actually, nobody can give you that advice. What you should do is read up on all this stuff and figure out what's best for your situation. There are tradeoffs with block sizes in performance (both I/Os and throughput, and they don't both go up at the same time) versus total disk used.


I'm just throwing options in the thread... Since you're much more experienced than me, your opinion counts a lot and I was expecting to see your ideas.

From what I've read, increasing the block size would waste more space, but that's OK, since the zvols are only hosting LVM-over-iSCSI. From the I/O perspective I was unable to find anything.
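One thing I might try before committing to anything is a small throwaway zvol with a candidate block size, to see how much the pool actually charges for a known amount of data (storagepool1/blocktest is just a made-up name):

Code:
# Create a small test zvol with a candidate block size
zfs create -o volblocksize=64K -V 20G storagepool1/blocktest
# Fill 10G of it with incompressible data via the zvol device node
dd if=/dev/urandom of=/dev/zvol/storagepool1/blocktest bs=1M count=10240
# Compare how much space the pool charges vs. what was written
zfs get used,referenced,volsize storagepool1/blocktest
# Clean up
zfs destroy storagepool1/blocktest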

Thanks in advance,
 