Pool out of space, volume taking too much space

Status
Not open for further replies.

ftipn

Cadet
Joined
Jul 7, 2013
Messages
9
Hello,

Summary
I try to create a new volume, smaller than the "free" space shown by zpool list, but I get a "pool out of space" error.
What is happening?

Explanation
A few weeks ago, I set up a FreeNAS server to store backups.
The server has a single zpool, which has a single raidz2 vdev of 12 3TB disks, so I have about 30TB usable.

Code:
# zpool status -v
  pool: tank1
 state: ONLINE
  scan: scrub in progress since Sun Jul  7 13:12:09 2013
        1.27T scanned out of 19.2T at 212M/s, 24h32m to go
        0 repaired, 6.61% done
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        tank1                                          ONLINE      0    0    0
          raidz2-0                                      ONLINE      0    0    0
            gptid/288e4cef-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/28f02ee5-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2950aaa5-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/29b63d91-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2a18235c-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2a7b7f98-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2ade6ea4-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2b41c4dd-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2ba51ce9-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2c05c849-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2c683e36-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
            gptid/2cca9251-c94e-11e2-a3c8-002590c06ac8  ONLINE      0    0    0
 
errors: No known data errors


I created a few volumes and deleted some. Right now, I have the following volumes and datasets configured:
- one dataset with a quota of 10G
- three volumes of 5 terabytes each
Volumes are exported with iSCSI and formatted with NTFS.
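
For reference, rough CLI equivalents of this configuration would be something like the following (assuming the 10G quota is the one on tank1/ftp, as the zfs list output below suggests):
Code:
# zfs create -o quota=10G tank1/ftp
# zfs create -V 5T tank1/veeambackups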

Today I want to create another volume, but both the GUI and the command line say the pool is "out of space".

zpool list says otherwise:
Code:
# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank1  32.5T  19.2T  13.3T    58%  1.00x  ONLINE  /mnt


But one of my volumes, the most heavily used one, where most backups are written, shows strange used space. It's tank1/veeambackups below:
Code:
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
tank1                 21.9T  2.45T   356K  /mnt/tank1
tank1/be-backups      5.16T  6.13T  1.47T  -
tank1/ftp             6.81G  3.19G  6.81G  /mnt/tank1/ftp
tank1/veeam-archives  5.16T  6.09T  1.51T  -
tank1/veeambackups    11.6T  2.45T  11.6T  -


I don't have any snapshot at all:
Code:
# zfs list -t snapshot
no datasets available


Volume sizes are 5T:
Code:
# zfs get volsize
NAME                  PROPERTY  VALUE    SOURCE
tank1                 volsize   -        -
tank1/be-backups      volsize   5T       local
tank1/ftp             volsize   -        -
tank1/veeam-archives  volsize   5T       local
tank1/veeambackups    volsize   5T       local


I have another 2-year-old server running OpenIndiana and serving the same purpose, though much smaller (8 1TB drives), and it doesn't show this behaviour.

What is happening? Is this normal? How can I reclaim that space?

Thanks for reading.
F.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Can you start off with the normal stuff: FreeNAS version, how exactly you are "trying to create a new volume, smaller than the free space shown by zpool list" and getting the "pool out of space" error, etc.
 

ftipn

Cadet
Joined
Jul 7, 2013
Messages
9
Yes, sorry, I forgot the usual stuff.

FreeNAS version is FreeNAS-8.3.1-RELEASE-p2-x64 (r12686+b770da6_dirty).
FreeNAS is installed on a USB flash drive.
Processor: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
Memory: 131028MB

I try to create a new volume with the GUI: Storage > Active Volumes > the "Create ZFS Volume" button on the first line (/mnt/tank1).

I can create a volume of any size up to 2425g.
I cannot create a volume of any size above 2430g; it fails with a red error message: "cannot create 'tank1/foobar': out of space".
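
From the shell the equivalent attempt fails the same way; a minimal reproduction:
Code:
# zfs create -V 2430G tank1/foobar
cannot create 'tank1/foobar': out of space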

The command zpool list shows 13.3T as free on the pool.

The command zfs list and the Active Volume page show 2.45T and 2.4TiB respectively as available on /mnt/tank1.

Thank you for your answer.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
so... you cannot create a volume larger than the amount of free space you have ... and you think that's somehow wrong?
 

ftipn

Cadet
Joined
Jul 7, 2013
Messages
9
so... you cannot create a volume larger than the amount of free space you have ... and you think that's somehow wrong?
Please do re-read what I wrote. In case it's too long, here's a summary:
- the pool is 30 terabytes
- there are 3 volumes (zvols), each of 5 terabytes; by my calculations, used space is 3*5 terabytes = 15 terabytes
- as a consequence, free space should be 30-15 terabytes = 15 terabytes, and "zpool list" seems to agree with me
- I cannot create a volume larger than 2.4 terabytes

So yes, I think there is something wrong here. Don't you?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Please do re-read what I wrote.

No, you re-read what *I* wrote.

You cannot create a zvol larger than the amount of free space you have.

Code:
# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
tank1                21.9T  2.45T  356K  /mnt/tank1

Look. You have 2.45T of free space. Golly gosh, you cannot use more than that. !!shock!! It is no more complicated than that; that is why you cannot perform the requested operation.

And really it isn't a good idea to fill a ZFS system past 80%, probably less if you're using iSCSI. You're at 58% which is good, but if you actually fill those zvols, you'll be in a world of pain.

Now you want to know where it has all gone. That can be a bit more difficult to figure out, especially with zvols.

But first let's get some basics down. You have 12 3TB drives in RAIDZ2. You'd think that'd give you something like 36TB of space, but it doesn't: a "3TB" drive is only ~2.7TiB in the binary units ZFS reports. Discounting two drives for RAIDZ2, that's 2.7TiB * 10 = ~27TiB.
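
If you want to double-check that arithmetic, here's a quick sketch with bc (decimal "marketing" bytes versus binary tebibytes):
Code:
# echo 'scale=2; 3 * 10^12 / 2^40' | bc
2.72
# echo 'scale=2; (12 - 2) * 2.72' | bc
27.20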

So you have 21.9 + 2.45 -> ~24.4TB of space accounted for, and that leaves about 2.5TB of space wandering around that I don't see an obvious accounting for.

But I think what's probably getting you is some complications involving ZFS block sizes and allocation policy. If you do a "zfs get logicalused tank1/veeambackups" and a "zfs get used tank1/veeambackups", the second is several times the size of the first, isn't it. And you weren't expecting that... is that what you're talking about?
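
That is, something like the following (logicalused only exists on ZFS versions that have the property):
Code:
# zfs get logicalused tank1/veeambackups
# zfs get used tank1/veeambackups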
 

ftipn

Cadet
Joined
Jul 7, 2013
Messages
9
Thanks for confirming what I wrote in the first post. And really, the pool isn't filled past 80%, not even 60%.

YES, YES! I wasn't expecting a zvol of fixed size to take more than that space, and certainly not more than twice that space! Even with COW it does not make any sense.
If I had snapshots or clones, OK, but I don't have any and never did.

I have other zfs systems and none of them exhibit this behavior.

Property logicalused does not exist. Here are all the properties of the zvol causing problems:
Code:
# zfs get all tank1/veeambackups
NAME                PROPERTY              VALUE                  SOURCE
tank1/veeambackups  type                  volume                -
tank1/veeambackups  creation              Thu May 30 22:53 2013  -
tank1/veeambackups  used                  11.6T                  -                   <<<<<<<
tank1/veeambackups  available            2.45T                  -
tank1/veeambackups  referenced            11.6T                  -
tank1/veeambackups  compressratio        1.00x                  -
tank1/veeambackups  reservation          none                  default
tank1/veeambackups  volsize              5T                    local                   <<<<<<<
tank1/veeambackups  volblocksize          8K                    -
tank1/veeambackups  checksum              on                    default
tank1/veeambackups  compression          lzjb                  inherited from tank1
tank1/veeambackups  readonly              off                    default
tank1/veeambackups  copies                1                      default
tank1/veeambackups  refreservation        5.16T                  local
tank1/veeambackups  primarycache          all                    default
tank1/veeambackups  secondarycache        all                    default
tank1/veeambackups  usedbysnapshots      0                      -
tank1/veeambackups  usedbydataset        11.6T                  -
tank1/veeambackups  usedbychildren        0                      -
tank1/veeambackups  usedbyrefreservation  0                      -
tank1/veeambackups  logbias              latency                default
tank1/veeambackups  dedup                off                    inherited from tank1
tank1/veeambackups  mlslabel                                    -
tank1/veeambackups  sync                  standard              default
tank1/veeambackups  refcompressratio      1.00x                  -
tank1/veeambackups  written              11.6T                  -


Other zvols don't have this behavior, but I've written much less data to those, and I created them from the CLI to increase volblocksize to 64K.
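
For reference, that kind of CLI creation looks something like this (the volume name here is just an example):
Code:
# zfs create -V 5T -o volblocksize=64K tank1/examplevol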
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And compression appears to be on. That muddies up the waters too.
 

ftipn

Cadet
Joined
Jul 7, 2013
Messages
9
Indeed, compression is set to lzjb at the pool level and inherited by the volume.
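
For the record, that can be verified with:
Code:
# zfs get compression tank1 tank1/veeambackups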
 

ftipn

Cadet
Joined
Jul 7, 2013
Messages
9
I did some simple tests ...

- create a volume of 5 gigabytes with default settings, write several gigabytes to it with dd, and see the volume reach the same used/volsize ratio as my original volume (2.32)
- create a volume of 2 gigabytes with default settings, write with dd, and see exactly the same thing (used/volsize ratio = 2.32)
- create a volume of 5 gigabytes with volblocksize=16K, write with dd, and see nothing really wrong

Disabling compression has no effect.
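
For reference, compression on a test volume is toggled with the usual command, e.g.:
Code:
# zfs set compression=off tank1/testblock8k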

Is this the expected behavior?

Test with default settings (volblocksize=8K):
Code:
# zfs create -V 5G tank1/testblock8k
# zfs get all tank1/testblock8k
NAME              PROPERTY              VALUE                  SOURCE
tank1/testblock8k  type                  volume                -
tank1/testblock8k  creation              Wed Jul 10 10:27 2013  -
tank1/testblock8k  used                  5.16G                  -
tank1/testblock8k  available            7.60T                  -
tank1/testblock8k  referenced            165K                  -
tank1/testblock8k  compressratio        1.00x                  -
tank1/testblock8k  reservation          none                  default
tank1/testblock8k  volsize              5G                    local
tank1/testblock8k  volblocksize          8K                    -
tank1/testblock8k  checksum              on                    default
tank1/testblock8k  compression          lzjb                  inherited from tank1
tank1/testblock8k  readonly              off                    default
tank1/testblock8k  copies                1                      default
tank1/testblock8k  refreservation        5.16G                  local
tank1/testblock8k  primarycache          all                    default
tank1/testblock8k  secondarycache        all                    default
tank1/testblock8k  usedbysnapshots      0                      -
tank1/testblock8k  usedbydataset        165K                  -
tank1/testblock8k  usedbychildren        0                      -
tank1/testblock8k  usedbyrefreservation  5.16G                  -
tank1/testblock8k  logbias              latency                default
tank1/testblock8k  dedup                off                    inherited from tank1
tank1/testblock8k  mlslabel                                    -
tank1/testblock8k  sync                  standard              default
tank1/testblock8k  refcompressratio      1.00x                  -
tank1/testblock8k  written              165K                  -
 
# dd if=/dev/urandom bs=8k count=1024000 of=/dev/zvol/tank1/testblock8k
dd: /dev/zvol/tank1/testblock8k: end of device
655361+0 records in
655360+0 records out
5368709120 bytes transferred in 127.191251 secs (42209736 bytes/sec)
 
# zfs get all tank1/testblock8k
NAME              PROPERTY              VALUE                  SOURCE
tank1/testblock8k  type                  volume                -
tank1/testblock8k  creation              Wed Jul 10 10:27 2013  -
tank1/testblock8k  used                  11.6G                  -                          <<<<<<<<<<<<<<<<<<<
tank1/testblock8k  available            7.59T                  -
tank1/testblock8k  referenced            11.6G                  -
tank1/testblock8k  compressratio        1.00x                  -
tank1/testblock8k  reservation          none                  default
tank1/testblock8k  volsize              5G                    local                          <<<<<<<<<<<<<<<<<<<
tank1/testblock8k  volblocksize          8K                    -
tank1/testblock8k  checksum              on                    default
tank1/testblock8k  compression          lzjb                  inherited from tank1
tank1/testblock8k  readonly              off                    default
tank1/testblock8k  copies                1                      default
tank1/testblock8k  refreservation        5.16G                  local
tank1/testblock8k  primarycache          all                    default
tank1/testblock8k  secondarycache        all                    default
tank1/testblock8k  usedbysnapshots      0                      -
tank1/testblock8k  usedbydataset        11.6G                  -
tank1/testblock8k  usedbychildren        0                      -
tank1/testblock8k  usedbyrefreservation  0                      -
tank1/testblock8k  logbias              latency                default
tank1/testblock8k  dedup                off                    inherited from tank1
tank1/testblock8k  mlslabel                                    -
tank1/testblock8k  sync                  standard              default
tank1/testblock8k  refcompressratio      1.00x                  -
tank1/testblock8k  written              11.6G                  -


Test with a 2-gigabyte volume (volblocksize=8K):
Code:
# zfs get all tank1/testblock8k
NAME              PROPERTY              VALUE                  SOURCE
tank1/testblock8k  type                  volume                -
tank1/testblock8k  creation              Wed Jul 10 11:18 2013  -
tank1/testblock8k  used                  4.64G                  -                          <<<<<<<<<<<<<<<<<<<
tank1/testblock8k  available            7.60T                  -
tank1/testblock8k  referenced            4.64G                  -
tank1/testblock8k  compressratio        1.00x                  -
tank1/testblock8k  reservation          none                  default
tank1/testblock8k  volsize              2G                    local                          <<<<<<<<<<<<<<<<<<<
tank1/testblock8k  volblocksize          8K                    -
tank1/testblock8k  checksum              on                    default
tank1/testblock8k  compression          lzjb                  inherited from tank1
tank1/testblock8k  readonly              off                    default
tank1/testblock8k  copies                1                      default
tank1/testblock8k  refreservation        2.06G                  local
tank1/testblock8k  primarycache          all                    default
tank1/testblock8k  secondarycache        all                    default
tank1/testblock8k  usedbysnapshots      0                      -
tank1/testblock8k  usedbydataset        4.64G                  -
tank1/testblock8k  usedbychildren        0                      -
tank1/testblock8k  usedbyrefreservation  0                      -
tank1/testblock8k  logbias              latency                default
tank1/testblock8k  dedup                off                    inherited from tank1
tank1/testblock8k  mlslabel                                    -
tank1/testblock8k  sync                  standard              default
tank1/testblock8k  refcompressratio      1.00x                  -
tank1/testblock8k  written              4.64G                  -


Test with volblocksize=16K:
Code:
# zfs create -V 5G -o volblocksize=16K tank1/testblock16k
# zfs get all tank1/testblock16k
NAME                PROPERTY              VALUE                  SOURCE
tank1/testblock16k  type                  volume                -
tank1/testblock16k  creation              Wed Jul 10 10:37 2013  -
tank1/testblock16k  used                  5.16G                  -
tank1/testblock16k  available            7.60T                  -
tank1/testblock16k  referenced            165K                  -
tank1/testblock16k  compressratio        1.00x                  -
tank1/testblock16k  reservation          none                  default
tank1/testblock16k  volsize              5G                    local
tank1/testblock16k  volblocksize          16K                    -
tank1/testblock16k  checksum              on                    default
tank1/testblock16k  compression          lzjb                  inherited from tank1
tank1/testblock16k  readonly              off                    default
tank1/testblock16k  copies                1                      default
tank1/testblock16k  refreservation        5.16G                  local
tank1/testblock16k  primarycache          all                    default
tank1/testblock16k  secondarycache        all                    default
tank1/testblock16k  usedbysnapshots      0                      -
tank1/testblock16k  usedbydataset        165K                  -
tank1/testblock16k  usedbychildren        0                      -
tank1/testblock16k  usedbyrefreservation  5.16G                  -
tank1/testblock16k  logbias              latency                default
tank1/testblock16k  dedup                off                    inherited from tank1
tank1/testblock16k  mlslabel                                    -
tank1/testblock16k  sync                  standard              default
tank1/testblock16k  refcompressratio      1.00x                  -
tank1/testblock16k  written              165K                  -
 
# dd if=/dev/urandom bs=8k count=1024000 of=/dev/zvol/tank1/testblock16k
dd: /dev/zvol/tank1/testblock16k: end of device
655361+0 records in
655360+0 records out
5368709120 bytes transferred in 130.353046 secs (41185912 bytes/sec)
 
# zfs get all tank1/testblock16k
NAME                PROPERTY              VALUE                  SOURCE
tank1/testblock16k  type                  volume                -
tank1/testblock16k  creation              Wed Jul 10 10:37 2013  -
tank1/testblock16k  used                  5.80G                  -                          <<<<<<<<<<<<<<<<<<<
tank1/testblock16k  available            7.60T                  -
tank1/testblock16k  referenced            5.80G                  -
tank1/testblock16k  compressratio        1.00x                  -
tank1/testblock16k  reservation          none                  default
tank1/testblock16k  volsize              5G                    local                          <<<<<<<<<<<<<<<<<<<
tank1/testblock16k  volblocksize          16K                    -
tank1/testblock16k  checksum              on                    default
tank1/testblock16k  compression          lzjb                  inherited from tank1
tank1/testblock16k  readonly              off                    default
tank1/testblock16k  copies                1                      default
tank1/testblock16k  refreservation        5.16G                  local
tank1/testblock16k  primarycache          all                    default
tank1/testblock16k  secondarycache        all                    default
tank1/testblock16k  usedbysnapshots      0                      -
tank1/testblock16k  usedbydataset        5.80G                  -
tank1/testblock16k  usedbychildren        0                      -
tank1/testblock16k  usedbyrefreservation  0                      -
tank1/testblock16k  logbias              latency                default
tank1/testblock16k  dedup                off                    inherited from tank1
tank1/testblock16k  mlslabel                                    -
tank1/testblock16k  sync                  standard              default
tank1/testblock16k  refcompressratio      1.00x                  -
tank1/testblock16k  written              5.80G                  -
 

ftipn

Cadet
Joined
Jul 7, 2013
Messages
9
Same test with OI151a7, and it works as expected: a zvol of 1 gigabyte takes 1.03 gigabytes (volblocksize=8K), and that does not change as you write to it.
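
For reference, a sketch of that check on the OpenIndiana side (pool and volume names are examples; illumos exposes zvols under /dev/zvol/rdsk):
Code:
# zfs create -V 1G rpool/ztest
# dd if=/dev/urandom of=/dev/zvol/rdsk/rpool/ztest bs=8k
# zfs get used,volsize rpool/ztest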
 