Cannot create snapshot: out of space

I'm confused.

I have a pool of raptor drives called 'faststore'.

Faststore appears to have ~300 GB available, but autosnap (and the command line) says I'm out of space.

Log entry:

Aug 25 20:02:05 nas2 autosnap.py: [tools.autosnap:240] Failed to create snapshot 'faststore@auto-20130825.2002-2d': cannot create snapshot 'faststore/<redacted>exch1-mailstore@auto-20130825.2002-2d': out of space no snapshots were created
Here's what my pool looks like:
Code:
[root@nas2] ~# zfs list -t all -ro name,space,compressratio,quota,refquota,reservation,refreservation faststore
NAME                          NAME                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  RATIO  QUOTA  REFQUOTA  RESERV  REFRESERV
faststore                     faststore                      295G   800G         0     31K              0       800G  1.13x   none      none    none       none
faststore/<redacted>exch1-mailstore  faststore/<redacted>exch1-mailstore   432G   774G         0    636G           137G          0  1.13x      -         -    none       774G
faststore/<redacted>exch1-tlog       faststore/<redacted>exch1-tlog        316G  25.8G         0   5.26G          20.5G          0  1.06x      -         -    none      25.8G
[root@nas2] ~# 

Code:
[root@nas2] ~# zfs snapshot -r faststore@test
cannot create snapshot 'faststore/<redacted>exch1-mailstore@test': out of space
no snapshots were created
[root@nas2]

FreeNAS-9.1.0-RELEASE-x64 (dff7d13)
I've got to be missing something. Can someone shed some light on this?
And just so I don't have to explain my apparent lack of Exchange-fu--I'm in the middle of shuffling data. The transaction log will be elsewhere soon... ;)
Thanks,
-A
 

Kostya Berger

The same here: 300G of free space in the pool, two datasets of 300G each, and "out of space" when creating a snapshot.

EDIT: I only tried to create a recursive snapshot of one of these.
 

Kostya Berger

As for me, I'm using the 8.3.2 STABLE release and having the same problem as the OP. So, has nobody got any ideas, or even heard of anything like this?

Let's at least try to decide this much: is it a bug in FreeNAS or in ZFS (less likely), or a failure to take into account some important point about how ZFS pools/datasets are created?
OK, here's what Oracle docs say about snapshots:
Snapshots use no separate backing store. Snapshots consume disk space directly from the same storage pool as the file system or volume from which they were created.

...When a snapshot is created, its disk space is initially shared between the snapshot and the file system, and possibly with previous snapshots.
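Applying that to the OP's listing gives a plausible reading (a sketch with his numbers; I may be off on the exact accounting): the failing dataset carries a refreservation, and ZFS will only create a snapshot if it can still guarantee the full reservation afterwards. Since the snapshot pins the currently referenced blocks, roughly another REFER worth of pool space has to be free:
Code:
# faststore/<redacted>exch1-mailstore, numbers from the OP's zfs list:
#   referenced (USEDDS):   636G   <- blocks a snapshot would pin
#   refreservation:        774G   <- must stay fully backed afterwards
#   pool AVAIL:            295G
# needed ~= referenced = 636G  >  295G available  ->  "out of space"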
Could it be, then, that something went wrong when the original dataset was created?
To that I can answer that I created my datasets using the FreeNAS GUI, not the command line. Another point is that the dataset I'm trying to snapshot is shared over CIFS. Could THAT be the problem?

Anyway, here's what I found in FreeNAS docs:
the periodic snapshot task should be created and at least one snapshot should exist before creating the CIFS share. If you created the CIFS share first, restart the CIFS service in Services → Control Services.
I'll try this and see if it helps when I get to my server.
 

tmueko

Same problem here on a fresh 9.1.1 install with a newly created zpool. The server was running fine on 8.x with 512-byte blocks on the zpool (without forcing 4K blocks). I think I was hit by the 4K bug!?

Sep 17 08:36:02 tillmann autosnap.py: [tools.autosnap:57] Popen()ing: /sbin/zfs snapshot -r daten@auto-20130917.0836-4w
Sep 17 08:36:02 tillmann autosnap.py: [tools.autosnap:247] Failed to create snapshot 'daten@auto-20130917.0836-4w': cannot create snapshot 'daten/iscsivol02@auto-20130917.0836-4w': out of space no snapshots were created
Sep 17 08:37:01 tillmann autosnap.py: [tools.autosnap:57] Popen()ing: /sbin/zfs snapshot -r daten@auto-20130917.0837-4w
Sep 17 08:37:01 tillmann autosnap.py: [tools.autosnap:247] Failed to create snapshot 'daten@auto-20130917.0837-4w': cannot create snapshot 'daten/iscsivol02@auto-20130917.0837-4w': out of space no snapshots were created
Code:
[root@tillmann] ~# zfs list -t all -o space
NAME              AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
daten              504G  2.06T         0    230K              0      2.06T
daten/iscsivol01  1.22T  1.03T         0    315G           741G          0
daten/iscsivol02  1.00T  1.03T         0    534G           523G          0
[root@tillmann] ~# zpool list -v
NAME                                            SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
daten                                          3.25T  1.04T  2.21T  31%  1.00x  ONLINE  /mnt
  raidz1                                       3.25T  1.04T  2.21T    -
    gptid/b684c577-1eb4-11e3-9a9c-003048f17736     -      -      -    -
    gptid/b6b1d9e3-1eb4-11e3-9a9c-003048f17736     -      -      -    -
    gptid/b6dfe65e-1eb4-11e3-9a9c-003048f17736     -      -      -    -
    gptid/b71110e5-1eb4-11e3-9a9c-003048f17736     -      -      -    -
    gptid/b74140dd-1eb4-11e3-9a9c-003048f17736     -      -      -    -
    gptid/b772478d-1eb4-11e3-9a9c-003048f17736     -      -      -    -
cache                                              -      -      -    -      -       -
  gptid/b7d6b785-1eb4-11e3-9a9c-003048f17736    112G   112G     8M    -
[root@tillmann] ~# zpool status -v
  pool: daten
 state: ONLINE
  scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	daten                                           ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/b684c577-1eb4-11e3-9a9c-003048f17736  ONLINE       0     0     0
	    gptid/b6b1d9e3-1eb4-11e3-9a9c-003048f17736  ONLINE       0     0     0
	    gptid/b6dfe65e-1eb4-11e3-9a9c-003048f17736  ONLINE       0     0     0
	    gptid/b71110e5-1eb4-11e3-9a9c-003048f17736  ONLINE       0     0     0
	    gptid/b74140dd-1eb4-11e3-9a9c-003048f17736  ONLINE       0     0     0
	    gptid/b772478d-1eb4-11e3-9a9c-003048f17736  ONLINE       0     0     0
	cache
	  gptid/b7d6b785-1eb4-11e3-9a9c-003048f17736    ONLINE       0     0     0
	spares
	  gptid/b7a76ed0-1eb4-11e3-9a9c-003048f17736    AVAIL

errors: No known data errors
[root@tillmann] ~# zdb
daten:
    version: 5000
    name: 'daten'
    state: 0
    txg: 158
    pool_guid: 8567122808326556791
    hostid: 3958502897
    hostname: ''
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 8567122808326556791
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 4069578498843901188
            nparity: 1
            metaslab_array: 36
            metaslab_shift: 35
            ashift: 12
            asize: 3581909925888
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 10973222563753398512
                path: '/dev/gptid/b684c577-1eb4-11e3-9a9c-003048f17736'
                phys_path: '/dev/gptid/b684c577-1eb4-11e3-9a9c-003048f17736'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18006804593293635281
                path: '/dev/gptid/b6b1d9e3-1eb4-11e3-9a9c-003048f17736'
                phys_path: '/dev/gptid/b6b1d9e3-1eb4-11e3-9a9c-003048f17736'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 5088992460657020801
                path: '/dev/gptid/b6dfe65e-1eb4-11e3-9a9c-003048f17736'
                phys_path: '/dev/gptid/b6dfe65e-1eb4-11e3-9a9c-003048f17736'
                whole_disk: 1
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 11262657449395742850
                path: '/dev/gptid/b71110e5-1eb4-11e3-9a9c-003048f17736'
                phys_path: '/dev/gptid/b71110e5-1eb4-11e3-9a9c-003048f17736'
                whole_disk: 1
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 10791002681225119520
                path: '/dev/gptid/b74140dd-1eb4-11e3-9a9c-003048f17736'
                phys_path: '/dev/gptid/b74140dd-1eb4-11e3-9a9c-003048f17736'
                whole_disk: 1
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 2817577784752652424
                path: '/dev/gptid/b772478d-1eb4-11e3-9a9c-003048f17736'
                phys_path: '/dev/gptid/b772478d-1eb4-11e3-9a9c-003048f17736'
                whole_disk: 1
                create_txg: 4
    features_for_read:

[root@tillmann] ~# zfs list -o name,compression,compressratio,volblocksize
NAME              COMPRESS  RATIO  VOLBLOCK
daten                  lz4  1.35x         -
daten/iscsivol01       lz4  1.36x       32K
daten/iscsivol02       lz4  1.34x       32K
We will try recreating the pool using 512-byte alignment...
 

tmueko

Code:
[root@tillmann] ~# zfs snapshot daten/iscsivol01@test
[root@tillmann] ~# zfs snapshot daten/iscsivol02@test
cannot create snapshot 'daten/iscsivol02@test': out of space
[root@tillmann] ~# zfs list -o space -t all
NAME                    AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
daten                    187G  2.37T         0    230K              0      2.37T
daten/iscsivol01        1.21T  1.34T     2.50M    318G          1.03T          0
daten/iscsivol01@test       -  2.50M         -       -              -          -
daten/iscsivol02         709G  1.03T         0    534G           523G          0
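If snapshot space is charged against the refreservation (my working assumption), the numbers above line up: the successful iscsivol01@test pushed that volume's USEDREFRESERV from 741G to 1.03T, roughly the 318G it references, and the pool's AVAIL dropped by the same amount, from 504G to 187G. A snapshot of iscsivol02 would need about its 534G of referenced data on top of that, but only 187G is left:
Code:
# why iscsivol02@test fails after iscsivol01@test succeeded:
#   pool AVAIL after first snapshot:   187G
#   iscsivol02 referenced (USEDDS):    534G  <- extra guarantee a snapshot needs
#   534G > 187G  ->  "out of space"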
Code:
[root@tillmann] ~# zfs get all
NAME                   PROPERTY              VALUE                  SOURCE
daten                  type                  filesystem             -
daten                  creation              Mon Sep 16 11:45 2013  -
daten                  used                  2.37T                  -
daten                  available             187G                   -
daten                  referenced            230K                   -
daten                  compressratio         1.36x                  -
daten                  mounted               yes                    -
daten                  quota                 none                   default
daten                  reservation           none                   default
daten                  recordsize            128K                   default
daten                  mountpoint            /mnt/daten             default
daten                  sharenfs              off                    default
daten                  checksum              on                     default
daten                  compression           lz4                    local
daten                  atime                 off                    local
daten                  devices               on                     default
daten                  exec                  on                     default
daten                  setuid                on                     default
daten                  readonly              off                    default
daten                  jailed                off                    default
daten                  snapdir               hidden                 default
daten                  aclmode               passthrough            local
daten                  aclinherit            passthrough            local
daten                  canmount              on                     default
daten                  xattr                 off                    temporary
daten                  copies                1                      default
daten                  version               5                      -
daten                  utf8only              off                    -
daten                  normalization         none                   -
daten                  casesensitivity       sensitive              -
daten                  vscan                 off                    default
daten                  nbmand                off                    default
daten                  sharesmb              off                    default
daten                  refquota              none                   local
daten                  refreservation        none                   local
daten                  primarycache          all                    default
daten                  secondarycache        all                    default
daten                  usedbysnapshots       0                      -
daten                  usedbydataset         230K                   -
daten                  usedbychildren        2.37T                  -
daten                  usedbyrefreservation  0                      -
daten                  logbias               latency                default
daten                  dedup                 off                    default
daten                  mlslabel                                     -
daten                  sync                  standard               default
daten                  refcompressratio      1.00x                  -
daten                  written               230K                   -
daten                  logicalused           1.05T                  -
daten                  logicalreferenced     15.5K                  -
daten/iscsivol01       type                  volume                 -
daten/iscsivol01       creation              Mon Sep 16 11:53 2013  -
daten/iscsivol01       used                  1.34T                  -
daten/iscsivol01       available             1.21T                  -
daten/iscsivol01       referenced            319G                   -
daten/iscsivol01       compressratio         1.38x                  -
daten/iscsivol01       reservation           none                   default
daten/iscsivol01       volsize               1T                     local
daten/iscsivol01       volblocksize          32K                    -
daten/iscsivol01       checksum              on                     default
daten/iscsivol01       compression           lz4                    inherited from daten
daten/iscsivol01       readonly              off                    default
daten/iscsivol01       copies                1                      default
daten/iscsivol01       refreservation        1.03T                  local
daten/iscsivol01       primarycache          all                    default
daten/iscsivol01       secondarycache        all                    default
daten/iscsivol01       usedbysnapshots       7.85M                  -
daten/iscsivol01       usedbydataset         319G                   -
daten/iscsivol01       usedbychildren        0                      -
daten/iscsivol01       usedbyrefreservation  1.03T                  -
daten/iscsivol01       logbias               latency                default
daten/iscsivol01       dedup                 off                    default
daten/iscsivol01       mlslabel                                     -
daten/iscsivol01       sync                  standard               default
daten/iscsivol01       refcompressratio      1.38x                  -
daten/iscsivol01       written               1.42G                  -
daten/iscsivol01       logicalused           406G                   -
daten/iscsivol01       logicalreferenced     406G                   -
daten/iscsivol01@test  type                  snapshot               -
daten/iscsivol01@test  creation              Tue Sep 17 9:34 2013   -
daten/iscsivol01@test  used                  7.85M                  -
daten/iscsivol01@test  referenced            318G                   -
daten/iscsivol01@test  compressratio         1.38x                  -
daten/iscsivol01@test  devices               on                     default
daten/iscsivol01@test  exec                  on                     default
daten/iscsivol01@test  setuid                on                     default
daten/iscsivol01@test  xattr                 on                     default
daten/iscsivol01@test  nbmand                off                    default
daten/iscsivol01@test  primarycache          all                    default
daten/iscsivol01@test  secondarycache        all                    default
daten/iscsivol01@test  defer_destroy         off                    -
daten/iscsivol01@test  userrefs              0                      -
daten/iscsivol01@test  mlslabel                                     -
daten/iscsivol01@test  refcompressratio      1.38x                  -
daten/iscsivol01@test  written               318G                   -
daten/iscsivol01@test  clones                                       -
daten/iscsivol01@test  logicalused           0                      -
daten/iscsivol01@test  logicalreferenced     405G                   -
daten/iscsivol02       type                  volume                 -
daten/iscsivol02       creation              Mon Sep 16 11:53 2013  -
daten/iscsivol02       used                  1.03T                  -
daten/iscsivol02       available             709G                   -
daten/iscsivol02       referenced            534G                   -
daten/iscsivol02       compressratio         1.34x                  -
daten/iscsivol02       reservation           none                   default
daten/iscsivol02       volsize               1T                     local
daten/iscsivol02       volblocksize          32K                    -
daten/iscsivol02       checksum              on                     default
daten/iscsivol02       compression           lz4                    inherited from daten
daten/iscsivol02       readonly              off                    default
daten/iscsivol02       copies                1                      default
daten/iscsivol02       refreservation        1.03T                  local
daten/iscsivol02       primarycache          all                    default
daten/iscsivol02       secondarycache        all                    default
daten/iscsivol02       usedbysnapshots       0                      -
daten/iscsivol02       usedbydataset         534G                   -
daten/iscsivol02       usedbychildren        0                      -
daten/iscsivol02       usedbyrefreservation  523G                   -
daten/iscsivol02       logbias               latency                default
daten/iscsivol02       dedup                 off                    default
daten/iscsivol02       mlslabel                                     -
daten/iscsivol02       sync                  standard               default
daten/iscsivol02       refcompressratio      1.34x                  -
daten/iscsivol02       written               534G                   -
daten/iscsivol02       logicalused           664G                   -
daten/iscsivol02       logicalreferenced     664G                   -
 

Kostya Berger

I'm running 8.3.2 and have this problem on a fresh WD 1Tb RED HDD mirror:
Code:
zfs list -o space -t all
NAME                   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zroot                   313G   600G         0    160K              0       600G
zroot/WORK-NEW          255G   300G         0   45.4G           255G          0
zroot/Work-Foto-Video   233G   300G         0   66.9G           233G          0
 

tmueko

It would be interesting to see what "zdb | grep ashift" gives you. Is it 12 or 9?
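For reference, here's what that check gives on my pool (matching the ashift: 12 in my zdb dump above; 9 would mean 512-byte sectors, 12 means 4K):
Code:
[root@tillmann] ~# zdb | grep ashift
            ashift: 12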
 

tmueko

OK, so that's (forced?) 4K alignment, like my setup. This "could" be the problem.
 

Kostya Berger

Well, I haven't dug into this yet... but mine are WD Red 1TB drives, and nowhere does it say they're "Advanced Format" drives...
Thankfully, I've got another backup drive in the system, and I was planning the backup copying anyway. Seems like it's time to start? I assume we can't get rid of that 4K alignment without destroying all the data??
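One way to check whether the drives themselves are Advanced Format is FreeBSD's diskinfo. A sketch (ada0 is a placeholder for the actual device; 512e Advanced Format drives typically report a 512-byte logical sectorsize with a 4096-byte stripesize):
Code:
diskinfo -v ada0 | egrep 'sectorsize|stripesize'
        512             # sectorsize
        4096            # stripesize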
 

SkyMonkey

Do you have quotas or reserved space set up on the dataset? I noticed that in certain configurations (I can't remember exactly which off the top of my head, but I suspect it's when you set quota and reserved space equal), snapshots fail with an out-of-space message.
 

tmueko

Do you have quotas or reserved space set up on the dataset? I noticed that in certain configurations (I can't remember exactly which off the top of my head, but I suspect it's when you set quota and reserved space equal), snapshots fail with an out-of-space message.

No quotas here:

Code:
daten                  quota                 none                   default
 

Kostya Berger

Do you have quotas or reserved space set up on the dataset? I noticed that in certain configurations (I can't remember exactly which off the top of my head, but I suspect it's when you set quota and reserved space equal), snapshots fail with an out-of-space message.
Yes, I've set both, and equal. That thought came to me too... I'll try unsetting them and see what happens.
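If that theory holds, the arithmetic from my listing above fits: zroot/WORK-NEW already shows USED of 300G (45.4G of data plus 255G of unused refreservation), i.e. it sits exactly at its quota, and a snapshot, which pushes USEDREFRESERV up toward the full reservation, has no headroom left to grow USED. To unset them, something like this (a sketch; the exact properties depend on what the GUI actually set, quota/refquota vs. reservation/refreservation):
Code:
zfs set quota=none zroot/WORK-NEW
zfs set refreservation=none zroot/WORK-NEW
zfs set quota=none zroot/Work-Foto-Video
zfs set refreservation=none zroot/Work-Foto-Video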
 