11.3 - "Phantom" snapshot task (auto-*)

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Hi all,

I have a snapshot task defined for all the VM virtual disks that are stored on my SSD pool (mirror). Please see the attached screenshot for the configuration in the UI.

As a result I get:
Code:
zfs list -t snap -r ssd/vms
[...]
ssd/vms/windows-pmh-disk0@auto-2020-02-04_20-00  5.08M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-04_21-00  4.92M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-04_22-00  4.80M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-04_23-00  4.91M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_00-00  6.12M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_01-00  6.12M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_02-00  5.22M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_03-00  7.04M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_04-00  1.96M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-20200205.0415-2w  1.95M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_05-00  5.18M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_06-00  5.06M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_07-00  4.75M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_08-00  5.03M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_09-00  36.5M      -  29.0G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_10-00  5.43M      -  29.1G  -
ssd/vms/windows-pmh-disk0@auto-2020-02-05_11-00  2.26M      -  29.1G  -


So far, so good. But wait, what is this doing in there?
Code:
ssd/vms/windows-pmh-disk0@auto-20200205.0415-2w  1.95M      -  29.0G  -

And I even get one of those each day at 04:15 with a retention of two weeks.
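The phantom ones are easy to pick out by their naming pattern: my task produces auto-YYYY-MM-DD_HH-MM names, while these use the old auto-YYYYMMDD.HHMM-&lt;lifetime&gt; scheme. A quick filter, just a sketch against my pool:

Code:
zfs list -H -t snap -o name -r ssd/vms | grep -E '@auto-[0-9]{8}\.[0-9]{4}-'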

I looked in the SQLite DB but did not find any task that might be "left over" and not shown in the UI. So who is taking snapshots of my datasets, and why?
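For anyone who wants to repeat that check, this is roughly how it can be done from the shell. /data/freenas-v1.db is the standard location of the config DB; storage_task as the table name for periodic snapshot tasks is an assumption based on the 11.x schema:

Code:
# inspect the table layout, then dump all defined periodic snapshot tasks
# (storage_task is assumed; use .tables to list the actual table names)
sqlite3 /data/freenas-v1.db ".schema storage_task"
sqlite3 /data/freenas-v1.db "SELECT * FROM storage_task;"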

Thanks,
Patrick

[Attached screenshot Bildschirmfoto 2020-02-05 um 11.16.59.png: the snapshot task configuration in the UI]

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
Hi,

I've set up two periodic snapshot tasks for one of my pools. One is daily with a one-week lifespan and the other is weekly with a four-week lifespan.
[Attached screenshot 1581166091154.png: the two periodic snapshot tasks]


The daily one ran last night at 01:00 as scheduled, but for whatever reason there is a second "auto-" snapshot executed at 04:15 (??).
[Attached screenshot 1581166282253.png: the resulting snapshot list]

Code:
storage/archive 6.09T   34.9T   192K    /mnt/storage/archive
storage/archive@daily-2020-02-08_01-00  0       -       192K    -
storage/archive@auto-20200208.0415-1w   0       -       192K    -
storage/backup  777G    34.9T   777G    /mnt/storage/backup
storage/backup@daily-2020-02-08_01-00   0       -       777G    -
storage/backup@auto-20200208.0415-1w    0       -       777G    -
storage/other   357G    34.9T   357G    /mnt/storage/other
storage/other@daily-2020-02-08_01-00    0       -       357G    -
storage/other@auto-20200208.0415-1w     0       -       357G    -

Digging in the crontab I see this:
Code:
15 4 * * * root /usr/local/bin/python /usr/local/www/freenasUI/tools/autosnap.py > /dev/null 2>&1
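A quick sanity check, assuming the interpreter and script path from the crontab entry above, is to run the job by hand and see whether a fresh auto-* snapshot shows up:

Code:
# run the autosnap job manually, then list the most recent snapshots
/usr/local/bin/python /usr/local/www/freenasUI/tools/autosnap.py
zfs list -t snap -o name,creation -r storage | tail -5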

A quick search for autosnap.py turns up this description:
Autosnap handles creating and deleting scheduled snapshots as well as kicking off replication.
Autosnap also takes care of deleting expired automatic snapshots.

But what I am missing is an explanation of why this snapshot is needed. If it is some sort of "reference" snapshot for the daily ones with a one-week lifespan, that does not make sense, because the first daily snapshot was taken before the automatic one.
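Comparing the creation timestamps of the two snapshots from the listing above makes that ordering explicit:

Code:
# read the creation property of both snapshots
zfs get -H -o name,value creation \
    storage/archive@daily-2020-02-08_01-00 \
    storage/archive@auto-20200208.0415-1w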

I am confused and would like to know what is happening under the hood here. The official docs say nothing...

Thanks !
 

NAK

Dabbler
Joined
Feb 5, 2020
Messages
16
I get the same strange snapshots. The names suggest they come from the tasks I set up, but they don't use the names I gave them, and they run at 04:15.
 

garfunkel

Dabbler
Joined
Jun 15, 2012
Messages
41
I have these also. I don't understand where they come from either.

I have hourly snapshots created with "auto-hourly..." in the name, and daily ones with "auto-daily" in the name. But for whatever reason, every morning at exactly 04:15 I get these "auto-" snapshots created (recursively). I've checked my config in sqlite3 and I can't find any reference anywhere to these excess snapshot tasks.

Maybe it's some strange bug?

 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
@Patrick M. Hausen I've merged our two threads as we have exactly the same question.

What I can add is that these are definitely not left over from 11.2, as I never had that version installed. I had 9.10.2, but I never used periodic snapshots there (only a few ad-hoc ones).
Moreover, I did not upgrade to 11.3; I did a fresh install, imported my pools, and deleted all of the existing snapshots (the old warden ones plus a few custom ones). No config import, everything was set up from scratch.

These auto-snapshots appeared after the new periodic snapshot task (with the legacy flag unchecked) was executed.

Considering the execution time, they are clearly triggered by autosnap.py. It also has something to do with the retention period of the snapshot tasks: I have a daily task with one week of retention and get "auto-*-1w", and in your case you have an hourly task with two weeks of retention aaaaand, sure enough, you have "auto-*-2w".
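Once these are confirmed to be orphans, they can be cleaned up by filtering on that legacy auto-YYYYMMDD.HHMM-&lt;lifetime&gt; naming pattern. A dry-run sketch for my pool (review the printed commands, then drop the echo to actually destroy):

Code:
zfs list -H -t snap -o name -r storage \
    | grep -E '@auto-[0-9]{8}\.[0-9]{4}-[0-9]+[a-z]$' \
    | xargs -n1 echo zfs destroy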


//Edit: OK, it is a bug: NAS-104955

marunjar

Cadet
Joined
Apr 7, 2018
Messages
3
I stumbled upon these 04:15 snapshots too; they were already mentioned in this thread:


There is also an issue for this bug that is already resolved, so the fix will be in 11.3-U1: https://jira.ixsystems.com/browse/NAS-104955
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
@NAK @marunjar Thanks! For whatever reason I did not find that one :/