11.3: Periodic snapshots without lifetime in name!

vicmarto

Explorer
Joined
Jan 25, 2018
Messages
61
With a periodic snapshot task configured like this:

[screenshot: periodic snapshot task configuration]


I'm getting snapshot names without the -2w lifetime!!:
# zfs list -r -H -o name -t snapshot destino
destino@auto-2020-02-16_15-28
destino@auto-2020-02-16_15-29
destino@auto-2020-02-16_15-30
destino@auto-2020-02-16_15-31
destino@auto-2020-02-16_15-32
destino@auto-2020-02-16_15-33
destino@auto-2020-02-16_15-34
destino@auto-2020-02-16_15-35
destino@auto-2020-02-16_15-36
destino@auto-2020-02-16_15-37
destino@auto-2020-02-16_15-38
destino@auto-2020-02-16_15-39
destino@auto-2020-02-16_15-40
destino@auto-2020-02-16_15-41
destino@auto-2020-02-16_15-42
destino@auto-2020-02-16_15-43
destino@auto-2020-02-16_15-44

The expected behavior is something like: destino@auto-2020-02-16_15-44-2w.

Why? Can you replicate this behavior?
 

rapcore2

Cadet
Joined
Jul 5, 2017
Messages
5
After migrating to 11.3 I deleted the old periodic snapshot tasks and added new ones on 11.3.
I noticed the suffix with the snapshot expiration time is missing.
I have exactly the same issue as you described.
 

vicmarto

Explorer
Joined
Jan 25, 2018
Messages
61
Thanks, rapcore2, for your answer.

And are your periodic snapshots respecting the expected lifetime? It seems my systems are NOT destroying these old snapshots without the lifetime in their names...
 

rapcore2

Cadet
Joined
Jul 5, 2017
Messages
5
Hi vicmarto,
I installed a brand new FreeNAS 11.3 on VirtualBox (for testing purposes), with one pool - tank - and ran a test (periodic snapshot every 2 minutes, keep for 1 hour).

[screenshot: test periodic snapshot task, every 2 minutes, keep for 1 hour]

Here are my snapshots of tank:

root@freenas[~]# zfs list -t all
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                   1021M  14.0G    23K  none
freenas-boot/ROOT                              1021M  14.0G    23K  none
freenas-boot/ROOT/Initial-Install                 1K  14.0G  1018M  legacy
freenas-boot/ROOT/default                      1021M  14.0G  1018M  legacy
freenas-boot/ROOT/default@2020-02-16-19:21:37  2.93M      -  1018M  -
tank                                           4.73G  8.35G  1.05G  /mnt/tank
tank@auto-2020-02-16_20-00                         0      -    88K  -
tank@auto-2020-02-16_20-02                         0      -    88K  -
tank@auto-2020-02-16_20-04                         0      -    88K  -
tank@auto-2020-02-16_20-06                         0      -    88K  -
tank@auto-2020-02-16_20-08                         0      -    88K  -
tank@auto-2020-02-16_20-10                         0      -    88K  -
tank@auto-2020-02-16_20-12                         0      -    88K  -
tank@auto-2020-02-16_20-14                         0      -    88K  -
tank@auto-2020-02-16_20-16                         0      -    88K  -
tank@auto-2020-02-16_20-18                         0      -    88K  -
tank@auto-2020-02-16_20-20                         0      -    88K  -
tank@auto-2020-02-16_20-22                         0      -    88K  -
tank@auto-2020-02-16_20-24                         0      -    88K  -
tank@auto-2020-02-16_20-26                         0      -    88K  -
tank@auto-2020-02-16_20-28                         0      -    88K  -
tank@auto-2020-02-16_20-30                         0      -    88K  -
tank@auto-2020-02-16_20-32                         0      -    88K  -
tank@auto-2020-02-16_20-34                         0      -    88K  -
tank@auto-2020-02-16_20-36                         0      -    88K  -
tank@auto-2020-02-16_20-38                         0      -    88K  -
tank@auto-2020-02-16_20-40                         0      -    88K  -
tank@auto-2020-02-16_20-42                         0      -    88K  -
tank@auto-2020-02-16_20-44                         0      -    88K  -
tank@auto-2020-02-16_20-46                         0      -    88K  -
tank@auto-2020-02-16_20-48                       56K      -    88K  -
tank@auto-2020-02-16_20-50                       56K      -    88K  -
tank@auto-2020-02-16_20-52                       56K      -  9.11M  -
tank@auto-2020-02-16_20-54                       64K      -   874M  -
tank@auto-2020-02-16_20-56                       72K      -  1.65G  -
tank@auto-2020-02-16_20-58                       72K      -  1.85G  -
tank@auto-2020-02-16_21-00                         0      -  1.05G  -

It seems to work.
But I haven't tested it on my production FreeNAS yet.
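
If anyone wants to run the same check on a production box before trusting it, this plain zfs command (nothing FreeNAS-specific) lists a dataset's snapshots oldest first with their creation times, so you can see whether pruning keeps up with the configured lifetime:

root@freenas[~]# zfs list -t snapshot -d 1 -o name,creation -s creation tank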

In my opinion the old snapshot format was more intuitive.
Greetings :)
 

vicmarto

Explorer
Joined
Jan 25, 2018
Messages
61
The lifetime is also missing from the snapshot name... :(

tank@auto-2020-02-16_20-00 instead of tank@auto-2020-02-16_20-00-1h

Thanks for your testing.
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,177
The lifetime is also missing from the snapshot name... :(

tank@auto-2020-02-16_20-00 instead of tank@auto-2020-02-16_20-00-1h

Thanks for your testing.
Please open a bug report and post the number here.
Time to get it in -U1.
 

vicmarto

Explorer
Joined
Jan 25, 2018
Messages
61
It seems my systems are NOT destroying these old snapshots without the lifetime in their names...

After more testing, this is NOT true: "It seems my systems are NOT destroying these old snapshots without the lifetime in their names"; periodic snapshots ARE destroyed as expected. The bug is only in the name.
 

vicmarto

Explorer
Joined
Jan 25, 2018
Messages
61
Well, I don't know, we will let the developers decide this. My 11.3-RELEASE install says this after pressing the (?) button:

Snapshot name format string. The default is snap-%Y-%m-%d-%H-%M. Must include the strings %Y, %m, %d, %H, and %M, which are replaced with the four-digit year, month, day of month, hour, and minute as defined in strftime(3). A string showing the snapshot lifetime is appended to the name. For example, snapshots of pool1 with a Naming Schema of customsnap-%Y%m%d.%H%M and lifetime of two weeks have names like pool1@customsnap-20190315.0527-2w.

But the online help says this:

Snapshot name format string. The default is auto-%Y-%m-%d_%H-%M. Must include the strings %Y, %m, %d, %H, and %M. These strings are replaced with the four-digit year, month, day of month, hour, and minute as defined in strftime(3). Example: backups_%Y-%m-%d_%H:%M
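
Either way, since the schema uses strftime codes, plain date(1) can preview what a name will look like, for example:

# date +auto-%Y-%m-%d_%H-%M
auto-2020-02-16_15-44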
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
My 11.3-RELEASE install says this after pressing the (?) button
At a minimum a bug in the documentation, then. IMO, the change in snapshot naming isn't a bug; it appears to be a deliberate change.
 

rapcore2

Cadet
Joined
Jul 5, 2017
Messages
5
At a minimum a bug in the documentation, then. IMO, the change in snapshot naming isn't a bug; it appears to be a deliberate change.
OK. So how do I easily recognize the snapshot lifetime now, other than by pulling the zfs properties:
zfs get all tank@auto-2020-02-16_20-00
and looking for creation time:
tank@auto-2020-02-16_20-00 creation Sun Feb 16 16:20 2020 -
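
A slightly more targeted query (standard zfs options) if you only want the creation time is:

# zfs get -H -o value creation tank@auto-2020-02-16_20-00
Sun Feb 16 16:20 2020

But that still only tells you when the snapshot was created, not when it will expire.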
In my opinion the previous snapshot names were much, much better and more intuitive.
Greetings :)
 

seanm

Guru
Joined
Jun 11, 2018
Messages
570
At a minimum a bug in the documentation, then. IMO, the change in snapshot naming isn't a bug; it appears to be a deliberate change.

If it's deliberate, in what way is it better than the old naming scheme?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
If it's deliberate, in what way is it better than the old naming scheme?
If the devs' only intent in putting the lifetime in the name was to make it machine-readable, and they're obviously managing to get the machine to read that information in another way, then it isn't needed in the name any more. My suspicion is that they weren't really seeing (or thinking about) any value in that information being human-readable.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
I'd vote for keeping it in the "human readable" name.
REASON: If I'm looking at snapshots, I will know when something I'm depending on is going to get swept away.
If I need more time to recover from something, I could just rename the automatic snapshot to something else. I could then take as much time to figure things out as I need. Once I'm done, I can just zfs destroy the snapshot myself. If it is done in metadata, that isn't easy.
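
That workflow would look something like this (the snapshot and the new name are just examples):

# zfs rename tank@auto-2020-02-16_20-00 tank@keep-investigation
# zfs destroy tank@keep-investigation

Once the name no longer matches the task's naming pattern, the periodic task should leave it alone until I destroy it myself.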
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
OK. So how do I easily recognize the snapshot lifetime now, other than by pulling the zfs properties:

and looking for creation time:

In my opinion the previous snapshot names were much, much better and more intuitive.
Greetings :)
The creation time already matches the snapshot name. Is there a different property that lists the expiration date or retention time?
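
One way to check whether anything retention-related is recorded on the snapshot itself (I haven't verified what 11.3 actually sets; this is just the standard source filter for zfs get) is to list only the locally-set properties:

# zfs get -H -s local all tank@auto-2020-02-16_20-00

If nothing comes back, the retention presumably lives only in the task configuration in the middleware database.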
 

Fredda

Guru
Joined
Jul 9, 2019
Messages
608
If it's deliberate, in what way is it better than the old naming scheme?
With the new scheme the "previous versions" feature for restoring old versions of a file from snapshots via Windows Explorer will now work on all snapshots. In 11.2 you were only able to select from one of your snapshot tasks due to limitations of the shadow:format matching patterns.
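
If you're curious what matching pattern Samba actually gets, testparm (the standard Samba config dumper) should show the shadow_copy2 parameters the middleware generated for a share:

# testparm -s 2>/dev/null | grep -i 'shadow:'

In 11.2 that shadow:format value could only describe a single naming pattern, which is where the one-task-only limitation came from.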
 

Elavanis

Cadet
Joined
Feb 20, 2020
Messages
2
After more testing, this is NOT true: "It seems my systems are NOT destroying these old snapshots without the lifetime in their names"; periodic snapshots ARE destroyed as expected. The bug is only in the name.

I think it is only half working. After upgrading to 11.3 I ran into some issues with snapshots not being replicated. I was attempting to recreate my snapshots and replication tasks in the new system, and in the process I lost every snapshot older than 2 weeks. After that I rolled back and created two VMs to test with until I figured out what happened.

I do know that if you create a snapshot task with a one-year retention every hour, then delete that snapshot task and make a new one with a one-hour retention, it will delete the old snapshots older than an hour. It behaves as if the task finds anything that matches its naming pattern and deletes it if it is older than its retention period.
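
If that is how it works, then before deleting and recreating a task with a shorter retention it is probably worth protecting any snapshots you can't afford to lose. A standard ZFS hold makes zfs destroy fail until the hold is released (the snapshot name below is just an example):

# zfs hold keep tank@auto-2020-02-16_20-00
# zfs release keep tank@auto-2020-02-16_20-00

I haven't tested how the middleware reacts to a destroy that fails because of a hold, though.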
 