How to Avoid Accidental Deletion of Dataset with Replication

Joined
Jul 3, 2015
Messages
926
It definitely sounds like a nice-to-have feature, but personally a first step towards stopping this from happening would be good, and I could live with manually deleting the occasional dataset on the backup system from time to time.
 

linchesterBR

Cadet
Joined
Feb 13, 2019
Messages
6
It definitely sounds like a nice-to-have feature, but personally a first step towards stopping this from happening would be good, and I could live with manually deleting the occasional dataset on the backup system from time to time.

Man, I've just run a test with two FreeNAS-11.2-RELEASE-U1 servers replicating to a third (backup) server. Both primary servers had the option "Delete Stale Snapshots on Remote System" unchecked. I deleted most of the snapshots on both and, after replication, those snapshots remained on the backup server.

Therefore, the only thing missing in this case is a retention feature on the backup server side.
 
Joined
Jul 3, 2015
Messages
926
Man, I've just run a test with two FreeNAS-11.2-RELEASE-U1 servers replicating to a third (backup) server. Both primary servers had the option "Delete Stale Snapshots on Remote System" unchecked. I deleted most of the snapshots on both and, after replication, those snapshots remained on the backup server.

Therefore, the only thing missing in this case is a retention feature on the backup server side.
Now what happens if you delete the dataset? Whoops, that's the issue.
 

linchesterBR

Cadet
Joined
Feb 13, 2019
Messages
6
Now what happens if you delete the dataset? Whoops, that's the issue.

When I deleted the dataset on the primary server, all snapshots, periodic snapshot tasks, and replication tasks related to it disappeared. The replication therefore stopped, and all snapshots and the dataset remain on the backup server.
 
Joined
Jul 3, 2015
Messages
926
OK, so my issue, and I guess most people's issue relating to this, is when you snapshot a parent dataset and select recursive. If you then delete a sub-dataset, it is wiped from the backup. Again, feel free to try.

It makes sense that if you delete the only dataset you are snapping, it will in turn remove the snapshot schedule and therefore the replication, but that is not the issue here.
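A minimal sketch of the hazard described above, assuming a pool named tank with a recursive snapshot task on the parent dataset (all names here are illustrative, not from the thread):

```shell
# On the primary: a recursive snapshot covers the parent and every child
zfs snapshot -r tank/parent@auto-2019-02-13
# Replication sends the whole tree, including tank/parent/child, to the backup.

# Now delete a sub-dataset on the primary:
zfs destroy -r tank/parent/child

# The next recursive snapshot no longer includes tank/parent/child, so a
# replication task configured to remove stale data on the remote side will
# delete the child dataset (and its snapshots) on the backup server as well.
```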
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
You can use zfs hold to prevent accidental dataset deletion.

How to
Create the source dataset, snapshot it with an explicit name, and then lock the snapshot
Code:
zfs create tank1/father/son
zfs snap tank1/father/son@LOCKED
zfs hold LOCKED tank1/father/son@LOCKED

Run the replication, then lock the transferred snapshot on the remote server
Code:
zfs hold LOCKED tank1/father/son@LOCKED

If you want to set up the snapshot and locking recursively, use the -r option.
Code:
zfs snap -r tank1/father/son@LOCKED
zfs hold -r LOCKED tank1/father/son@LOCKED

Results:
  • Both datasets (source and remote) are protected against accidental deletion.
  • It is better to snapshot when the dataset is empty, otherwise the space used by @LOCKED will grow as the dataset is modified.
  • Whatever happens on the source server, recursive replication can never delete the remote child dataset, because zfs destroy -r tank1/father/son will return EBUSY on the remote server. In particular, all remote snapshots will be preserved.
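For completeness, if you later do want to delete a protected dataset, the hold must be released first. A sketch, using the same dataset names as the how-to above:

```shell
# While the hold is in place, destroy fails (EBUSY / "dataset is busy"):
zfs destroy -r tank1/father/son

# Release the hold (use -r if it was applied recursively), then destroy:
zfs release -r LOCKED tank1/father/son@LOCKED
zfs destroy -r tank1/father/son
```

This two-step requirement is exactly what makes the scheme safe: an automated replication task will hit the error and stop, while a human can still deliberately release and delete.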
 
Last edited:
Joined
Jul 3, 2015
Messages
926
It sounds like a nice idea, but what if you have loads of scheduled snapshots that you want to auto-age out? If you put holds on them, then they can't be aged out.
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
Yes, the hold is on a special empty snapshot named LOCKED that exists only to prevent zfs destroy -r from succeeding.
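Because the hold sits only on the @LOCKED snapshot, scheduled snapshots are untouched and can still be aged out normally. You can confirm which holds exist with zfs holds (a sketch; the output columns are illustrative):

```shell
# List user holds on the guard snapshot only; periodic snapshots carry none:
zfs holds tank1/father/son@LOCKED
# NAME                     TAG     TIMESTAMP
# tank1/father/son@LOCKED  LOCKED  Wed Feb 13 ...

# A scheduled snapshot can still be destroyed by the retention policy:
zfs destroy tank1/father/son@auto-20190213.0000-2w
```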

Putting a hold on the scheduled snapshots from the FreeNAS GUI would be a better feature, but it does not exist yet.
 
Last edited:
Joined
Jul 3, 2015
Messages
926
OK, thanks. It's an interesting idea; I'll give it a try and report back.
 