Fiddlehead
Cadet
- Joined
- Sep 17, 2015
- Messages
- 4
I'm trying to get a replication task set up in v9.3. I don't plan on storing snapshots, though. The NAS only holds daily backups of another filesystem, which has so much turnover as to make snapshots very bulky and impractical. I only need a stable image of the filesystem to replicate to a remote secondary device; it can then be promptly removed from the primary FreeNAS.
Right now, I have a daily snapshot task with a two-hour window, snapshots set to expire after 10 hours, and a replication task scheduled two hours after the snapshot. As I understand it, the schedule would be:
22:00 Take a snapshot - auto-20150101.2000-12h
24:00 Replicate auto-20150101.2000-12h to secondary
8:00 Snapshot auto-20150101.2000-12h expires
22:00 Take new snapshot auto-20150102.2000-12h
22:xx Remove snapshot auto-20150101.2000-12h
24:00 Replicate auto-20150102.2000-12h to secondary
Snapshots are taken and replicated as expected, but old snapshots are never removed. So after a few full filesystem turnovers, the primary NAS fills up completely.
During the snapshot operation, I see a new snapshot created and a hold put on it, followed by this line in /var/log/messages every minute for the rest of the snapshot window:
Code:
srvrbkp01 autosnap.py: [tools.autosnap:58] Popen()ing: /sbin/zfs get -H freenas:state zpool1/nas_bkp_1@auto-20150101.2000-12h
I haven't come across any errors, just that operation repeated on the expired snapshot until the snapshot window ends and the replication starts.
Is there something I'm not understanding here about how snapshots are cleaned up?
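In case it helps, here's roughly how I've been checking things by hand. The dataset and snapshot names are taken from the log line above; freenas:state is FreeNAS's own bookkeeping property, and my (possibly wrong) assumption is that a lingering hold or a stale freenas:state is what's blocking the cleanup:

Code:
# List all snapshots on the backup dataset, then show any holds and
# the freenas:state property on each one.
for snap in $(zfs list -H -t snapshot -o name -r zpool1/nas_bkp_1); do
    echo "== $snap =="
    zfs holds "$snap"                   # a remaining hold would block destruction
    zfs get -H freenas:state "$snap"    # the property autosnap.py keeps polling
done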