It’s a nice idea. I’d like to try it in practice and think about how to automate it better.
> Ok, so this is a manual ad-hoc suggestion?

This is a manual ad-hoc suggestion, yes. Create the job. Run it once. Hold the first snapshots on both sides. Done. If the datasets are highly dynamic, this is not an easy thing to manage: you would have to manually remove holds and create new ones periodically. You could easily script this with cron and a bash script if you wanted, but for backups I'd actually prefer the manual approach.
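For anyone who does want the cron route, here is a rough sketch of what such a script could look like. Everything in it is hypothetical: the dataset name, the hold tag, and the dry-run wrapper are placeholders for illustration, and the zfs commands are only printed until you set DRYRUN=0.

```shell
#!/bin/sh
# Hypothetical sketch of the cron version: rotate a hold tag onto the
# newest snapshot of one dataset. DATASET and TAG are placeholders.
# With DRYRUN=1 (the default) the zfs commands are printed, not executed.
DATASET="tank/data"
TAG="keep"
DRYRUN="${DRYRUN:-1}"

run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Newest snapshot of this dataset (empty if none exist yet).
NEWEST=$(zfs list -H -t snapshot -o name -S creation -d 1 "$DATASET" 2>/dev/null | head -n 1)

# Release our tag from any snapshot that still carries it, so old holds do
# not pile up and pin space forever...
for SNAP in $(zfs list -H -t snapshot -o name -d 1 "$DATASET" 2>/dev/null); do
    if zfs holds -H "$SNAP" 2>/dev/null | awk -v t="$TAG" '$2 == t { f = 1 } END { exit !f }'; then
        run zfs release "$TAG" "$SNAP"
    fi
done

# ...then pin the newest snapshot so pruning or a rollback cannot destroy it.
if [ -n "$NEWEST" ]; then
    run zfs hold "$TAG" "$NEWEST"
fi
```

A weekly cron entry such as `0 3 * * 0 /root/rotate-hold.sh` (path again a placeholder) would then rotate the hold automatically.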
> I guess this won't help the situation the OP has raised around being hijacked and someone letting rip and rolling back all the datasets?

Let's assume the source side is compromised and the attacker took that action: the data on that system is fucked. That's why we have a backup system, which should have a different root password and/or 2FA.
> Until there is an option where source changes (such as a rollback, in this case) do not affect the destination snapshots.

That is fundamentally incompatible with the very concept of a snapshot. The entire purpose of ZFS send is to exactly replicate snapshots between two pools or systems.
Ok, so this is a manual ad-hoc suggestion? I guess this won't help the situation the OP has raised around being hijacked and someone letting rip and rolling back all the datasets?
Unless they zfs hold one snap on every dataset on both sides?
> This is feasible with the command-line. I use it myself. It's not complex to "once in a while" use hold and release to at least offer some sort of "safeguard" against accidents or unintentional snapshot destruction.

Then you have to remember to release them from time to time so they don't run out of space?
> To be honest, I don't think any of this matters. If someone gets root access to your TrueNAS box, they can just destroy everything on both sides. (I say both sides, because they have de facto access via the stored SSH keys on the server.)

Yeah, that's fair enough. Hence my point about 2FA. Allowing someone to easily get root access to your box is a non-starter FOR ANY backup software...
> Also, FWIW, you probably should not run your TrueNAS with SSH enabled outside of troubleshooting.

No SSH? So no command line from a client terminal. No rsync over SSH. You lose quite a bit by disabling SSH on TrueNAS.
> No SSH? So no command line from a client terminal. No rsync over SSH. You lose quite a bit by disabling SSH on TrueNAS.

I edited my post to clarify the source/destination logic.
> To be honest, I don't think any of this matters. If someone gets root access to your TrueNAS box, they can just destroy everything on both sides. (I say both sides, because they have de facto access via the stored SSH keys on the server.)

Please help me understand what you mean. If you are replicating via the pull option, the only key you must enter on your source TrueNAS box is the public key; the source box doesn't have to have SSH access to the destination box. I would never use the push option if I cared about the data being protected on the destination. Now, if you do not rely on ZFS replication for backup (that is handled by another solution) and you are just using it for replication, i.e. quick access to the data in specific failure situations, push is probably fine.
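To make the pull direction concrete, here is roughly what a pull replication reduces to under the hood. This is a hedged sketch, not the exact commands TrueNAS runs: hostnames, dataset names, and snapshot names are all placeholders, and the pipeline is built as a string and printed rather than executed.

```shell
#!/bin/sh
# Pull-style replication sketch: everything here runs ON the destination box.
# The destination logs in to the source; the source only stores the
# destination's public key and never needs a key for the destination.
# All names below are placeholders.
SRC_HOST="source.local"
SRC_DS="tank/data"
DST_DS="backup/data"
PREV="auto-2024-01-01"   # last snapshot common to both sides
NEW="auto-2024-01-02"    # newest snapshot on the source

# Incremental pull: ask the source for the delta between the two snapshots
# and receive it locally. Deliberately no -F on recv, so a rollback on the
# source cannot force the destination to destroy its newer snapshots.
PIPELINE="ssh $SRC_HOST zfs send -i @$PREV $SRC_DS@$NEW | zfs recv -u $DST_DS"
echo "$PIPELINE"   # printed for illustration; on real boxes, run the pipeline directly
```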
So really, any "solution" is only meant to mitigate accidents, mistakes, and unintentional quirks of replications and rollbacks. Using "hold" (or any feature, really) won't help much if someone gains root access.
This is feasible with the command-line. I use it myself. It's not complex to "once in a while" use hold and release to at least offer some sort of "safeguard" against accidents or unintentional snapshot destruction.
Still wish it was integrated into the GUI.
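Until it lands in the GUI, the manual cycle really is only a few commands. The snapshot name and tag below are made-up placeholders, and the commands are printed rather than executed so the sketch is safe to paste anywhere:

```shell
#!/bin/sh
# Manual hold/release cycle, sketched with placeholder names. While a hold
# is in place, zfs destroy on that snapshot fails with "dataset is busy",
# which is exactly the safeguard against accidental destruction.
SNAP="tank/data@manual-2024-01-01"
TAG="keep"

for CMD in \
    "zfs hold $TAG $SNAP" \
    "zfs holds $SNAP" \
    "zfs release $TAG $SNAP"
do
    echo "$CMD"   # run these directly on a real pool instead of echoing
done
```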
> That is fundamentally incompatible with the very concept of a snapshot. The entire purpose of ZFS send is to exactly replicate snapshots between two pools or systems.

If you are using the destination as a backup solution, I would say you would not want what happens to source ZFS snapshots to affect destination ZFS snapshots. That is a dangerous situation to be in; you'd better hope you have another backup solution as well.
Can you elaborate on the failure mode a bit? I want to make sure I am understanding the ask here.
> The source box doesn't have to have SSH access to the destination box.

That's true. Double dyslexia? I associate references to "source" as local and "destination" as remote.
> Taking things out of context for the sake of being argumentative is not constructive.

I fundamentally and completely disagree with you. Whether we are talking about ZFS replication or simply having a second copy of your data with rsync or some other methodology, that is literally the definition of a good backup strategy. Also, ZFS replication is one of the few solutions to this problem that preserves ACLs and xattrs...which is critical for some workloads.
For the particulars of ZFS replication, you can literally use the TN wizard in a "set-and-forget" way using the defaults, or you can tune it to do all sorts of advanced things like we have been talking about in this thread.
Pray tell, what "archival" software on a remote system offers better features, performance, reliability, simplicity, etc.? You talked above about VirtualBox and why its differential snapshots are great. This is exactly what ZFS snapshots do. Since replication literally relies on (requires) snapshots, it's not a dissimilar situation.
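On the ACL/xattr point, the reason nothing is lost is that ZFS send ships the stored blocks rather than re-reading files through the filesystem layer, and a recursive stream also carries dataset properties. A sketch with placeholder names (the pipeline is built as a string and printed, not run):

```shell
#!/bin/sh
# Recursive replication stream sketch; names are placeholders. -R includes
# descendant datasets, all their snapshots, and dataset properties; ACLs
# and xattrs come along because send copies the data exactly as stored.
SRC="tank/data@auto-2024-01-02"
DST="backup/data"

STREAM="zfs send -R $SRC | zfs recv -u $DST"
echo "$STREAM"   # printed for illustration only
```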
> ...following the 3-2-1 backup strategy...

The 3-2-1 backup strategy is no longer sufficient to keep your data safe. You need to think "bigger".
> The 3-2-1 backup strategy is no longer sufficient to keep your data safe. You need to think "bigger".

You want 4-3-2-1?