Hi guys,
First of all, I'd be happy with a link as a reply that points me in the right direction. I tried to search for my specific question but didn't get great results, mainly because I don't know what terms to search for other than "backup changed files only". That turns up plenty of solutions like rsync, but they all share the same problem: they need the full dataset available on the backup side to know which files are new.
What I am looking for is a backup solution that looks at a specific dataset or folder, remembers its current state, and in future backups transfers ONLY the changed files to an external drive or another pool that does not contain the full dataset.
Why do I need that? I have a FreeNAS server in the office that currently holds over 14 TB of data. At home, I will have another FreeNAS server with the same data (copied over, replicated, whatever) as a backup.
Now, when I create or change files at the office, I want to get those changes to my server at home. I know this is possible directly over the internet; however, I have concerns about transfer speed (10 Mbit upload) and security (opening up my servers to the internet).
What I'd rather do is plug a hard drive into my office server, collect all the changed files on it, take it home, and copy the files over there.
As there are currently no hard drives bigger than 14 TB, I cannot fit the full dataset on this transfer drive, and the server's storage will probably grow to 20 TB in the next few months, so that approach is not feasible.
So, is there a way to create just an index of the current dataset that can later be used to determine which files need to be copied to the transfer drive?
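To make clearer what I mean by "index", here is a rough sketch of the idea (a hypothetical script I put together, not an existing tool): record each file's size and mtime in a small index file, and on the next run copy only files whose entry differs, so the transfer drive never needs the full dataset.

```python
import json
import os
import shutil

def build_index(root):
    """Map each relative file path under root to [size, mtime]."""
    index = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            st = os.stat(full)
            index[rel] = [st.st_size, st.st_mtime]
    return index

def copy_changed(root, index_file, dest):
    """Copy files that are new or changed since the saved index,
    then overwrite the index with the current state."""
    old = {}
    if os.path.exists(index_file):
        with open(index_file) as f:
            old = json.load(f)
    new = build_index(root)
    for rel, meta in new.items():
        if old.get(rel) != meta:  # new file, or size/mtime changed
            target = os.path.join(dest, rel)
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.copy2(os.path.join(root, rel), target)
    with open(index_file, "w") as f:
        json.dump(new, f)
```

The index itself is tiny compared to 14 TB of data, so it could live on the server while the transfer drive only ever holds the delta. I'm aware mtime/size comparison can miss edge cases that checksums would catch; this is just to illustrate the kind of mechanism I'm asking about.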
Thanks!
Tobi