Encryption options

Status
Not open for further replies.

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
I have a number of documents I currently keep in a 50GB TrueCrypt image under Windows.

I understand that zfs supports encryption but only at the disk level. What are my options for managing this data and being able to take advantage of the snapshotting and remote replication abilities of zfs whilst still keeping it protected?

i
 

Knowltey

Patron
Joined
Jul 21, 2013
Messages
430
My FreeNAS currently isn't strong enough to handle full-disk encryption without a large performance impact, so what I'm actually doing is keeping a few TrueCrypt containers on the FreeNAS and letting the client computer handle the en/decryption. On the FreeNAS side it's treated like any other file, and ZFS makes sure its integrity stays intact as it does for everything else.

Basically you can just take said 50GB TC volume and move it over to the NAS if you'd like and just mount it from there.
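If you go that route, the container can be mounted straight off the share from the Windows side. A rough sketch using TrueCrypt's documented Windows command-line switches; the UNC path, container name, and drive letter here are made up:

```shell
:: Hypothetical share path and drive letter. /v names the volume to
:: mount, /l the drive letter to mount it as, /q exits the UI after
:: mounting instead of leaving the TrueCrypt window open.
"C:\Program Files\TrueCrypt\TrueCrypt.exe" /v \\freenas\share\container.tc /l X /q
```

TrueCrypt will then prompt for the password and expose the decrypted contents as drive X:, while the NAS only ever sees reads and writes to the ciphertext file.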
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
I understand that zfs supports encryption but only at the disk level. What are my options for managing this data and being able to take advantage of the snapshotting and remote replication abilities of zfs whilst still keeping it protected?
FreeBSD ZFS does not support encryption. When you enable encryption, FreeNAS inserts the geli encryption layer between ZFS and the disks (http://www.freebsd.org/cgi/man.cgi?query=geli). This means that ZFS is not aware of the encryption. However, snapshots are encrypted, because they are stored on the same encrypted physical devices. Regarding replication, ZFS itself generates unencrypted replication streams, but FreeNAS uses SSH to encrypt the stream when sending it to the destination.
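To make the layering concrete, here is a sketch of what you'd see on such a system (the pool name is made up):

```shell
# geli sits *below* ZFS: the pool is built on the decrypted .eli
# devices, so everything ZFS writes -- data and snapshots alike --
# reaches the physical disks already encrypted.
zpool status tank        # vdevs show up as e.g. gptid/xxxx.eli
geli status              # lists the active geli-encrypted providers
```

ZFS itself never sees ciphertext; it reads and writes plaintext blocks and geli transparently encrypts them on the way to disk.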
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
My FreeNAS currently isn't strong enough to handle full-disk encryption without a large performance impact, so what I'm actually doing is keeping a few TrueCrypt containers on the FreeNAS and letting the client computer handle the en/decryption. On the FreeNAS side it's treated like any other file, and ZFS makes sure its integrity stays intact as it does for everything else.

Basically you can just take said 50GB TC volume and move it over to the NAS if you'd like and just mount it from there.


What happens when you snapshot this 50GB file and then replicate it to a remote, though?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It'll be unencrypted.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The TC file will be in its unencrypted form. That is, it won't be encrypted with the geli encryption. It'll just be another file. The contents will be protected by TC.

As for snapshotting, a snapshot references either the whole file or just the changed blocks, depending on what snapshots you already have. That's something you should understand from the snapshot feature explanation in the manual.
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Yes, if I mount it and directly rewrite. But if I'm using it as a backup box I'll have to rsync it back. It's not clear to me what happens then. I suspect I need dedup on? And what about replication?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Ok, stop. Now you are just throwing around words. First you said replication, now rsync and dedup.

Slow down hotshot. Go check out the manual and/or Google. This stuff is explained in detail! I'm not going to retype all that stuff. All this stuff you are asking I answered myself with Google and 20 minutes of my time in 2012.
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
I know basically how it works. I've done my research. I run zfs on linux. What I /don't/ know, and what I'm looking for here, is some best practice guide to dealing with encrypted data for a Windows machine. All options are open at the moment so expect some thrash with ideas.

And seriously, if you're going to take umbrage cyberjock, best you don't get involved with this conversation.

i
 

Cupcake

Dabbler
Joined
Jan 1, 2014
Messages
42
Like you said, you have three options here (two of which make sense):
  1. Don't encrypt the FreeNAS disks and drop the TrueCrypt file on it just like any other file. Snapshots will work, of course, because as far as FreeNAS cares it's just another file.
  2. Encrypt the whole volume on FreeNAS. This way everything you put on it will be encrypted. The downside is that you'll have to unlock the volume by hand each time you restart the server (and whoever else needs it must know how to do that, or rely on you), and of course performance drops. How much it drops depends on your hardware. I don't notice anything, since I'm accessing my NAS over wifi and my speed is limited to 20mb/s.
  3. Combine 1+2, but that doesn't really make much sense I guess...
Deduplication is useless for you if you use TrueCrypt, since it is only one file which changes every time you access it. Deduplication is useful if you have the same file stored multiple times on your volume.

One scenario where you *might* use it: you go with option 2 and use geli encryption, and when you do backups you always copy all the files to a new folder (with a timestamp, for instance) regardless of which files actually changed. You'd end up with the same files stored multiple times, and only in that scenario would deduplication save you storage space, since each file is then physically stored only once no matter how often you copy it. However, using dedup AND encryption will put some heavy load on your FreeNAS system; I wouldn't really recommend it.
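For that timestamped-backup-folders scenario, dedup is enabled per dataset. A sketch with a made-up pool and dataset name, plus the RAM caveat that comes up later in this thread:

```shell
# Hypothetical dataset name. Dedup applies only to data written
# after it is switched on; existing blocks are not retroactively
# deduplicated.
zfs set dedup=on tank/backups
# The dedup table (DDT) must stay in RAM to perform acceptably; a
# common rule of thumb is around 5 GB of RAM per TB of deduped data.
zpool status -D tank     # -D prints dedup table statistics
```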
 

Knowltey

Patron
Joined
Jul 21, 2013
Messages
430
It will handle the TrueCrypt file just like any other file, because as far as the NAS is concerned it is just another file. It'll handle any changes to it exactly how it would handle, for example, you going into a Word document and changing a word.
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
My evolving thoughts...

I'm /probably/ not going to mount the TC file direct from the NAS box as I don't think I can guarantee access to it all the time I need it. Ditto many other files.

Ideally I'd like to have Windows and a FreeNAS on the same physical machine to fix that issue but that seems pretty much an idea on the bleeding edge right now and I don't want to hang there. So second choices are some kind of auto-synced briefcase/"Windows offline files" type arrangement, or rsyncing stuff as required if needs must.

Unless TC is doing stuff to hide /where/ a file is being updated (unlikely, I think, as there's an explicit warning about loss of deniability if an opponent is able to track container changes over time, an issue which does not concern me), then rsyncing should involve just writes of the changed data areas, keeping the total data to be snapshotted and remotely replicated small.

But if it doesn't, why is dedup really not going to help? If the changes within the TC file are minimal, most of the 50GB will be duplicate blocks, surely?

What I don't understand /for sure/ is what gets sent to the remote system under these circumstances. If the whole 50GB gets rewritten but most of it is duplicated, is the whole 50GB still transferred to the remote? If so, rsyncing to the remote sounds a better way to go than sending snapshots?
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
My evolving thoughts...
But if it doesn't, why is dedup really not going to help? If the changes within the TC file are minimal, most of the 50GB will be duplicate blocks, surely?

What I don't understand /for sure/ is what gets sent to the remote system under these circumstances. If the whole 50GB gets rewritten but most of it is duplicated, is the whole 50GB still transferred to the remote? If so, rsyncing to the remote sounds a better way to go than sending snapshots?


Just jumping in, didn't read the whole thread.

I don't know how you would assume that the TC container only contains duplicated blocks - one of the principles of ciphertext in an encryption scheme is that it should be indistinguishable from random data. So even if the container is empty, it will look like random data from the outside.

Also, why would you need deduplication in the first place if you are only talking about a single 50 GB file? Hard drives nowadays hold up to 4 TB each, and deduplication consumes a considerable amount of CPU power and RAM to work; only a very few select people are willing to make this trade-off, in very special scenarios.

EDIT:
If you use ZFS replication, only snapshots are replicated. Snapshots contain only the differences from the previous snapshot, so they should be pretty small when you only change parts of the container (much like rsync works).
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Unless TC is doing stuff to hide /where/ a file is being updated (unlikely I think as there's an explicit warning about loss of deniability if an opponent is able to track container changes over time, an issue which does not concern me), they rsyncing should involve just writes of the changed data areas, keeping the total data to be snapshotted and remote replicated small.
If you want to have some chance of this actually working you need to run rsync with the --inplace switch. rsync normally creates a new copy of the file, updates it, and deletes the old version; that would make the snapshots always contain a complete copy of the old file. Read the description of --inplace here: www.freebsd.org/cgi/man.cgi?manpath=freebsd-release-ports&query=rsync (the end of that section even contains a comment about snapshots on copy-on-write filesystems).
But if it doesn't why is dedup really not going to help? I think if the changes within the tc file are minimal most of the 50GB will be duplicate blocks, surely?
Dedup would work even without --inplace (it could dedup the blocks of the "new" version of the container file against the unchanged blocks of the "old" file that are still present in the snapshots), but I hope you are aware of the memory requirements for dedup (check the FreeNAS manual). Do not use dedup unless you are 100% sure that you have enough memory, or your pool may become unmountable, and there is no way to un-dedup data once it has been written.
What I don't understand /for sure/ is what gets sent to the remote system under these circumstances. If the whole 50GB gets rewritten but most is duplicated is the whole 50GB still txed to the remote? If so, rsycing to the remote sounds a better way to go then sending snapshots?
With incremental replication only the blocks that changed since the last replication will be transferred.
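A sketch of what incremental replication looks like at the command line (dataset, snapshot, and host names are all made up); the first send is a full copy, and every -i send afterwards carries only the blocks that changed between the two snapshots:

```shell
# One-time full send of the initial snapshot to the remote:
zfs send tank/docs@monday | ssh repl@backuphost zfs receive backup/docs
# Subsequent incremental send: only blocks changed between @monday
# and @tuesday cross the wire.
zfs send -i tank/docs@monday tank/docs@tuesday | ssh repl@backuphost zfs receive backup/docs
```

FreeNAS's periodic-snapshot-plus-replication tasks automate this pairing for you.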
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Just jumping in, didn't read the whole thread.

I don't know how you would assume that the TC container only contains duplicated blocks - one of the principles of ciphertext in an encryption scheme is that it should be indistinguishable from random data. So even if the container is empty, it will look like random data from the outside.

Also, why would you need deduplication in the first place if you are only talking about a single 50 GB file? Hard drives nowadays hold up to 4 TB each, and deduplication consumes a considerable amount of CPU power and RAM to work; only a very few select people are willing to make this trade-off, in very special scenarios.

EDIT:
If you use ZFS replication, only snapshots are replicated. Snapshots contain only the differences from the previous snapshot, so they should be pretty small when you only change parts of the container (much like rsync works).

The TC container itself won't contain dupe blocks, but they (probably? maybe?) will be duplicates of the previous snapshot of that container. Even though storage is cheap, it seems very wasteful to burn 50GB for a few kB of updated spreadsheet.

Remember, the TC file is not being written directly, so unless dedup is on or the file is rsynced, the snapshot will be the full 50GB, no?
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
If you want to have some chance of this actually working you need to run rsync with the --inplace switch. rsync normally creates a new copy of the file, updates it, and deletes the old version; that would make the snapshots always contain a complete copy of the old file. Read the description of --inplace here: www.freebsd.org/cgi/man.cgi?manpath=freebsd-release-ports&query=rsync (the end of that section even contains a comment about snapshots on copy-on-write filesystems).

Dedup would work even without --inplace (it could dedup the blocks of the "new" version of the container file against the unchanged blocks of the "old" file that are still present in the snapshots), but I hope you are aware of the memory requirements for dedup (check the FreeNAS manual). Do not use dedup unless you are 100% sure that you have enough memory, or your pool may become unmountable, and there is no way to un-dedup data once it has been written.

With incremental replication only the blocks that changed since the last replication will be transferred.


rsync --inplace does indeed sound like the solution, thanks. If necessary I can handle the TC container differently from the other offline files.

i
 

Xris

Cadet
Joined
Mar 22, 2014
Messages
1
It will handle the TrueCrypt file just like any other file, because as far as the NAS is concerned it is just another file. It'll handle any changes to it exactly how it would handle, for example, you going into a Word document and changing a word.
Hi,
I am just experiencing a curious problem after installing FreeNAS 9.2.1.2; my client is a Windows 7 x64 computer:
Containers mounted with TrueCrypt 7.1a come up read-only. I have tried to create a new volume on the FreeNAS disk, but it doesn't work; the file cannot be created.
On the other hand, I can write ordinary files to the FreeNAS disks. With FreeNAS 9.2.1 everything was working fine!

What kind of problem do I have?
Will my problem be solved if I go back to FreeNas 9.2.1?
 