Creating Degraded RAIDZ2 with Encryption, Deduplication and Compression

Status
Not open for further replies.

Paul Martin

Dabbler
Joined
Nov 13, 2013
Messages
10
I have just created a FreeNAS server for my home network to replace my previous server (running Debian Linux). I want to create a RAIDZ2 with 5x2TB hard drives. I also want to enable encryption, deduplication and compression. My hardware is as follows:

2GHz Core 2 Duo
8GB DDR2 RAM
32GB SSD Cache Drive
5x2TB Hard Drives

I believe that this should be decent enough to at least not crash. I care more about storage space vs performance but would like to stream compressed HD video over gigabit ethernet without stuttering. Encryption is more important than either deduplication or compression since I may need to store confidential work data at some point. One problem is that all of my data is currently on one of the 2TB hard drives. I want to create a degraded RAIDZ2 and copy all of my data to it. Then I want to nuke the 2TB hard drive and add it to the zpool.

My questions are:
1. Is enabling all three of these options feasible on my current setup (the requirements are that it has to be fast enough to stream compressed HD video and it has to be stable)?
2. Is it possible to enable these options with a degraded RAIDZ2 array? Is there any documentation on this?
3. How will the three features interact? It appears that ZFS *should* compress files before encrypting them. If it does the reverse, there will obviously be a performance hit without any storage savings.

I had no trouble finding information about any of these features individually, the trouble is in figuring out how they all interact.
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
1. No. For deduplication you need a lot more RAM or you'll end up with an unmountable pool. Encryption and compression mostly depend on your CPU; be sure to run some benchmarks after creating a pool to see if you are satisfied with the speed. Encryption speed benefits greatly from AES-NI support on the CPU. Without it, expect 30-40% performance degradation.

From the manual:
If you plan to use ZFS deduplication, a general rule of thumb is 5 GB RAM per TB of storage to be deduplicated. Note that there is no upper limit to how much RAM you may need for deduplication. If you do not have enough RAM you may not be able to mount your pool on bootup. In this case, the only solution is to install more RAM or restore from backup. Very few users will find deduplication provides space savings over using compression.

Also be sure to read Volumes#Deduplication (recommends 8 GB RAM per TB).
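Applying those two rules of thumb to the hardware in the first post gives a rough sense of scale. This is a hypothetical worked example: it assumes the 5x2TB RAIDZ2 yields roughly 6 TB of usable space (3 data disks' worth), and that the whole pool would be deduplicated.

```python
# Worked example of the dedup RAM rules of thumb quoted above.
# Assumption: a 5x2TB RAIDZ2 leaves ~3 data disks, i.e. roughly 6 TB usable.
usable_tb = 6

gb_per_tb_manual = 5   # manual's rule: 5 GB RAM per TB deduplicated
gb_per_tb_wiki = 8     # Volumes#Deduplication: 8 GB RAM per TB

ram_low = usable_tb * gb_per_tb_manual
ram_high = usable_tb * gb_per_tb_wiki
print(f"Dedup table RAM estimate: {ram_low}-{ram_high} GB")
# → Dedup table RAM estimate: 30-48 GB
```

Either way, the estimate lands well above the 8 GB in the proposed build.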

2. Why would you want to do that? In any case, encryption can only be activated at pool creation; for the technical reasons, search the forums. Compression and dedup can be enabled at any time, but only affect new writes.

EDIT:
3. FreeNAS utilizes geli for encryption. You might want to read up on that and answer your question yourself.
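For the record, a rough sketch of what the geli layer looks like from the command line. This is illustrative only: the device names and key path are hypothetical, and the FreeNAS GUI normally manages these steps for you. The relevant point for question 3 is that geli sits *below* ZFS, so ZFS compresses each block before geli ever sees (and encrypts) it.

```shell
# Illustrative only -- device names and key path are hypothetical.
# geli sits below ZFS: ZFS compresses a block first, then geli encrypts it.

# Initialize a geli provider on a partition, with a key file:
geli init -s 4096 -K /data/geli/pool.key /dev/ada1p1

# Attach it; this creates the encrypted device /dev/ada1p1.eli:
geli attach -k /data/geli/pool.key /dev/ada1p1

# The pool is then built on the .eli devices, e.g.:
# zpool create tank raidz2 ada1p1.eli ada2p1.eli ada3p1.eli ...
```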

EDIT 2:
Also I don't think you can create a degraded pool.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Please don't use an L2ARC with only 8 gigs of RAM. You're just stealing memory from your already small ARC in order to keep track of what's on the L2ARC. In a home situation, very few people actually need an L2ARC, even if they have the RAM to support it. Adding an L2ARC with only 8 gigs of RAM will probably cost you performance instead of gaining any.

It is possible to create a degraded zpool, but you have to create it from the command line completely manually. I've done it before, but never with encryption on top. You'd have to figure out how FreeNAS creates its encryption layer, and integrate that with the other command line steps to create the pool. It would be easier to back up and restore to a zpool created in the GUI.
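The usual manual trick is to stand in a sparse file for the missing disk, then offline it. A minimal sketch, assuming hypothetical device names and an unencrypted pool (the geli layer would have to be integrated separately, as noted above):

```shell
# Sketch only -- device names are hypothetical; run at your own risk.
# Create a sparse placeholder file the same size as a real member disk:
truncate -s 2T /tmp/fake0

# Build the RAIDZ2 with four real disks plus the placeholder:
zpool create tank raidz2 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3 /tmp/fake0

# Offline and delete the placeholder; the pool is now DEGRADED but usable:
zpool offline tank /tmp/fake0
rm /tmp/fake0

# Later, after copying the data off the old 2TB disk and wiping it:
zpool replace tank /tmp/fake0 /dev/ada4
```

Until the replace finishes resilvering, the pool has no redundancy to spare beyond a single additional disk failure, so the data copy is a risky window.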

As warri has said, enabling dedupe would be a very bad idea. You want a minimum of 5 gigs of RAM for every TB of data, and I see the manual is recommending 8. There's no way to know for sure how much you'll need until you have the data copied. And even then, if you end up not having enough RAM, you can potentially lose access to your data until you do have enough. Since there's no upper bound on RAM requirements for dedupe, you could end up having to put together a machine with 128 gigs of RAM just to access your data.

For typical media / home use, why would you want dedupe? Nothing is going to dedupe anyway, as all movies, etc, will be different. Also along the same lines, why enable compression? Media files (movies, pictures, etc) are already compressed. You'll likely see very little gain from enabling compression. But you'll increase cpu usage, and slow down the pool by doing it.
 

jonnn

Explorer
Joined
Oct 25, 2013
Messages
68
Interesting idea on creating a degraded array, might take advantage of that.

FreeNAS + ZFS + dedup is a massive pig. You will need a new rig, preferably with 8 RAM slots.

Did I read somewhere that they are working on optimising dedup in zfs?
 

Paul Martin

Dabbler
Joined
Nov 13, 2013
Messages
10
Got it. I think I will avoid dedup and just use encryption and compression then, unless anyone has had success with a similar setup (8GB RAM + 32GB L2ARC). Dedup would have been nice since I'm planning to archive VMs on here, but I can always buy something bigger and better down the road if I eat up all my storage. Should I just not use the SSD in this system at all then?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Putting a 32GB L2ARC on will substantially stress the modest (~5GB) ARC you have available. L2ARC requires pointers to be stored in the ARC, so the amount cached in ARC is reduced (probably substantially). If you do not expect your pool to be very busy, the better choice is probably to avoid the L2ARC.
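A back-of-the-envelope estimate of that overhead. The per-record header size is an assumption here (on the order of ~180 bytes in ZFS of this era; it varies by version), and the point is mostly that small-block workloads blow up the header cost:

```python
# Rough estimate of ARC memory consumed by L2ARC record headers.
# Assumption: ~180 bytes of ARC per record cached on the L2ARC
# (version-dependent; treat the exact figure as illustrative).
l2arc_bytes = 32 * 2**30        # the 32 GB SSD from the original post
header_bytes = 180              # assumed per-record header size

for record_kib in (128, 16, 8): # large-file vs small-block workloads
    records = l2arc_bytes // (record_kib * 2**10)
    overhead_mib = records * header_bytes / 2**20
    print(f"{record_kib:>3} KiB records: {overhead_mib:.0f} MiB of ARC for headers")
# → 128 KiB records: 45 MiB of ARC for headers
# →  16 KiB records: 360 MiB of ARC for headers
# →   8 KiB records: 720 MiB of ARC for headers
```

With a ~5 GB ARC, hundreds of megabytes of headers is a real bite out of the primary cache.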
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
This just has fail all over it, for all the reasons described by others.

Without AES-NI you can expect fairly poor performance. I won't discuss dedup since it's already covered. The L2ARC is just not a good idea with only 8GB of RAM. If you had 32GB of RAM, then maybe. Compression is going to hurt performance on a CPU that is 6+ years old.

This just isn't going to work out too well.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
As a general rule of thumb:

E-Penis and FreeNAS do not work well in combination. Please listen to Cyberjock et al, Mr. Martin.
 

Paul Martin

Dabbler
Joined
Nov 13, 2013
Messages
10
Thanks all for the replies.

I think that my best bet is to set up Debian with BTRFS (which supports compression) on an encrypted LVM and then run a deduplication package on top of that. I've had a similar setup in the past but figured I would give FreeNAS a try. The deduplication is relatively important to me since archiving VMs running similar operating systems means a lot of overlap between files. Plus, I have created degraded RAID6 arrays before using dm-raid so I know it's relatively easy to do.

I may not get all of the swanky features that ZFS supports but someday in the future I'll build something better and can play with it then. My storage needs will likely have grown at that point as well anyway.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you read around here a bit you'll learn that VMs of the same OS really don't dedup at all. ;) It has to do with how block structures work and how VMs work, etc etc.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It depends. If you're cloning VMs by actually copying a disk image, dedup is great, but you can also use ZFS snapshot and clone to start off without the heavyweight dedupe. ZFS is a copy-on-write filesystem, and you can leverage that. True, all writes after that point will result in newly allocated blocks even if they could have been dedupe'd.

It helps to know that ZFS dedupe works on ZFS blocks, not disk sectors, so duplicate detection may succeed less often than expected.
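The snapshot-and-clone approach can be sketched like this (dataset and snapshot names are hypothetical):

```shell
# Hypothetical dataset names -- sketch of sharing a golden VM image
# via ZFS copy-on-write, instead of relying on dedup.

# Snapshot the master image once:
zfs snapshot tank/vms/debian-base@golden

# Each clone initially shares every block with the snapshot;
# only blocks written after this point consume new space:
zfs clone tank/vms/debian-base@golden tank/vms/debian-vm1
zfs clone tank/vms/debian-base@golden tank/vms/debian-vm2
```

The space sharing is established up front at clone time, which is why later divergent writes are never re-deduplicated.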
 