How robust should I make my backup server

Status
Not open for further replies.

tfast500

Explorer
Joined
Feb 9, 2015
Messages
77
I am moving my current server (i5 2500k - 24gb ddr3 No ECC - 6x3tb - Raidz2) to a Dell 8 Bay R510 64GB ECC
I plan on using the i5 server as a backup once I migrate over.
I just purchased 6 x 2tb for it.

My overall goal is to be able to replicate to the backup server so I can add 2 more 3tb drives and rebuild the pool on the new 8 bay Dell.

Currently my data takes up about 5tb

-> Should I configure the backup server in a RaidZ1 or RaidZ2 configuration?
I am assuming my storage capacity would be 9.5tb in raidz1 or 7.6tb in raidz2
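Back-of-the-envelope math for 6 x 2tb, in raw TB before ZFS overhead (the real usable numbers land a bit lower, which is roughly where estimates like 9.5tb/7.6tb come from):

```shell
# 6 drives of 2 TB each; RaidZ1 loses one disk to parity, RaidZ2 two.
# Raw TB only -- TiB conversion and ZFS overhead shave this down.
drives=6; size_tb=2
echo "raidz1: $(( (drives - 1) * size_tb )) TB raw"
echo "raidz2: $(( (drives - 2) * size_tb )) TB raw"
```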

Both would work to hold my current requirements. However if I grow I would rather have the 9.5tb available.

Would this be the suggested configuration since this is primarily only a backup of a RaidZ2, or would you guys just use RaidZ2 on the backup also?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Would this be the suggested configuration since this is primarily only a backup of a RaidZ2, or would you guys just use RaidZ2 on the backup also?
I would use at least RaidZ2 (if not RaidZ3). After all, if SHTF the last thing I would want is for my backup to be questionable. That is just me though...

*** Side note, been mulling over CrashPlan since others were mentioning it. May want to give that a look as well, it sounds like it would work nicely for an "Off Site" Backup for $60.00/year...
 

tfast500

Explorer
Joined
Feb 9, 2015
Messages
77
I would use at least RaidZ2 (if not RaidZ3). After all, if SHTF the last thing I would want is for my backup to be questionable. That is just me though...

*** Side note, been mulling over CrashPlan since others were mentioning it. May want to give that a look as well, it sounds like it would work nicely for an "Off Site" Backup for $60.00/year...
I guess that does make sense. I think I was looking at it the wrong way.

My upload is pretty slow, so I don't think I want to go that route. I have not looked at that service though. Good suggestion.

Sent from my Nexus 6P using Tapatalk
 
Joined
Feb 2, 2016
Messages
574
Go with RaidZ2. Don't worry about giving both source and target pools the exact same amount of disk space. Just make sure your entire expected backup set can fit within the target pool.

For best performance, you really don't want your pools more than 50% full. I aim to keep my production machines even lower. Spindles of storage are relatively cheap, right?

On my backup/replication host, I'm far less concerned with performance; I just need the data safe. On my replication host, I'll happily let my pools get 80% full, run less RAM than recommended and turn compression up to foolish levels (gzip9, for example) if I can cram more data into that 80%.

To that end, my backup host often has less than half the space of the primary and still has plenty of room for all the replicated data.

For example, I have a 12TB lz4 striped mirror pool on my primary host replicated to a 6TB gzip9 RaidZ2 pool on my backup host. The backup pool is 70% full and that's just fine.
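For reference, cranking compression on the backup side is just a dataset property; something like this, with the pool/dataset names as placeholders (run on the backup host):

```shell
# Hypothetical pool/dataset names. The property value is "gzip-9";
# it only affects blocks written after the change.
zfs set compression=gzip-9 backup/replica
# Check what the dataset is actually achieving:
zfs get compression,compressratio backup/replica
```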

Cheers,
Matt
 

tfast500

Explorer
Joined
Feb 9, 2015
Messages
77
Go with RaidZ2. Don't worry about giving both source and target pools the exact same amount of disk space. Just make sure your entire expected backup set can fit within the target pool.

For best performance, you really don't want your pools more than 50% full. I aim to keep my production machines even lower. Spindles of storage are relatively cheap, right?

On my backup/replication host, I'm far less concerned with performance; I just need the data safe. On my replication host, I'll happily let my pools get 80% full, run less RAM than recommended and turn compression up to foolish levels (gzip9, for example) if I can cram more data into that 80%.

To that end, my backup host often has less than half the space of the primary and still has plenty of room for all the replicated data.

For example, I have a 12TB lz4 striped mirror pool on my primary host replicated to a 6TB gzip9 RaidZ2 pool on my backup host. The backup pool is 70% full and that's just fine.

Cheers,
Matt


Thanks for this, I had not thought about compression. Honestly I have only ever used the default. Is this safe? What are the caveats to cranking this up?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
The caveats of cranking it up are more CPU time needed to read/write the data.
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
Good advice from others on here; I would also go with RaidZ2 when transferring data.

Just for another example though, I have my backup FreeNAS box set up to use RaidZ1 because the drives I have in it could not hold all the data if configured in RaidZ2. I am OK with this because it is just the backup server, and if it went under, my data would still be safe on the primary. For me, the chance of simultaneously losing my primary and my backup within the same window is extremely low and worth the risk.

But for your case of moving your data in order to set up your primary, you would be exposed to the risk of having your data in only one place, and therefore RaidZ2 would be much safer.

On my backup, replication host, I'm far less concerned with performance, I just need the data safe. On my replication host, I'll happily let my pools get 80% full, run less RAM than recommended and turn compression up to foolish levels (gzip9, for example) if I can cram more data into that 80%.

This is my thinking as well. I'm running minimal RAM in my backup. I never thought, though, to turn on higher levels of compression. I might look into that.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Joined
Feb 2, 2016
Messages
574
I must admit, @Bidule0hm, having me quoted at me was a bit of a thrill.

We did end up turning on gzip9 for a new replicated pool because it saved 27% over lz4, which was meaningful. As with everything else, your mileage may vary.

Another tip I have seen but can't personally vouch for: turn on gzip9 for the initial replication of a large data set. It'll be slower than lz4 but, for an initial load, that isn't critical. Then, once in production, set the pool to lz4 so that incremental data is replicated more quickly. Data that never changes will sit out there compressed as gzip9 until it is updated, then becomes lz4. I've tested this to make sure it works, but I'm not sure it's a good idea. I'm hoping someone else will be the guinea pig.
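Sketched as commands, the two-phase idea looks roughly like this; host, pool, and snapshot names are all placeholders, and in practice the FreeNAS replication GUI would drive the send/receive for you:

```shell
# Phase 1: heavy compression for the one-time bulk load.
# (Run the "zfs set" lines on the backup host.)
zfs set compression=gzip-9 backup/replica
zfs send tank/data@initial | ssh backup-host zfs receive backup/replica/data

# Phase 2: switch the target to lz4 so incrementals land fast.
# Old blocks stay gzip-9 until they are rewritten.
zfs set compression=lz4 backup/replica
zfs send -i @initial tank/data@daily-1 | ssh backup-host zfs receive backup/replica/data
```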

Cheers,
Matt
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It should work... It's an interesting idea for sets of mostly-static data...
 

tfast500

Explorer
Joined
Feb 9, 2015
Messages
77
Received the drives today. Configured RaidZ2 and enabled gzip9 compression. Set up the replication, and it has currently moved over ~480gb of ~5.7tb.

Super easy to set up, I was surprised!

Question: If I lost power, network, or manually stopped the replication would it corrupt and error out or would it resume where it left off?

The only reason I ask is that I am having a heck of a time setting up my new pfSense router, and now I want to reboot it to apply a change. I'll wait till the FreeNAS finishes though. I was just curious what would happen to the data/task.

 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Question: If I lost power, network, or manually stopped the replication would it corrupt and error out or would it resume where it left off?
I've found that if the connection was lost during the initial replication I had issues, but once that completed, future replication tasks could be interrupted and restarted. I think as long as there is a matching snapshot on both boxes you are good. Then if it gets interrupted, it just starts over from the last good snapshot.
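A quick way to verify there is still a common snapshot before kicking the task off again (dataset and host names are hypothetical):

```shell
# Compare the snapshot lists on both ends; an incremental send can
# resume from the newest snapshot name that appears in both lists.
zfs list -t snapshot -o name -s creation tank/data
ssh backup-host zfs list -t snapshot -o name -s creation backup/replica/data
```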
 

tfast500

Explorer
Joined
Feb 9, 2015
Messages
77
I completed the replication; my CPU on the remote server was pegged at about 80% the whole time.

I have only about 1.2 TB of storage left on the remote. I wish I had bought two more 2tb drives to make it an 8-drive pool. (I will have to replicate everything over again when I rebuild the pool.)

For what I need right now it will work. Now I can rebuild my main server pool with 8 x 3tb by adding the two extra 3tb drives.

I assume I have to set it up for the reverse so that my main acts as the pull and my remote is the push?

 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
I completed the replication; my CPU on the remote server was pegged at about 80% the whole time.

So this would suggest that compression happens once the data reaches the remote server? Can someone confirm/deny this?

What are the specs of your replication target machine out of curiosity?
 

tfast500

Explorer
Joined
Feb 9, 2015
Messages
77
So this would suggest that compression happens once the data reaches the remote server? Can someone confirm/deny this?

What are the specs of your replication target machine out of curiosity?

My backup server is now my previous NAS: i5 2500k - 24gb ddr3 No ECC - 6x2tb.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
So this would suggest that compression happens once the data reaches the remote server? Can someone confirm/deny this?

What are the specs of your replication target machine out of curiosity?

Compression would happen when the data is written to the compressed dataset/vdev on the other end of the ssh link.

But you can also use compression on the ssh link itself. That would then involve compressing the data an extra time for transit.

That compression is purely for transmission though, and would be removed before the data was delivered into "zfs receive".

I'm not sure if "zfs send" will send compressed blocks compressed. I'd guess it would.
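Put together, a send over a compressed link looks something like this (host and dataset names are placeholders); ssh -C compresses for the wire only, and the target dataset's own compression property is what "zfs receive" writes with:

```shell
# Transport-level compression with ssh -C: the stream is compressed
# for the trip over the wire only, then handed to "zfs receive",
# which recompresses per the target dataset's "compression" property.
zfs send tank/data@snap | ssh -C backup-host zfs receive backup/replica/data
```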
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm not sure if "zfs send" will send compressed blocks compressed. I'd guess it would.
Not at the moment, but that's a feature that's being worked on.

Hopefully, if the source and destination use the same compression, it'll be possible to just send the compressed blocks and write them straight to disk.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Not at the moment, but that's a feature that's being worked on.

Hopefully, if the source and destination use the same compression, it'll be possible to just send the compressed blocks and write them straight to disk.

Right, which means, currently, assuming a compressed ssh link, you have a decompression on read, then a compression in transit, then a decompression on the other end of the transit, and then a recompression on the receive :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yeah, pretty much.
 