Storage design for disk-based backup system

Status
Not open for further replies.

Greg T

Cadet
Joined
Mar 25, 2017
Messages
2
Hi all, we've been running a FreeNAS 20TB test environment for the last few months and it all works great - now we're looking to expand and could use some advice. We're using FreeNAS as the destination for disk-based backups of production VMs and wanted to see if anyone could provide input on the vdev storage design. I've done a lot of research, but some of the advice dates back to 2010, so I'm not sure it's still relevant, which leaves me with questions.

Usage: We back up hundreds of servers concurrently from remote sites, using an appliance at each site to perform the local backup and then send the backups to our primary datacenter. The FreeNAS will be hosted at our primary datacenter, storing all the backups. The backup software also performs merge jobs for each of the recovery points as each backup passes its retention policy.

We're considering three RAIDZ2 vdevs of 10x10TB HGST 4Kn drives - or can I safely make each vdev larger? I want to maximize capacity, but I also don't want restores to be slow when we need to concurrently restore dozens of servers for a recovery test (or an actual disaster).

We will also have an exact duplicate of this FreeNAS box at a secondary datacenter, which we'll be replicating all the backups to via snapshots. I never want to be in the position of needing to replicate the entire secondary FreeNAS back to the primary, but a full loss of the primary FreeNAS wouldn't be the end of the world - a complete loss of both would be. Thanks for your help!
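For a rough sense of the capacity side of this question, here is a back-of-the-envelope sketch of the two layouts that come up in the thread (the 10 TB drive size is from the post above; ignoring ZFS overhead is a simplification):

```python
# Back-of-the-envelope usable capacity for the layouts under discussion.
# Assumes 10 TB drives and ignores ZFS metadata, padding, and the
# TB-vs-TiB difference, so treat the results as rough upper bounds.

DRIVE_TB = 10
PARITY = 2  # RAIDZ2

def usable_tb(vdevs, width):
    """Raw usable TB for `vdevs` RAIDZ2 vdevs, each `width` drives wide."""
    return vdevs * (width - PARITY) * DRIVE_TB

for vdevs, width in [(3, 10), (4, 8)]:
    print(f"{vdevs} x {width}-wide RAIDZ2: {vdevs * width} drives, "
          f"~{usable_tb(vdevs, width)} TB before overhead")
```

Both layouts land at roughly the same raw usable capacity (~240 TB); the trade-off is in total drive count, IOPS, and resilver exposure.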
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
three RAIDZ2 vdevs of 10x10TB HGST 4Kn drives - or can I safely make each vdev larger?
3 x 10 @ RAIDZ2 seems like a pretty good compromise to me, but I have no experience with large arrays. ZFS doesn't impose a technical limit on the number of disks in a vdev, but lots of people say "don't go wider than 12" (for no obvious reason). The point is, the wider the vdev, the longer the resilver time when replacing a failed disk. As resilver times grow, so do the chances of losing another disk during resilver, which is why RAIDZ2 and RAIDZ3 exist.

In other words, "safely" is something you have to define and implement according to your own requirements.
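Robert's point about resilver exposure can be made concrete with a toy model: assume each drive has a fixed annualized failure rate (AFR) and fails independently (both simplifications - real failures cluster), and ask how likely it is that another drive in the vdev dies while a resilver runs. The 5% AFR below is an assumption, not a measured figure.

```python
# Toy model: chance that at least one more disk in the vdev fails while a
# resilver is running. Assumes independent failures and a constant
# annualized failure rate (AFR) - both simplifications.

AFR = 0.05            # assumed 5% annualized failure rate per drive
HOURS_PER_YEAR = 8760

def second_failure_risk(width, resilver_hours):
    p_one = AFR * resilver_hours / HOURS_PER_YEAR  # per-survivor probability
    return 1 - (1 - p_one) ** (width - 1)          # any of width-1 survivors

for width in (8, 10, 12):
    for hours in (24, 72):
        print(f"{width}-wide vdev, {hours} h resilver: "
              f"{second_failure_risk(width, hours):.2%} risk of a 2nd failure")
```

The risk grows with both vdev width and resilver duration, which is the argument for double or triple parity on wide vdevs.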
 

Mr_N

Patron
Joined
Aug 31, 2013
Messages
289
I'd prob go with 4 vdevs of 8 HDDs each; larger vdevs will have significant performance impacts from what I've read in the past, with the widest recommendation being 11 drives, but that's using Z3...
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
I would advise 8-wide RAIDZ2.
You will also gain a bit more IOPS than with fewer, bigger vdevs.
And don't forget: don't fill the pool above 70%, max 80%.
Otherwise you will have performance issues with many concurrent connections writing to your array.


Sent from my iPhone using Tapatalk
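A minimal way to keep an eye on that fill threshold, sketched in Python around `zpool list` (the pool name `tank` and the 80% cutoff are assumptions; adjust to taste):

```python
# Minimal cron-able check for the 70-80% fill guideline above.
# zpool list flags: -H no headers, -p exact byte values, -o select columns.

import subprocess

POOL = "tank"      # hypothetical pool name
THRESHOLD = 0.80   # warn above 80% full

out = subprocess.check_output(
    ["zpool", "list", "-Hp", "-o", "size,alloc", POOL], text=True)
size, alloc = (int(x) for x in out.split())
print(f"{POOL}: {alloc / size:.1%} full")
if alloc / size > THRESHOLD:
    print(f"WARNING: {POOL} is above {THRESHOLD:.0%}; "
          "expect slower writes as free space fragments")
```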
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
If you are using 4Kn or 512e disks, I would say 10 would be perfect for RAIDZ2 in terms of space efficiency (8 data disks is a power of two, so 128K records stripe evenly), unless your data is going to be compressible, in which case you can decide on some other basis.
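The space-efficiency point can be checked with the commonly cited RAIDZ allocation math: a block's data sectors are striped across the data disks, each stripe row gets p parity sectors, and the total allocation is padded up to a multiple of p+1. A sketch, assuming ashift=12 (4Kn) and the default 128K recordsize with no compression:

```python
# Rough RAIDZ2 space efficiency per 128K record on 4K-sector disks.
# Each stripe row carries PARITY parity sectors; the allocation is then
# padded to a multiple of PARITY + 1. Compression changes the effective
# block sizes, which is why compressible data weakens this argument.

import math

SECTOR = 4096          # ashift=12 (4Kn)
RECORD = 128 * 1024    # default recordsize
PARITY = 2             # RAIDZ2

def efficiency(width):
    data = RECORD // SECTOR                       # 32 data sectors per record
    rows = math.ceil(data / (width - PARITY))     # stripe rows needed
    alloc = data + rows * PARITY                  # data + parity sectors
    alloc = math.ceil(alloc / (PARITY + 1)) * (PARITY + 1)  # padding
    return data / alloc

for width in (8, 10):
    print(f"{width}-wide RAIDZ2: {efficiency(width):.1%} of raw space is data")
```

Under those assumptions, the 10-wide layout stores roughly 76% of raw capacity as data versus about 71% for 8-wide.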
 

Greg T

Cadet
Joined
Mar 25, 2017
Messages
2
Thanks for all the input! We're making one major change to the design - we're going to continue using NetApp for backup storage at the primary site and have the backup software replicate the backup sets to the FreeNAS at the remote site (rather than FreeNAS-to-FreeNAS snapshots). The theory is that we'll have everything stored on two completely different storage types, so we'll be able to push the limits of the remote FreeNAS a bit more comfortably.

My main concern is the rebuild time of a vdev down the road when drives start to fail. We'll probably go with four 8x10TB RAIDZ2 vdevs, and once we have the dataset 50%+ full, we'll pull a drive and see how long the rebuild takes. If the rebuild takes longer than 24 hours, we'll wipe it all out and make the vdevs smaller. I'll post our results in a couple of months when we complete the project. Thanks again!
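One possible way to run that rebuild timing test, as a sketch (the pool name and poll interval are assumptions; `zpool status` itself reports elapsed time in its final `scan:` line when the resilver finishes):

```python
# Poll `zpool status` until the resilver completes, logging the scan line.

import subprocess
import time

POOL = "tank"   # hypothetical pool name

start = time.time()
while True:
    status = subprocess.check_output(["zpool", "status", POOL], text=True)
    scan = next((l.strip() for l in status.splitlines() if "scan:" in l), "")
    print(f"[{(time.time() - start) / 3600:.1f} h] {scan}")
    if "resilver in progress" not in status:
        break   # the final scan line reports the total resilver time
    time.sleep(600)  # poll every 10 minutes
```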
 