What is the best RAIDZ configuration, and how do I set it up?

Status
Not open for further replies.

jsmith8954

Cadet
Joined
Jun 9, 2011
Messages
8
OK, so I set up a test box with a simple share on one HD and got everything working.
Now I'm about to build a production NAS to put on my network. I've already got all the parts, including 4 WD 1TB Blue drives. I intend to use the built-in software RAID-Z.
I'm curious how others did similar setups, and what exactly do I need to set up?
I can get the volume created, but what is the best configuration? I know RAID-Z has an option for a "spare" drive, etc.

I guess what I'm asking is: what do I need to configure to ensure that if I lose a disk (I'm sure I will at some point) I will not lose data, and that I can replace the drive and allow FreeNAS to "rebuild" the data?

Thanks in advance!
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
I guess what I'm asking is: what do I need to configure to ensure that if I lose a disk (I'm sure I will at some point) I will not lose data,

I'd suggest a regular backup solution. :)

Remember, a NAS (even with RAID) is not a backup. If you don't want to lose data then make sure you are backing up your NAS to another server/site on a regular basis.

In terms of the best config for your NAS: if data loss is your highest concern and you have no backup, then I would suggest RAIDZ2 across the 4 disks. This way you can survive up to two disk failures, but you will lose half your storage capacity (so 2TB usable). Is 2TB enough for you, or do you need more? You can expand capacity at a later date by swapping out each of the 1TB disks one at a time for 2TB or even 3TB disks - once all four of the new disks are online the vdev should expand automatically, doubling or tripling your usable storage.

RAIDZ1 on 3 disks with a hot spare will also result in only half your storage being usable (or alternatively, unusable!)

RAIDZ1 on all 4 disks will give you 3TB usable storage.

Two vdevs, each a two-disk mirror (RAID1), combined into a single zpool will give you double the IOPS (i.e. performance) of a single larger vdev/zpool, and you can still survive two disk failures as long as they're not in the same vdev (2TB usable in this configuration).

You need to decide what's most important - data redundancy/availability, read/write performance, or available storage - and then choose the strategy that works best for you.
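
For reference, FreeNAS 8 builds all of these layouts for you through the web GUI, but if it helps to see what's going on underneath, they correspond roughly to the following zpool commands. The pool name "tank" and the ada0-ada3 device names are just placeholders here, so substitute your own:

zpool create tank raidz2 ada0 ada1 ada2 ada3   # 4-disk RAIDZ2, ~2TB usable, survives any 2 failures
zpool create tank raidz1 ada0 ada1 ada2 ada3   # 4-disk RAIDZ1, ~3TB usable, survives 1 failure
zpool create tank raidz1 ada0 ada1 ada2 spare ada3   # 3-disk RAIDZ1 plus a hot spare, ~2TB usable
zpool create tank mirror ada0 ada1 mirror ada2 ada3   # two mirror vdevs (striped), ~2TB usable, best IOPS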
 

jsmith8954

Cadet
Joined
Jun 9, 2011
Messages
8
Hey Milhouse, thanks for the reply!

I'd suggest a regular backup solution. :)

Remember, a NAS (even with RAID) is not a backup. If you don't want to lose data then make sure you are backing up your NAS to another server/site on a regular basis.

I intend to have an automatic backup made every night to an external HD. I've learned over the years never to be dependent on one drive/system.

Knowing that, I intend to go with RAIDZ1. That will give me 3TB of usable space. Is there anything else I need to configure, such as ZFS replication tasks or snapshots?

Another question: if a drive ever fails, what would I need to do after replacing it to rebuild the data?

I've just never done much RAID configuration and am only starting to learn. I've been serving shares from a Windows Server 2008 box, and now I plan to upgrade to 2008 R2 in a virtual environment with VMware ESX, so I figured now would be a good time to convert all the data to the NAS and start playing with iSCSI as well.

thanks!
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Knowing that, I intend to go with RAIDZ1. That will give me 3TB of usable space. Is there anything else I need to configure, such as ZFS replication tasks or snapshots?

Not really an expert on ZFS replication or snapshots - I back up my ZFS-based NAS using good ol' rsync. :) I would suggest finding some good reading material on ZFS replication and snapshots (starting with the FreeNAS 8 FAQ).

Replication is a better alternative to rsync in a fully ZFS environment (which mine isn't, which is why I've not bothered looking at replication), and snapshots, if I understand them correctly, may only be an advantage if the data on your NAS is changing fairly often/rapidly. For a standard storage/backup solution they're unlikely to be of much use IMHO.
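
To give you a rough idea of both approaches (the paths and dataset names below are made up, so adjust them to your own setup), a nightly rsync to an external disk and the ZFS-native snapshot/send equivalent look something like this:

rsync -av /mnt/tank/data/ /mnt/backup/data/   # plain file-level copy to the external disk
zfs snapshot tank/data@nightly   # take a point-in-time snapshot of the dataset
zfs send tank/data@nightly | ssh otherbox zfs receive backup/data   # replicate the snapshot to another ZFS box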

Another question: if a drive ever fails, what would I need to do after replacing it to rebuild the data?

Typically, remove the failed drive, replace it with the new drive and then let ZFS restore redundancy by synchronising the data ("resilvering" in ZFS-speak) across to the new disk.

Oh, and if you only have RAIDZ1, cross your fingers that a second drive doesn't fail during the resilver process! :)
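
If you ever end up doing it from the command line rather than the FreeNAS GUI, the replacement boils down to something like the following - again, the pool and device names are only examples:

zpool status tank   # identify which disk is FAULTED or UNAVAIL
zpool offline tank ada2   # take the dead disk offline, if ZFS hasn't already
# ...physically swap the drive, then:
zpool replace tank ada2   # start resilvering onto the new disk in the same slot
zpool status tank   # watch the resilver progress until it completes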
 

jsmith8954

Cadet
Joined
Jun 9, 2011
Messages
8
Thanks Milhouse, it's good to have a better understanding of the software before I put it into production :)

I'm putting the project on hold for a few more days and ordering a new case that can hold 8 drives for future expansion and has better airflow.

In the meantime, I'm gonna dig around and read as much as possible on RAID-Z and ZFS.
 

limerick

Dabbler
Joined
Jun 1, 2011
Messages
11
This whole ZFS thing is 'Greek' to me.. but if I understand the concepts... I have a 4U server case with 6 'hot swap' bays in front and am going to use 6 2TB drives, so I could have 3.5TB per pair under (striped equivalent) with 2 'pairs' and the 5th and 6th drives as the 'parity' drives, so...

3.5TB pairs in a pool for 7TB total, protected at RAIDZ2 by 1 pair of 'parity' (for lack of a better term) drives, 2x2TB. Does that sound like the idea, in at least layman's terms?
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
This whole ZFS thing is 'Greek' to me.. but if I understand the concepts... I have a 4U server case with 6 'hot swap' bays in front and am going to use 6 2TB drives, so I could have 3.5TB per pair under (striped equivalent) with 2 'pairs' and the 5th and 6th drives as the 'parity' drives, so...

3.5TB pairs in a pool for 7TB total, protected at RAIDZ2 by 1 pair of 'parity' (for lack of a better term) drives, 2x2TB. Does that sound like the idea, in at least layman's terms?

You can create two vdevs using RAIDZ1, each with 3 disks; this will use 1 parity disk per vdev, leaving each vdev with 4TB usable storage. Then combine the two vdevs into a single storage pool, giving you 8TB of storage.

Or, create a single vdev using RAIDZ2 with the 6 disks; this will use 2 parity disks and give you the same 8TB of storage. The advantage of two vdevs is that it will double your IOPS compared with a single vdev, but you do run the slightly increased risk of losing all your storage if two disks die in the same 3-disk vdev.
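
If it helps make the difference concrete, the two layouts would be created along these lines ("tank" and the device names are just placeholders, and the FreeNAS GUI does this for you):

zpool create tank raidz1 ada0 ada1 ada2 raidz1 ada3 ada4 ada5   # two 3-disk RAIDZ1 vdevs in one pool, ~8TB usable
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5   # one 6-disk RAIDZ2 vdev, ~8TB usable, any 2 disks can fail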
 

limerick

Dabbler
Joined
Jun 1, 2011
Messages
11
Well, I'm assuming that the 2TB per drive is actually more like 1.5TB usable when all the overhead, 'striping' equivalent etc. is done.. nonetheless,

I do like the idea of 2 parity drives.. suppose that two parity drives fail simultaneously when the other 4 data drives are one 'vdev'.. still 'rebuildable' (resilvered?) on the fly?

I'm leaning more towards RAIDZ2.. :) Not so much concerned about IOPS... it's a gigabit network, and some like to be 'tuners' I'm sure.. I'm shooting for 100 megabytes per second plus, literally.. I'd be happy with 50 megabytes per second plus.. if I get more, I'd consider it 'gravy' :) :)

Also, can new disks be added simply by going through the add process, and will they be assimilated without problems and recognized with the increased overall capacity?

My FreeNAS build specs thus far: ASUS mobo, 3.0GHz AMD Athlon II quad-core CPU, 4GB RAM, 2x2TB WD Green "A/V" data drives <- so far... 40GB OS drive, soon to be swapped for a CF/IDE adapter as the OS drive, with more 2TB drives to be added as they can be afforded... still in the experimental stage.. but essentially built out.
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Also, can new disks be added simply by going through the add process, and will they be assimilated without problems and recognized with the increased overall capacity?

You can't add more disks to an existing vdev, but you can add another vdev to an existing zpool (volume). However, if you create a vdev today with 4 disks and then later on buy another 2 disks, your only option would be to create a second vdev that is mirrored. Alternatively, destroy your existing volume, rebuild it using all 6 disks, and restore the data from backup.

The other way to expand storage capacity is to replace disks one at a time with disks of a larger capacity - when you replace the final disk the vdev should expand to use all the available storage capacity.
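
As a rough command-line sketch of both routes (pool and device names are examples only, and FreeNAS normally handles this through the GUI):

zpool add tank mirror ada4 ada5   # add a new two-disk mirror vdev to the existing pool
zpool replace tank ada0 ada6   # or: swap one disk at a time for a larger one and wait for the resilver

Depending on the ZFS version, you may also need to enable the pool's autoexpand property (or export and re-import the pool) before the extra capacity from the larger disks shows up.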
 