RAID and RAIDZ Qs


acem77

Cadet
Joined
Jan 30, 2014
Messages
5
I am looking into a full backup solution.

I would need at least 2 storage solutions.

What I wanted was one main, higher-end FreeNAS server with RAID 10 across 4x 2-3TB drives,
and a second, lower-end FreeNAS server with RAID 5 and enough space to back up the first.

ZFS is a new concept to me. From what I have read, it is the preferred standard.

I had some questions about ZFS RAIDZ vs. a standard RAID 10.
In the non-ZFS RAID world I have read a lot of bad things about the issues RAID 5 has with increasing HDD sizes. Do the same issues affect ZFS?

This post has a few links about those risks.
http://forums.storagereview.com/index.php/topic/34094-is-raid-56-dead-due-to-large-drive-capacities/

The main things I was thinking about were which type of RAID to pick, and UFS vs. ZFS.
The guide says a mirror is favored over RAIDZ. Does that mean mirroring is a separate option from RAIDZ in ZFS?
And is there a RAIDZ equivalent of RAID 10?


"from freenas guide 9.2.0
When determining the type of RAIDZ to use, consider whether your goal is to maximum disk space or
maximum performance:
• RAIDZ1 maximizes disk space and generally performs well when data is written and read in
large chunks (128K or more).
• RAIDZ2 offers better data availability and significantly better mean time to data loss (MTTDL)
than RAIDZ1.
• A mirror consumes more disk space but generally performs better with small random reads.
For better performance, a mirror is strongly favored over any RAIDZ, particularly for large,
uncacheable, random read loads."
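
If I am reading this right, those layouts would be created from the command line roughly like this (the pool and disk names are just placeholders I made up; FreeNAS normally does this through the Volume Manager):

# illustrative only - four example disks (ada0-ada3), pool name "tank"
zpool create tank raidz1 ada0 ada1 ada2 ada3          # single parity, most usable space
zpool create tank raidz2 ada0 ada1 ada2 ada3          # double parity, better MTTDL
zpool create tank mirror ada0 ada1 mirror ada2 ada3   # mirrored pairs, best small random reads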


Thanks
 

framewrangler

Dabbler
Joined
Nov 18, 2013
Messages
41
Coming to you as a new convert to ZFS.....

The rebuild-time issue that affects RAID 5/6 is comparable in ZFS. If you lose more disks than your array/pool can tolerate, your storage fails; it's that simple. That said, there are other factors that differentiate ZFS from hardware RAID. If you keep reading those third-party articles you'll come across all of that.

After a decade of using hardware-based RAID, what really appealed to me about ZFS was the direct disk access that ZFS craves and benefits from. I have replaced so many disks in hardware arrays that had been suspiciously ejected from the array despite having nearly zero defects or issues. It really appeals to me that ZFS has a more "active" and "dynamic" approach to working around suboptimal disks. Again, don't quote me on this; look for it in the docs and reviews elsewhere.

One question I can't answer off the top of my head is whether there is a difference in the amount of data read/written when rebuilding/resilvering. For example, does a hardware RAID solution read/write entire disks during a rebuild regardless of how much data is actually stored, while ZFS only reads/writes used blocks? I don't know, but it would greatly affect the initial concern about resilvering time and the potential for second and third disk failures during the recovery period.
 

acem77

Cadet
Joined
Jan 30, 2014
Messages
5
Does FreeNAS support striped mirrored vdevs (ZFS RAID 10)? This should help with recovery time and the chance of losing a second drive during a rebuild.

http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/

Striped Mirrored Vdev’s (RAID10)
This is very similar to RAID10. You create a bunch of mirrored pairs, and then stripe data across those mirrors. Again, you get the added bonus of checksumming to prevent silent data corruption. This is the best performing RAID level for small random reads.
 

framewrangler

Dabbler
Joined
Nov 18, 2013
Messages
41
Yes, FreeNAS/ZFS supports a RAID 10 equivalent: a stripe of mirrors.
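
Done by hand at the command line it would look something like this (disk names are just examples; in FreeNAS you would normally set it up through the Volume Manager):

# illustrative only - two mirrored pairs striped together
zpool create tank mirror ada0 ada1 mirror ada2 ada3
# a third mirrored pair can be added later to grow the pool
zpool add tank mirror ada4 ada5

Reads get spread across the mirrors, and a resilver only has to copy from the one surviving disk in the affected pair, which is part of why recovery is quicker than with parity RAID.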
 

framewrangler

Dabbler
Joined
Nov 18, 2013
Messages
41
To answer my own question above about whether ZFS reads/writes only the used blocks when resilvering, while a hardware rebuild touches the entire disk:

I think this pretty well clears that up. :)
http://docs.oracle.com/cd/E19253-01/819-5461/gbcus/index.html
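
The short version, as I read it, is that ZFS resilvers only the live data by walking the pool's metadata, rather than copying every sector the way a traditional rebuild does. If you wanted to watch it happen, replacing a disk and checking progress would look something like this (pool and device names are hypothetical):

# illustrative only - swap a failed disk for a new one, which starts a resilver
zpool replace tank ada2 ada6
# shows resilver progress and an estimate of time remaining
zpool status tank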
 