In what pattern is data written to a mirror RAID array?

Status
Not open for further replies.

brianm

Dabbler
Joined
Nov 27, 2017
Messages
25
This may be an academic question but I would be interested to know if I understand the process correctly.

I have four similar 2TB disks in a 2x2 mirror array. If I transfer external data to the pool, it seems that each of the four disks receives data at the same rate, so presumably each disk will receive the same amount of data (rate x time).

If we call the disks 1, 2, 1A and 2A, where 1 and 2 could be called the primary pair, it makes sense that 1A will receive the same amount of data as 1, and 2A the same amount as 2, so 1A and 2A will be exact mirrors of 1 and 2.

To achieve this I imagine the data is sent in constant-size packages, first to disks 1 and 1A, and then the next package goes to disks 2 and 2A - rinse and repeat until there is no more data. If so the write sequence would be 1 then 1A followed by 2 and 2A so the data would be sort of striped between 1 and 2 with a mirror image on 1A and 2A.

This would seem to imply that if any one disk fails then you could rebuild it from its corresponding paired disk. If you lost, say, disks 1A and 2A you could still rebuild the system from disks 1 and 2, but if you lost both 1 and 1A or both 2 and 2A you are dead in the water.
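The failure logic described above can be checked with a toy model. This is only an illustration, with made-up names, and not how ZFS actually allocates blocks (ZFS spreads blocks across vdevs dynamically rather than in a strict fixed-size round robin), but the survivability conclusions come out the same:

```python
# Toy model of a 2x2 "stripe of mirrors": two mirror vdevs, each with two disks.
# Illustration only - ZFS's real allocator distributes blocks across vdevs
# dynamically, not in fixed-size round robin.

mirrors = [("1", "1A"), ("2", "2A")]  # vdev 0 and vdev 1

def write_blocks(n_blocks):
    """Round-robin blocks across vdevs; each vdev copies the block to both disks."""
    disks = {d: [] for pair in mirrors for d in pair}
    for block in range(n_blocks):
        pair = mirrors[block % len(mirrors)]   # alternate between the vdevs
        for disk in pair:                      # identical copy on each side
            disks[disk].append(block)
    return disks

def pool_survives(failed):
    """The pool survives as long as every mirror vdev keeps at least one disk."""
    return all(any(d not in failed for d in pair) for pair in mirrors)

disks = write_blocks(8)
print(disks["1"] == disks["1A"])    # True: 1A is an exact mirror of 1
print(pool_survives({"1A", "2A"}))  # True: one disk survives in each vdev
print(pool_survives({"1", "1A"}))   # False: vdev 0 lost both sides
```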

Is that the way things work?

Thanks.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Correct.

If two-way mirrors scare you, you could go with three-way mirrors. The overhead is high, but some of our members do it to minimize the risk of two drives failing at the same time.

This would seem to imply that if any one disk fails then you could rebuild it from its corresponding paired disk. If you lost, say, disks 1A and 2A you could still rebuild the system from disks 1 and 2, but if you lost both 1 and 1A or both 2 and 2A you are dead in the water.
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
If so the write sequence would be 1 then 1A followed by 2 and 2A so the data would be sort of striped between 1 and 2 with a mirror image on 1A and 2A.

I think you have it right, but just in case: data for 1 and 1A is written at the same time, and data for 2 and 2A is written at the same time. When reading, data can be read from all 4 disks, so there is no read penalty with mirrored vdevs.
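The read side can be sketched the same way as the write side: each block lives on one mirror vdev, and either member disk can serve the read, so all four spindles take part. A toy sketch with made-up names, not ZFS's actual read scheduler:

```python
# Toy sketch: a read of a block can be serviced by either copy in its
# mirror vdev, so reads spread across all four disks.

mirrors = [("1", "1A"), ("2", "2A")]

def read_block(block):
    pair = mirrors[block % len(mirrors)]       # vdev holding this block
    return pair[(block // len(mirrors)) % 2]   # either copy can serve the read

served = [read_block(b) for b in range(8)]
print(sorted(set(served)))                     # all four disks take part
```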
 

brianm

Dabbler
Joined
Nov 27, 2017
Messages
25
Thanks guys,
There is a risk in everything we do, but I like to understand what I am doing before taking one.
I am new to RAID, so I thought this through, and my idea was the only way I could imagine the process taking place.
This, of course, brings up another thought. If I make a full backup of a mirrored system like this to a single drive, can I restore from that drive? Does a single-disk backup record the dataset layout or only the data? So how much hassle would restoring from a simple backup be?
If a simple backup does not reload easily I might just look at a 2x3 mirror layout as gpsguy suggested.
Time to start looking at replication.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Every sane backup method backs up data and metadata, not RAID layout.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
A simple 'backup' just includes the data that lives on your system and has nothing to do with the pool or the dataset layout. A copy of the data to a single drive, or a sync to the cloud, is a good example. To restore, you would first make sure that your restore destination is online and healthy, and then restore the data onto the new/fixed/... pool, again regardless of the layout.

ZFS replication (send/receive) allows you to transfer a dataset (well, technically a snapshot of that dataset) to another ZFS pool. This second pool can have a completely different layout; it can even live on another system. Restoring will be (imho) easier, as the dataset will be restored exactly as it was at the moment the snapshot was taken.
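To make that concrete, here is a hedged sketch of the commands; the pool and dataset names (tank/data, backuppool, otherhost) are made up, and the incremental form assumes both snapshots already exist:

```shell
# Hypothetical names throughout: tank/data is the source dataset,
# backuppool is a second pool (possibly with a completely different layout).

# Take a snapshot and replicate it to the other pool:
zfs snapshot tank/data@backup1
zfs send tank/data@backup1 | zfs receive backuppool/data

# Later, send only the changes between two snapshots (incremental):
zfs snapshot tank/data@backup2
zfs send -i tank/data@backup1 tank/data@backup2 | zfs receive backuppool/data

# The receiving pool can even live on another machine:
zfs send tank/data@backup2 | ssh otherhost zfs receive backuppool/data
```

These need a live ZFS system and sufficient privileges to actually run; they are command fragments, not a script.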
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
ZFS replication (send/receive) allows you to transfer a dataset (well technically a snapshot of that dataset) to another ZFS pool.
...and if you want to get strange, it can even live on a non-ZFS filesystem for an indefinite time in between: zfs send pool/dataset@snapshot > file.
 
Joined
Jan 18, 2017
Messages
525
...and if you want to get strange, it can even live on a non-ZFS filesystem for an indefinite time in between: zfs send pool/dataset@snapshot > file.

Doesn't that mean that, even if you accidentally destroyed the dataset, you could restore it as long as you had this snapshot file?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Doesn't that mean that, even if you accidentally destroyed the dataset, you could restore it as long as you had this snapshot file?
AFAIK, you should be able to.
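As a concrete (hypothetical) sequence, assuming the stream was saved to a plain file earlier with zfs send, the restore is just zfs receive reading that file back:

```shell
# Hypothetical names: tank/data and /backup/data.zfs stand in for your own.
zfs send tank/data@snap > /backup/data.zfs   # earlier: snapshot saved as a plain file
zfs destroy -r tank/data                     # the accident
zfs receive tank/data < /backup/data.zfs     # recreate the dataset from the stream
```

Note the stream file is all-or-nothing: it must be intact and complete, or zfs receive will refuse it.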
 