FreeNAS 9.1.1 + zfs + iscsi storage allocations and replications

Status
Not open for further replies.

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Hi, Guruz,
I've created a ZFS mirrored storage of two 3TB HDDs. How much space, at most, should I reserve for the zvol? Keeping in mind that I need some of it for snapshots. Or am I wrong? Moreover, if I use this zvol as an iSCSI extent (read: block device), is there any sense in creating snapshots at all? Either way, I still need to frequently replicate that zvol to another (hardware-identical) 'backup' FreeNAS server.
Here's a screenshot of my current config.
[Screenshot: current storage configuration]
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
How much space you should reserve really depends on your data and how often it changes. Snapshots do make sense with zvols: they enable incremental replication (only the blocks that changed between snapshots are transferred).
With the capacities shown above, if you did not create the zvol as a sparse volume, you will not be able to create a snapshot as soon as you have more than 628GB stored in the zvol. At creation a snapshot consumes almost no space; however, you can only create one if the space available outside the zvol can accommodate all data currently referenced by the zvol. With a sparse volume you will be able to create such a snapshot, but you risk the zvol suddenly running out of space when the snapshots start consuming disk capacity as you change the zvol's contents.
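As a sketch of the difference from the command line (the pool and zvol names here are made up, and the FreeNAS GUI normally drives this for you):

```shell
# Non-sparse zvol: ZFS reserves the full size up front, so snapshots
# must fit in whatever pool space remains outside that reservation.
zfs create -V 1T tank/iscsi-vol

# Sparse zvol ("-s" skips the refreservation): snapshots can always be
# created, but the zvol risks running out of space later as they grow.
zfs create -s -V 1T tank/iscsi-vol-sparse
```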
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Dusan, thanks for the reply.
So, as far as I understand, with 2TB of HDD in RAID1 I can only get a 1TB zvol at most for reliable replication to work? (A sparse volume would only help for a while, because free space on the zvol tends toward 0?)
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Even that won't help. You need two snapshots to do incremental replication (to send the delta between those two snapshots). When you take the first snapshot it initially consumes almost no space. However, as blocks start to change, the snapshot's size grows as it "stores" the old versions of the blocks. You can now get into a state where ZFS won't allow you to take another snapshot. For example: the zvol holds 800GB of data and 500GB changed since the last snapshot, which means you now have only 2000-800-500=700GB of free space in the pool. A snapshot is not possible now, as you need at least 800GB of free space to take one.
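The arithmetic of that example, as a quick shell sketch (all numbers in GB, taken straight from the scenario above):

```shell
# Space bookkeeping from the example: 2TB pool, 800GB referenced by the
# zvol, 500GB of old blocks pinned by the existing snapshot.
POOL=2000
REFERENCED=800
SNAP_USED=500

FREE=$((POOL - REFERENCED - SNAP_USED))
echo "free: ${FREE}GB"

# A new snapshot must be able to preserve everything the zvol currently
# references, so ZFS refuses it when free space is below that amount.
if [ "$FREE" -ge "$REFERENCED" ]; then
    echo "snapshot possible"
else
    echo "snapshot refused"
fi
```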
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Hilarious! So FreeNAS isn't really as mature a system as declared everywhere? :-(
I don't understand why we can't just sync several zvols (on several servers) block by block, without generating snapshots? Hm...
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
I don't understand how much more mature it should be. If you have ideas, please explain.
You have two options for replication:
  1. Full replication -- it will send all blocks containing data, but it will not send the "unused" part of the pool. However, as replication takes some time (disk transfer speed, network transfer speed), you must use a snapshot if you want the replication to be consistent. If the system replicated an active (changing) zvol without a snapshot, you would get a mess on the other side: it could be transferring blocks belonging to a changing file, and you would end up with a frankenfile randomly assembled from old and new blocks.
    You only need one snapshot for full replication. You can even take a snapshot, replicate, and immediately destroy the snapshot, but the snapshot is needed so that you transfer a consistent state of the pool (the state it was in when you took the snapshot).
  2. Incremental replication -- it will send only the blocks that changed, but you need at least two snapshots so that it can compute the delta.
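Both options map onto plain `zfs send`/`zfs recv`, roughly like this (dataset names are hypothetical; the FreeNAS GUI drives these commands for you):

```shell
# Option 1: full replication from a single snapshot.
zfs snapshot tank/zvol@full
zfs send tank/zvol@full | zfs recv -F backup/zvol
zfs destroy tank/zvol@full      # optional, once the send has finished

# Option 2: incremental replication between two snapshots.
zfs snapshot tank/zvol@mon
# ...time passes, blocks change...
zfs snapshot tank/zvol@tue
zfs send -i tank/zvol@mon tank/zvol@tue | zfs recv backup/zvol
```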
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Dusan, sorry for bothering you. Could you advise me, please?
I've installed four 3TB HDDs in the FreeNAS host and created a mirror of two disks with a 500G zvol in it. How can I periodically replicate it to another mirror (e.g. if I create one on the last two HDDs)?
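One possible manual sketch for two local mirrors (the pool names `pool1` and `pool2` are made up; the Replication Tasks feature in the FreeNAS GUI automates the same flow on a schedule):

```shell
# Initial full copy of the zvol to the second mirror.
zfs snapshot pool1/zvol500@rep-1
zfs send pool1/zvol500@rep-1 | zfs recv -F pool2/zvol500

# Periodic incremental updates: take a new snapshot, send only the delta.
zfs snapshot pool1/zvol500@rep-2
zfs send -i pool1/zvol500@rep-1 pool1/zvol500@rep-2 | zfs recv pool2/zvol500
```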
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Hm. "Broken pipe warning: cannot send..." Did you try this method, or did you just suggest using it? :smile:
[Screenshot: replication error message]
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Hm. "Broken pipe warning: cannot send..." Did you try this method, or did you just suggest using it? :)
I use it daily. I replicate my important datasets to a local pool (on a drive in a removable tray).
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
On my FreeNAS 9.2.0 it doesn't work :-(
Did you fully follow the manual? You need to do both the "Configure PULL" and "Configure PUSH" parts on the same box.
In particular, did you copy the replication public key into the replication user's (root) settings?
There's also a troubleshooting section at the end of the wiki page.
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Thanks, Dusan.
According to the manual you mention:
This command should not ask for a password. If it asks for a password, SSH authentication is not working. Go to Storage → Replication Tasks → View Replication Tasks and click the "View Public Key" button. Make sure that it matches one of the values in ~/.ssh/authorized_keys on PULL, where ~ represents the home directory of the replication user.

Still nothing. Still my pipe is broken :-(

BTW, should I copy the key without the "ssh-rsa .... Key for replication" header and footer text?
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Finally, we did it. After numerous attempts it seems to be working.
Dare I ask you one more question, Dusan? ;-)
As far as I understand, I should finally have an exact copy of my initial array at the secondary location. But how can I tell that the copy is fully synchronized with the source?
In your case, how do you catch the moment when the "drive in a removable tray" is ready to be safely detached from the host? :smile:
Moreover, if you somehow delete a few snapshots on the source (to free up some space, for example), will it break the replication? Or will the synchronization start over?
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
As far as I understand, I should finally have an exact copy of my initial array at the secondary location. But how can I tell that the copy is fully synchronized with the source?
In your case, how do you catch the moment when the "drive in a removable tray" is ready to be safely detached from the host? :)
There is no indication in the GUI of the status of the replication. The easiest way to check whether a replication is still in progress is to look for the /var/run/autorepl.pid file. If it exists, the replication is running and the file contains the PID of the replication process. The file disappears when the replication finishes.
You can also check for the existence of the latest snapshot on the destination file system (this is visible in the GUI).
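That check can be sketched as a small shell snippet (the path is the one given above):

```shell
# Report whether the FreeNAS autoreplication process is currently
# running by looking for its PID file.
PIDFILE=/var/run/autorepl.pid
if [ -f "$PIDFILE" ]; then
    echo "replication running, PID $(cat "$PIDFILE")"
else
    echo "no replication in progress"
fi
```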
Moreover, if you somehow delete a few snapshots on the source (to free up some space, for example), will it break the replication? Or will the synchronization start over?
If there is at least one common snapshot between the source and the destination, incremental replication should work. If there is no common snapshot, you may need to tick the "Initialize remote side" checkbox (in the replication task view) to perform a full replication.
There will be a new ZFS feature in 9.2.1 (ZFS bookmarks) that improves this: you will be able to bookmark a snapshot and then use the bookmark as the reference for incremental replication even if the original snapshot no longer exists. However, I'm not sure the FreeNAS autoreplication functionality will be modified to take advantage of this (in time for 9.2.1).
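The bookmark workflow, as a command-line sketch (dataset names are hypothetical, and this assumes a ZFS version with the bookmarks feature):

```shell
# Bookmark the snapshot; the snapshot itself can then be destroyed
# to free the space its old blocks were pinning.
zfs bookmark tank/zvol@rep-1 'tank/zvol#rep-1'
zfs destroy tank/zvol@rep-1

# The bookmark can still serve as the "from" side of an incremental send.
zfs snapshot tank/zvol@rep-2
zfs send -i 'tank/zvol#rep-1' tank/zvol@rep-2 | zfs recv pool2/zvol
```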
 