SOLVED Growing ZFS Volume, or efficient way to move data and recreate Volume


zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Hi all,

Firstly, a little backstory that has led to my question today. I am a network engineer by trade, so some of the concepts of ZFS and storage are new, but being a good nerd I can't help dabbling.

I'm running FreeNAS-9.2.1.9-RELEASE-x64 for my home storage - pretty simple setup: SMB shares & a few jails. I didn't plan my storage requirements very well - I have 3 x 2TB drives in RAIDZ1 - and now it is very close to full. Facepalm for leaving this issue until 1 minute to midnight!

I've been exploring my options for increasing the size of my ZFS pool, sticking to RAIDZ1 (home use, a little redundancy is fine). I'd like to add two more 2TB drives, for a total of 5 x 2TB in RAIDZ1. I figure my two options are:
A) Remove all data, backup settings & data in jails. Trash existing Volume, add drives and create new Volume. Restore data. Setup jails again, restore backups.
B) Expand existing Volume.

Moving the data will be a pain, as I will need to buy a 4TB+ drive for the single purpose of backing up all the data temporarily. Then it will have no use. So, option A is time-consuming and introduces additional expense.

Speaking with a Sys Admin at the office, they seemed to believe it could be possible to expand the existing Volume with additional drives. This doesn't seem possible via the FreeNAS web GUI, but it may be possible from the CLI. I've done some research on expanding a FreeBSD ZFS volume, but mostly I'm finding articles about replacing existing disks with larger ones.

So, is it possible to add additional disks to a ZFS volume, and expand the usable space while also sticking to RAIDZ1?

Thank you.
 
Joined
Apr 9, 2015
Messages
1,258
With FreeNAS you have three options: replace each drive with a larger version, add a second vDev, or back up the pool, recreate it and then restore the data. In-place expansion of a vDev is not possible. From my understanding, the version of ZFS that Solaris owns can do that, but OpenZFS does not have support for adding more drives to an existing vDev.

Right now the best solution for you will likely be to buy three drives and expand your pool with another vDev, since redundancy is not a problem for you. But if you are looking at drives larger than 2TB, it is not really a good idea to use RAIDZ1.
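If you do go the extra-vDev route, the CLI equivalent is roughly the sketch below, though on FreeNAS you would normally do this through the Volume Manager in the GUI so the disks get partitioned and labelled the way the middleware expects. The device names here are just placeholders.

Code:
zpool status vol0                  # confirm the layout of the existing raidz1 vDev
zpool add vol0 raidz1 da3 da4 da5  # da3-da5 stand in for the three new disks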
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Ok, so not the answer I was hoping for, but I thought as much :) Thanks for confirming.

Backing up and recreating the pool will be my option going forward. I guess this will allow me to start fresh again. The plan at the moment is to connect a drive with USB or eSATA (if I have it), format it as UFS and start copying the data. Or, if I have eSATA, perhaps I could create a single-drive ZFS pool and use zfs send/receive, for ease.
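In rough terms I imagine that would look something like the sketch below (the device and dataset names are just guesses on my part):

Code:
zpool create Backup da4                                   # single-drive pool on the external disk
zfs snapshot vol0/Storage@migrate
zfs send vol0/Storage@migrate | zfs recv Backup/Storage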

I also have an old box lying around, which would be severely under-specced (only 2GB RAM) - perhaps I can use this as a destination for zfs send/receive.

Doing a plain old cp to an external UFS drive will probably be the way to go, as I could start on this pretty quickly with minimal setup work versus configuring another server with a ZFS pool. Could you see any potential problems with this?
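Roughly what I have in mind for the UFS route (again, the device name is a guess):

Code:
gpart create -s gpt da4             # partition the external drive
gpart add -t freebsd-ufs da4
newfs /dev/da4p1                    # format the new partition as UFS
mkdir -p /mnt/usbbackup
mount /dev/da4p1 /mnt/usbbackup
cp -Rpv /mnt/vol0/Storage/ /mnt/usbbackup/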

Thanks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I also have an old box lying around, which would be severely under-specced (only 2GB RAM) - perhaps I can use this as a destination for zfs send/receive.

Yeah, I wouldn't even consider this. I can't even guarantee that the system would boot up and "function" with just 2GB of RAM. You should have 8GB of RAM, even if it's just for backups.
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Thanks, cyberjock. I was thinking out loud, and thinking about it now, I don't want to do that - too much work to set up another system. I'll be babysitting copying files for the next week ;)

A little bit of pain, but I am going to take the opportunity to do some software upgrades while my server is out of commission. Once I have my pool together, the next time I run out of space I will just need to buy 5 new HDDs and replace them one by one.
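From what I've read, that replace-one-by-one process is roughly the sketch below, repeated per disk with a full resilver in between (device names are placeholders):

Code:
zpool set autoexpand=on vol0    # pool grows once every member has been replaced
zpool replace vol0 ada1 ada5    # swap one old disk for a new, larger one
zpool status vol0               # wait for the resilver to finish before the next swap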

That said, I'll have 8TB usable space and will probably turn on compression if the RAM overhead isn't too high (I have 10GB in the current server), so that should last me a little while. When that's full I will replace the machine and spec out something with more drives and at minimum RAIDZ2 (as I will no doubt be using 4+ TB disks by then).
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
if I have eSATA, perhaps I could create a single-drive ZFS pool and use zfs send/receive
Do this.
will probably turn on compression if the RAM overhead isn't too high
I'm not aware of any measurable RAM overhead with dataset compression. With LZ4 being so fast, there's no reason to disable it.
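Turning it on is a one-liner if it isn't already enabled:

Code:
zfs set compression=lz4 vol0/Storage
zfs get compression vol0/Storage    # verify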
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Hi depasseg ... ended up getting a 4TB USB drive. Got it connected, mounted as a single-drive striped ZFS volume ... now to tackle moving the data over :)
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Thought I had this one solved. On Sunday afternoon I started the zfs send/recv from an SSH session. I calculated approx. 37 hours to transfer ~3.5TB over USB 2.0.

I made a zfs snapshot of the volume I wanted to back up (not backing up jails, just a volume of storage/data) and started the process. Away it went, albeit with no progress or any indication it was working. Eventually the SSH session timed out, or maybe my laptop slept; when I checked it in the evening I got the ol' 'Broken pipe' error in my terminal.

So, now I have my additional drives and a long weekend to sort it out... But, when looking in volume manager, I still have 3.5TB of available space on the Backup drive - I guess the zfs send/recv didn't work!

I wonder if I am going about this right - I made a snap in the GUI, then from SSH I ran zfs send /vol0/Storage@storagebackup | zfs recv Backup (or something, forgive me if I got the syntax wrong). Away it went - you know, where it doesn't give you any feedback but something's happening because I didn't get my prompt back :)

On to question time - Is it possible to do a zfs send/recv with some feedback showing any progress? Is there a verbosity flag I can add onto these commands?

And, will the send/recv NOT work if the volume is being written to? (While I was doing the send/recv, my jails would've been writing data to the Storage volume)

Lastly, and a little off-topic perhaps - upon a successful zfs send/recv, if I do another one from the same snapshot (after data has been written, and the snap is expanding) will it just send the changed/new data, or will it send the entire volume again? I.e., does zfs send work incrementally?

Thanks for everyone's patience!
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Started again ... created a snap of the volume I want to move, vol0/Storage.

Ran this
zfs send vol0/Storage@bckp | zfs recv -F Backup

Here's an output of zfs list FYI (with other stuff taken out):
NAME          USED   AVAIL  REFER  MOUNTPOINT
Backup         612K  3.57T   144K  /mnt/Backup
vol0          3.54T  24.5G   304K  /mnt/vol0
vol0/Storage  3.50T  24.5G  3.50T  /mnt/vol0/Storage
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
On to question time - Is it possible to do a zfs send/recv with some feedback showing any progress? Is there a verbosity flag I can add onto these commands?

And, will the send/recv NOT work if the volume is being written to? (While I was doing the send/recv, my jails would've been writing data to the Storage volume)

Lastly, and a little off-topic perhaps - upon a successful zfs send/recv, if I do another one from the same snapshot (after data has been written, and the snap is expanding) will it just send the changed/new data, or will it send the entire volume again? I.e., does zfs send work incrementally?
The -v flag on send/recv will show progress.

A failed replication will leave the destination unchanged. Resumable replications are coming to FreeBSD 10.x, not sure of the details.

Replicating the same snapshot again will not send any new data. To send only the changes, you need to take a new snapshot and then do an incremental replication from the old snapshot to the new one.

Check the documentation for details.

I meant to also say, for best results, initiate your send/recv from the console shell, to avoid problems with broken SSH connections.
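As a rough example, with placeholder snapshot names:

Code:
# verbose send so you can watch progress
zfs send -v vol0/Storage@snap1 | zfs recv -F Backup
# later, take a second snapshot and send only the differences
zfs snapshot vol0/Storage@snap2
zfs send -v -i vol0/Storage@snap1 vol0/Storage@snap2 | zfs recv -F Backup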
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
initiate your send/recv from the console shell, to avoid problems with broken SSH connections.
...or use tmux to create a session that you can attach/detach at will.
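For example:

Code:
tmux new -s replication                             # start a named session
zfs send -v vol0/Storage@bckp | zfs recv -F Backup  # run the job inside it
# detach with Ctrl-b then d; the transfer keeps running
tmux attach -t replication                          # re-attach later to check on it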
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Okay ... maybe it was a broken SSH pipe that was causing the send/recv to bomb out.

I started it through the FreeNAS shell in the GUI, with -v on both send and recv. That GUI shell doesn't seem to have a scrollable buffer, so I didn't catch the start of the output, but it said something about creating Backup/Data@bckp, and now:

11:49:49 8.78G vol0/Storage@bckp
11:49:50 8.82G vol0/Storage@bckp
11:49:51 8.85G vol0/Storage@bckp
11:49:52 8.88G vol0/Storage@bckp
11:49:53 8.91G vol0/Storage@bckp
11:49:54 8.95G vol0/Storage@bckp
11:49:55 8.98G vol0/Storage@bckp
11:49:56 9.01G vol0/Storage@bckp
11:49:57 9.05G vol0/Storage@bckp
11:49:58 9.08G vol0/Storage@bckp
etc...

That seems good; it looks like it's doing something, which is a step further :)

Now, when I started the command (zfs send vol0/Storage@bckp | zfs recv Backup/Data) and it said it was creating Backup/Data@bckp, I'm thinking - is it making a new snapshot on the Backup volume that I will need to restore? Or should I expect a full working duplicate of the vol0/Storage filesystem - i.e. if Backup is mounted at /mnt/Backup, I should be able to browse into /mnt/Backup/Data and see everything that was in /mnt/vol0/Storage, as of when the snap was taken?

Thanks for everyone's patience, I am determined to get this working and not resort to cp...
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
So, I came back to my PC, was back at a prompt, and couldn't scroll up in the GUI shell - but browsing to /mnt/Backup shows it's still empty. zfs list shows Backup as empty as well.

Has it created a new snap that I need to restore? Tried zfs restore but that doesn't seem to be a supported command? Tried mounting Backup/Data, but nope, it doesn't exist.

Is it because I am trying to move a dataset, instead of a volume? Is there any logging that I can go back to?

I've been researching and reading so many blog posts, but pretty much all of them are "run zfs send/recv and you're good" type articles - I haven't been able to find anything about troubleshooting...
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Tried zfs restore but that doesn't seem to be a supported command?
Are you just winging it at this point? zfs restore? What is your plan? The link I gave you is a decent starting point. You might want to consider using the GUI for the snapshots and replication.
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Are you just winging it at this point? zfs restore? What is your plan? The link I gave you is a decent starting point. You might want to consider using the GUI for the snapshots and replication.

Winging it? LOL, I feel like I am losing my mind, or missing something completely (doing the same thing expecting a different result). To go back to the start, my goal is to get the vol0/Storage dataset onto the Backup dataset; essentially I could achieve this with cp, but that'll take babysitting, and if it falls over then there's a problem.

Yes, I followed your steps above first - Step 3 in your post is what applies to me. Get data from A and copy to B, that's all.

My SSH sessions seem to time out even if my laptop doesn't sleep - so that'll quit the zfs send/recv command. The GUI shell isn't very practical, so I couldn't tell why it stopped last time.

Now, I am going to use tmux so I can run the command, and come back to check it without fear that I'll either lose the SSH session or lose the buffer of what's happened (to work out why the dest dataset is always empty).

The steps I am about to take are:
1. Create a recursive snapshot of the dataset I want to move in the GUI (vol0/Storage).
2. I end up with vol0/Storage@bckp; it's 0KB for now until changes happen.
3. Open up a shell and with tmux run 'zfs send -vR vol0/Storage@bckp | zfs recv -vF Backup'
4. I get the following output, which seems like something is happening:
[root@zion] /# zfs send -vR vol0/Storage@bckp | zfs recv -vF Backup
send from @ to vol0/Storage@bckp estimated size is 3.51T
total estimated size is 3.51T
TIME SENT SNAPSHOT
receiving full stream of vol0/Storage@bckp into Backup@bckp
13:21:29 110M vol0/Storage@bckp
13:21:30 376M vol0/Storage@bckp
13:21:31 653M vol0/Storage@bckp
etc...

So, now it's doing that. The way I read that output is that it's moving the vol0/Storage@bckp snapshot and creating a new snapshot, Backup@bckp. Now, I assume that if it's making a snapshot Backup@bckp, I will need to restore that snapshot to the Backup dataset? Or will the dataset just be created and automounted and become browsable in the shell, etc. (just like vol0/Storage is mounted and browsable from the shell)?

What would you expect to happen when the above is finished?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
So before, you were replicating into Backup/Data, but now it's just going to Backup. Is Data a dataset in Backup? Can you provide a full "zfs list" in code tags?
Now, I assume that if it's making a snapshot Backup@bckp, I will need to restore that snapshot to the Backup dataset? Or will the dataset just be created and automounted and become browsable in the shell, etc. (just like vol0/Storage is mounted and browsable from the shell)?
These two statements are not either/or. The first is wrong, and the second is a maybe.

Once the replication completes successfully, Backup will have a copy of all data and datasets from vol0/Storage. There is no need (nor ability, for that matter) to "restore".

The second statement is a "maybe" because I don't remember if the replicated dataset will auto-mount. You can check the property with "zfs get all Backup" and look for the mountpoint option. If it's listed but you don't see the data, you might have to use the "zfs mount Backup" command. If it's not mounted, you can also use the GUI to "import" the Backup pool. But at this point you don't really care whether it's mounted, as long as you are confident that the data is stored there (see "zfs list").
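For example:

Code:
zfs list -r Backup           # confirm the data and child datasets arrived
zfs get mountpoint Backup    # see where it would mount
zfs mount Backup             # mount it manually if it isn't mounted already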
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Hi, yes - before I was trying to replicate into Backup/Data without the -F option. This time I tried replicating just into Backup with -F added.

Here is my complete zfs list output. In a tmux window the zfs send/recv is going on as per my last post. Exciting, because the Backup dataset has 79GB used! I read this as the zfs send/recv working - as of the output below, it had moved 79GB.

Code:
[root@zion] /# zfs list
NAME                                                                    USED  AVAIL  REFER  MOUNTPOINT
Backup                                                                 79.0G  3.49T   144K  /mnt/Backup
vol0                                                                   3.54T  28.2G   304K  /mnt/vol0
vol0/.system                                                           2.97M  28.2G   234K  /mnt/vol0/.system
vol0/.system/cores                                                      192K  28.2G   192K  /mnt/vol0/.system/cores
vol0/.system/rrd-75ba5e8e417d4fedb54dd506e2b7dca1                       192K  28.2G   192K  /mnt/vol0/.system/rrd-75ba5e8e417d4fedb54dd506e2b7dca1
vol0/.system/samba4                                                     623K  28.2G   623K  /mnt/vol0/.system/samba4
vol0/.system/syslog-75ba5e8e417d4fedb54dd506e2b7dca1                   1.76M  28.2G  1.76M  /mnt/vol0/.system/syslog-75ba5e8e417d4fedb54dd506e2b7dca1
vol0/Storage                                                           3.50T  28.2G  3.50T  /mnt/vol0/Storage
vol0/jails                                                             41.5G  28.2G   831K  /mnt/vol0/jails
vol0/jails/.warden-template-VirtualBox-4.3.12                           774M  28.2G   774M  /mnt/vol0/jails/.warden-template-VirtualBox-4.3.12
vol0/jails/.warden-template-centos-6.4                                  441M  28.2G   441M  /mnt/vol0/jails/.warden-template-centos-6.4
vol0/jails/.warden-template-pluginjail                                  769M  28.2G   768M  /mnt/vol0/jails/.warden-template-pluginjail
vol0/jails/.warden-template-pluginjail-9.2-RELEASE-x64                  769M  28.2G   769M  /mnt/vol0/jails/.warden-template-pluginjail-9.2-RELEASE-x64
vol0/jails/.warden-template-pluginjail-9.2-RELEASE-x64-20141107083605   769M  28.2G   769M  /mnt/vol0/jails/.warden-template-pluginjail-9.2-RELEASE-x64-20141107083605
vol0/jails/.warden-template-pluginjail-9.2-RELEASE-x64-20141110002311   769M  28.2G   769M  /mnt/vol0/jails/.warden-template-pluginjail-9.2-RELEASE-x64-20141110002311
vol0/jails/.warden-template-pluginjail-9.2-RELEASE-x64-20141110211112   769M  28.2G   769M  none
vol0/jails/.warden-template-standard                                    273M  28.2G   273M  /mnt/vol0/jails/.warden-template-standard
vol0/jails/couchpotato_1                                               1.11G  28.2G  1.86G  /mnt/vol0/jails/couchpotato_1
vol0/jails/kodi_db1                                                     286M  28.2G   550M  /mnt/vol0/jails/kodi_db1
vol0/jails/sabnzbd_1                                                   24.9G  28.2G  25.7G  /mnt/vol0/jails/sabnzbd_1
vol0/jails/sonarr_1                                                    2.74G  28.2G  3.49G  /mnt/vol0/jails/sonarr_1
vol0/jails/transmission_1                                              7.19G  28.2G  7.93G  /mnt/vol0/jails/transmission_1


Checking back with the status of zfs send/recv ...

Code:
14:04:48   87.9G   vol0/Storage@bckp
14:04:49   87.9G   vol0/Storage@bckp
14:04:50   88.0G   vol0/Storage@bckp
14:04:51   88.0G   vol0/Storage@bckp


Seems like it's working so far? ... Not sure what was different - maybe I hadn't done a recursive snapshot, maybe the combination of zfs send -R and zfs recv -F ... the pitfalls of not knowing exactly what I was doing, and underestimating the complexity of ZFS.

Also, apologies for earlier - I didn't realise there was a code snippet format in these forums.
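Once it finishes I'm planning to sanity-check it with something like:

Code:
zfs list -r -t all Backup    # the datasets plus the @bckp snapshot should all show up
ls /mnt/Backup               # spot-check a few files, assuming it's mounted there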
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Hi, yes - before I was trying to replicate into Backup/Data without the -F option. This time I tried replicating just into Backup with -F added.

Seems like it's working so far? ... Not sure what was different - maybe I hadn't done a recursive snapshot, maybe the combination of zfs send -R and zfs recv -F ... the pitfalls of not knowing exactly what I was doing, and underestimating the complexity of ZFS.
I think part of the difference from before is that you didn't have a "Data" dataset under Backup.

Fingers crossed!
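Once that first pass finishes, you can catch up anything written to Storage in the meantime with an incremental send (the second snapshot name is just an example):

Code:
zfs snapshot -r vol0/Storage@bckp2
zfs send -R -i vol0/Storage@bckp vol0/Storage@bckp2 | zfs recv -F Backup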
 