SOLVED Growing ZFS Volume, or efficient way to move data and recreate Volume

Status
Not open for further replies.

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Crossing those fingers helped me out this time. Thirty hours later ...

Code:
20:09:03   3.51T   vol0/Storage@bckp
20:09:04   3.51T   vol0/Storage@bckp
cannot receive value property on Backup: invalid property value
received 3.51TB stream in 110887 seconds (33.2MB/sec)


Not sure about that property value error, but the data is browsable from the shell and it all looks to be there ...
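(110887 seconds is just under 31 hours, which lines up with the thirty hours above.) For anyone who hits the same message: it looks like the receive side failed to apply one of the dataset properties carried in the stream, while the data itself transferred fine. A sketch of the kind of pipeline that produces output like this - the exact flags are a reconstruction, not necessarily what I ran:

Code:
# Snapshot the source, then stream it to the backup pool.
# -v prints the per-second progress lines shown above;
# -R includes snapshots and dataset properties in the stream,
# which is the usual source of "cannot receive ... property" noise.
zfs snapshot vol0/Storage@bckp
zfs send -v -R vol0/Storage@bckp | zfs receive -F Backup


Newer OpenZFS releases can skip a problematic received property with zfs receive -x <property>, though I'm not sure that option exists on FreeNAS 9.x - check man zfs before relying on it.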

Code:
NAME                                                                    USED  AVAIL  REFER  MOUNTPOINT
Backup                                                                 3.50T  71.6G  3.50T  /mnt/Backup
vol0/Storage                                                           3.50T  28.2G  3.50T  /mnt/vol0/Storage


The discrepancy in free space will be from the jails, which I chose not to back up (from what I've read they can be a pain to get working again, and it's not hard to configure them from scratch).

Now to add drives, destroy and recreate the vol0 pool - then do the reverse of the above (sketched below). Hopefully I don't make as many mistakes this time around. tmux is fantastic - I'll be using it A LOT from now on!
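Roughly what I have in mind for the restore, run inside tmux so the transfer survives a dropped SSH session (a sketch using the names from this thread - adjust as needed):

Code:
# Start a named session; detach with Ctrl-b d and
# reattach later with: tmux attach -t restore
tmux new -s restore

# Inside the session: snapshot the backup pool and
# stream it back into the recreated vol0.
zfs snapshot Backup@restore
zfs send -v Backup@restore | zfs receive -F vol0/Storage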

depasseg, I appreciate you sticking with me through this one even with my frustrations, thank you a heap.
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Yes, everything has progressed very well - full steam ahead.

One odd thing - I did the upgrade to 9.3-STABLE before doing much reconfiguration. In Volume Manager in the GUI, I see that vol0 is nested under another vol0 - it kind of makes my ZFS datasets look like vol0/vol0/Storage and vol0/vol0/jails.

zfs list looks pretty normal, so I'm thinking it's just GUI oddness.

Code:
NAME                                                   USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                           944M  6.04G    31K  none
freenas-boot/ROOT                                      936M  6.04G    31K  none
freenas-boot/ROOT/default                              936M  6.04G   936M  legacy
freenas-boot/grub                                     7.77M  6.04G  7.77M  legacy
vol0                                                  3.50T  3.52T   486K  /mnt/vol0
vol0/.system                                          3.03M  3.52T  1.61M  legacy
vol0/.system/cores                                     230K  3.52T   230K  legacy
vol0/.system/rrd-75ba5e8e417d4fedb54dd506e2b7dca1      230K  3.52T   230K  legacy
vol0/.system/samba4                                    620K  3.52T   620K  legacy
vol0/.system/syslog-75ba5e8e417d4fedb54dd506e2b7dca1   377K  3.52T   377K  legacy
vol0/Storage                                          3.50T  3.52T  3.50T  /mnt/vol0/Storage
vol0/jails                                            22.7M  3.52T  22.7M  /mnt/vol0/jails


And a screenshot of the GUI Volume Manager:

http://pasteboard.co/tfN4mxb.png

Not sure why the available space is reported differently in Volume Manager versus zfs list. The only differences are that I upgraded to 9.3 and set vol0 as the System Dataset; before those two changes, Volume Manager looked the same as zfs list.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I see that vol0 is nested under another vol0 - kind of makes my ZFS datasets looks like vol0/vol0/Storage and vol0/vol0/jails.
It's not just your system. That's what it looks like in 9.3 (.1?). The first vol0 is the raw pool space, and the second is the space after RAID. The second should match zfs list.
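You can see the same pair of numbers from the shell, if that helps: zpool list reports raw capacity with parity included, while zfs list reports usable space after redundancy. Purely illustrative, with a made-up 4x2TB RAIDZ1 as the layout:

Code:
# Raw capacity, parity included - matches the outer vol0 in the GUI.
# A hypothetical 4x2TB RAIDZ1 would show roughly 7.2T here.
zpool list vol0

# Usable space after redundancy - matches the inner vol0.
# The same pool would show roughly 5.2T of usable space.
zfs list vol0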
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
It's not just your system. That's what it looks like in 9.3 (.1?). The first vol0 is the raw pool space, and the second is the space after RAID. The second should match zfs list.

Okay cool. Well, all is working. Jails all setup again and in terms of FreeNAS my problems are solved.

Thanks heaps for the patience and help, depasseg.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Glad to hear it's working and happy to help. :smile:
 