Moving to larger hard drives

okynnor

Explorer
Joined
Mar 14, 2019
Messages
71
Hi,

We want to move to larger hard drives on our FreeNAS, from 2TB to 4TB. We have been able to copy some datasets, but the dataset that's configured for iSCSI isn't even viewable with an 'ls -l' command. Is there a way to 1. view that dataset and 2. move it off like the rest?
 
Joined
Oct 18, 2018
Messages
969
Hi @okynnor. Perhaps a bit of a tangential question, but why not resilver each of your 2TB drives with a 4TB and avoid having to move the data between datasets?
 
Joined
Oct 18, 2018
Messages
969
No, I am not. I'm saying that you could, if you chose to and had enough bays, increase the size of your vdev (and therefore your pool) by inserting a 4TB drive and then replacing (resilvering) one of your 2TB disks with the new 4TB disk. Do this with each 4TB disk until all of the 2TB disks are replaced. This will result in a vdev composed of the 4TB disks, with all the additional space that comes with the larger drives, without requiring that you migrate data between vdevs. Check the User Guide under "Resilvering a drive to grow a pool" or something like that.
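From the CLI, the cycle for each disk looks roughly like this (just a sketch; "tank", "ada0", and "ada4" are placeholder pool/disk names, and you'd normally do the same thing from the FreeNAS GUI's volume manager):

Code:
# placeholder pool/disk names - adjust for your system
zpool set autoexpand=on tank     # allow the pool to grow once every member is bigger
zpool replace tank ada0 ada4     # swap a 2TB disk (ada0) for a new 4TB disk (ada4)
zpool status tank                # wait for the resilver to finish before replacing the next disk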
 

okynnor

Explorer
Joined
Mar 14, 2019
Messages
71
I see. I have a question, though, which I think could challenge this process. I have 2 vdevs, each a mirror of 2 x 2TB HDDs: volume 1 and volume 2. With the new 4 x 4TB HDDs, I would like to go down to just one vdev. Is it still possible to follow your suggestion and reach that objective?

It seems, though, that no one on the forum has been able to answer my original question so far.
 
Joined
Oct 18, 2018
Messages
969
You're correct, you couldn't resilver as I suggested and change the number of disks in the vdev. For that, you'd need to create a new pool and migrate everything to the new pool.

Sorry I can't answer your original question.
 

okynnor

Explorer
Joined
Mar 14, 2019
Messages
71
I have thought of a solution. I could set up a second computer to run FreeNAS: set up the 4-bay NAS box with a new instance of FreeNAS, create the one new vdev with 4 x 4TB drives, and set up a new iSCSI LUN. Then use XenServer's iSCSI feature to add a new iSCSI storage repository, so there will be 2 LUNs, and tell XenServer to move the data over, which in this case means my XenServer VMs.

It seems like more work, but if the data can't be retrieved the way it can from a regular dataset without iSCSI, it's the only way.
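On the FreeNAS side, I gather the zvol backing the new LUN would be created with something like this (just a sketch; the pool name and size are placeholders, and the iSCSI extent/target would still be configured in the GUI). The XenServer part is from memory, so check xe help for the exact syntax:

Code:
# placeholder pool name and size
zfs create -s -V 2T newpool/xen-lun0
# then point a new iSCSI extent + target at newpool/xen-lun0 in the FreeNAS GUI
#
# On XenServer (from memory - check "xe help sr-create" for the exact device-config keys):
# xe sr-create type=lvmoiscsi name-label=freenas-new \
#     device-config:target=<freenas-ip> device-config:targetIQN=<iqn> device-config:SCSIid=<scsi-id>
# then move the VM disks to the new SR (Storage XenMotion / "Move VM" in XenCenter)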

What do you think?
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
zfs send | zfs recv has worked well for me. I've not done it with iSCSI, but I can't see why it wouldn't work.
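Roughly (pool/zvol names just as an example):

Code:
# example names only
zfs snapshot tank/iscsi-zvol@migrate
zfs send tank/iscsi-zvol@migrate | zfs recv -F newtank/iscsi-zvol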
 

okynnor

Explorer
Joined
Mar 14, 2019
Messages
71
@blueether: That's great! I Googled your command to learn more and found this link. I also read that snapshots are not full backups. Would it still work, though?
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
It looks to work as expected. I just created a 1M file in an iSCSI disk image in Proxmox, took a snapshot, and then did a zfs send:
Code:
root@freenas:~ # zfs list |grep VMStore
VMStore                                                         517G  2.13T    88K  /mnt/VMStore
VMStore/ISO                                                     508G  2.62T  1.72M  -
VMStore/ISOs                                                   1.27G  2.13T  1.27G  /mnt/VMStore/ISOs
VMStore/VMimages                                               1.15G  2.13T  1.15G  /mnt/VMStore/VMimages
VMStore/iocage                                                 6.41G  2.13T  4.10M  /mnt/VMStore/iocage
VMStore/iocage/download                                         272M  2.13T    88K  /mnt/VMStore/iocage/download
VMStore/iocage/download/11.2-RELEASE                            272M  2.13T   272M  /mnt/VMStore/iocage/download/11.2-RELEASE
VMStore/iocage/images                                            88K  2.13T    88K  /mnt/VMStore/iocage/images
VMStore/iocage/jails                                           5.09G  2.13T    88K  /mnt/VMStore/iocage/jails
VMStore/iocage/jails/plex                                      5.09G  2.13T   188K  /mnt/VMStore/iocage/jails/plex
VMStore/iocage/jails/plex/root                                 5.09G  2.13T  5.83G  /mnt/VMStore/iocage/jails/plex/root
VMStore/iocage/log                                               92K  2.13T    92K  /mnt/VMStore/iocage/log
VMStore/iocage/releases                                        1.05G  2.13T    88K  /mnt/VMStore/iocage/releases
VMStore/iocage/releases/11.2-RELEASE                           1.05G  2.13T    88K  /mnt/VMStore/iocage/releases/11.2-RELEASE
VMStore/iocage/releases/11.2-RELEASE/root                      1.05G  2.13T  1.05G  /mnt/VMStore/iocage/releases/11.2-RELEASE/root
VMStore/iocage/templates                                         88K  2.13T    88K  /mnt/VMStore/iocage/templates
VMStore/test                                                     88K  2.13T    88K  /mnt/VMStore/test
root@freenas:~ # zfs send VMStore/ISO@manual-20191204b | zfs recv -F VMStore/test/ISO_TEST
root@freenas:~ # zfs list | grep VMStore
VMStore                                                         517G  2.13T    88K  /mnt/VMStore
VMStore/ISO                                                     508G  2.62T  1.72M  -
VMStore/ISOs                                                   1.27G  2.13T  1.27G  /mnt/VMStore/ISOs
VMStore/VMimages                                               1.15G  2.13T  1.15G  /mnt/VMStore/VMimages
VMStore/iocage                                                 6.41G  2.13T  4.10M  /mnt/VMStore/iocage
VMStore/iocage/download                                         272M  2.13T    88K  /mnt/VMStore/iocage/download
VMStore/iocage/download/11.2-RELEASE                            272M  2.13T   272M  /mnt/VMStore/iocage/download/11.2-RELEASE
VMStore/iocage/images                                            88K  2.13T    88K  /mnt/VMStore/iocage/images
VMStore/iocage/jails                                           5.09G  2.13T    88K  /mnt/VMStore/iocage/jails
VMStore/iocage/jails/plex                                      5.09G  2.13T   188K  /mnt/VMStore/iocage/jails/plex
VMStore/iocage/jails/plex/root                                 5.09G  2.13T  5.83G  /mnt/VMStore/iocage/jails/plex/root
VMStore/iocage/log                                               92K  2.13T    92K  /mnt/VMStore/iocage/log
VMStore/iocage/releases                                        1.05G  2.13T    88K  /mnt/VMStore/iocage/releases
VMStore/iocage/releases/11.2-RELEASE                           1.05G  2.13T    88K  /mnt/VMStore/iocage/releases/11.2-RELEASE
VMStore/iocage/releases/11.2-RELEASE/root                      1.05G  2.13T  1.05G  /mnt/VMStore/iocage/releases/11.2-RELEASE/root
VMStore/iocage/templates                                         88K  2.13T    88K  /mnt/VMStore/iocage/templates
VMStore/test                                                   1.81M  2.13T    88K  /mnt/VMStore/test
VMStore/test/ISO_TEST                                          1.72M  2.13T  1.72M  -

 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
I just changed the path in the iSCSI setup on FreeNAS and reloaded the VM in Proxmox, and the two files I created were there.

Do your own testing, of course.
 

okynnor

Explorer
Joined
Mar 14, 2019
Messages
71
I notice that the space used by the original and the copy is not identical, though.
 

okynnor

Explorer
Joined
Mar 14, 2019
Messages
71
@blueether, the move was successful as per your steps/instructions. My VMs are now all up and running again. Wow.

Now that all my data has been moved onto a single mirrored volume, I would like to introduce the 4 x 4TB NAS drives. I assume I would follow the idea from @PhiloEpisteme: introduce the first 4TB NAS drive, make a volume, move all the data off the mirrored drives onto the new volume, then slowly introduce the remaining drives to make the volume bigger (striping) until all 4 x 4TB drives (16TB total) are in. Is this the right concept?
 

okynnor

Explorer
Joined
Mar 14, 2019
Messages
71
I thought I would share what I have decided to do and how I moved the data over.

I decided to go with 1 zpool made up of 2 vdevs, with each vdev set up as a mirror. I followed the great write-up here. It explains that I get the best of both worlds in speed and reliability. RAIDZ2 is great, but the performance hit when I lose one hard drive is enough to push me away. In addition, with my chosen config, I can easily expand my storage capacity without much performance degradation.
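For anyone curious, the CLI equivalent of the layout is roughly this (a sketch only; disk names are placeholders, and I actually created the pool in the FreeNAS GUI):

Code:
# two 2-way mirror vdevs striped into one pool - placeholder disk names
zpool create FreeNASBIGPOOL mirror ada0 ada1 mirror ada2 ada3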

Finally, because I wanted to move the data off my old drives, it was as easy as "offlining" one of the 4 mirrored drives in order to re-insert a disk from the old volume, then using this command

Code:
zfs send FreeNASVolume1/Media@manual-20191212 | zfs recv -F FreeNASBIGPOOL/Media

to copy the data from the old zpool to the new one.

Once the data was in the new pool, I backtracked my steps above: removed the old zpool, re-inserted the 4th *new* 4TB hard drive, set it to online, and let the pool resilver the data onto that 4th hard drive.
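For reference, that disk shuffle from the CLI is roughly this (just a sketch; disk names are placeholders, and the offline/online steps can also be done from the GUI):

Code:
# placeholder disk names (ada3 = one of the new 4TB disks)
zpool offline FreeNASBIGPOOL ada3    # free up a bay for a disk from the old volume
zpool import FreeNASVolume1          # bring the old pool back (a degraded mirror imports fine)
#   ...run the zfs send | zfs recv shown above...
zpool export FreeNASVolume1          # done with the old pool
zpool online FreeNASBIGPOOL ada3     # re-insert the 4th new disk and let it resilver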

RAIDZ1 was not an option, as it was the worst option in terms of performance vs. peace of mind, even though it was the best in terms of available space.

Thank you to everyone who has helped me!!
 