Migrate Data & Drives from old to new system

Status
Not open for further replies.

Tiwox

Cadet
Joined
Mar 2, 2016
Messages
5
Hi guys

I'm currently planning to upgrade several parts of my system and need your help on how to do this.
My current system is what I call the “IKEA NAS” (http://imgur.com/a/bQ7De): a 6x4TB z2 pool, 16GB RAM, and an Intel Atom CPU. The CPU will brick itself in the foreseeable future, and the pool is about 85% full.
Because of this I'm planning to upgrade to a proper rackmount server, probably a Supermicro with 24 or more LFF drive bays.

After doing some research, I've decided to go with an 11x4TB z3 pool, which will get a twin brother once the need arises. This will give me around 30TB per vDev and 60TB of usable space in total.
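Quick sanity check on those numbers (back-of-the-envelope only; real usable space will be a bit less after ZFS metadata and reservation overhead, and the ~29 TiB figure is where "around 30TB" comes from):

```python
# Rough usable capacity of an 11-disk raidz3 vdev with 4 TB drives.
# Ignores ZFS overhead, so treat it as a ceiling, not a promise.
drives = 11
parity = 3            # raidz3 loses three disks to parity
tb_per_drive = 4      # decimal terabytes, as printed on the drive label

usable_tb = (drives - parity) * tb_per_drive        # 32 decimal TB
usable_tib = usable_tb * 1e12 / 2**40               # closer to what zfs list shows
print(usable_tb, round(usable_tib, 1))              # one vdev
print(usable_tb * 2, round(usable_tib * 2, 1))      # two striped vdevs
```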

Now comes the tricky part: how to do this transition “smart”.

I see two main options:
1) Add a second vDev to the current system (striping them together), thereby solving the storage shortage and buying some time to find a proper replacement. Then get the new system with 11 drives, migrate the current pool (2x 6x4TB z2) to the new system (the z3 pool), trash the old pool, build a second z3 vDev, and stripe them together.
2) Get the new system with 11 drives, migrate everything to the new pool, and keep the old drives to build the second vDev later.

Since this was my first build, some mistakes were made, and I won't reuse the configuration but instead start clean. So I want to put the old drives in a new, freshly installed system.

How would I go about option 1)?
  1. Get 6 new drives
  2. Simply “slap” the two vDevs together (stripe) to extend the pool? Or is there more to it?
  3. Get the new rig with 11 new drives
  4. Detach both vDevs from the old system
  5. Take the current 12 drives, put them in the new system
  6. Attach the old vDevs
  7. Migrate the data from the old vDevs to the new vDev
  8. Destroy the old vDevs
  9. Build a second vDev with 11 drives
  10. Stripe both new vDevs together, resulting in one new big pool with all data and 60TB capacity
Option 2) seems similar
  1. Get the new rig with 11 new drives
  2. Detach the vDev from the old system
  3. Take the current 6 drives, put them in the new system
  4. Attach the old vDev
  5. Migrate the data
So, is it really "that easy"? What precautions should I take (snapshots, etc.)? Is striping two (identical) vdevs really that easy / plug&play?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Yep, super easy, just use the volume manager to 'extend' your pool.
Use zfs send/recv to move the data around if you do it over the network. If it's done locally, use mv, since it's the fastest.
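Roughly, the commands behind those routes look like this. Dry-run sketch only: the commands are echoed, not executed, and the pool names (tank, newtank), host (newhost), and disk names (da6...) are made up:

```shell
# Extending the old pool with a second raidz2 vdev (what the GUI's
# Volume Manager "extend" does under the hood):
EXTEND="zpool add tank raidz2 da6 da7 da8 da9 da10 da11"

# Over the network: snapshot everything, then send/recv recursively:
SEND="zfs snapshot -r tank@migrate && zfs send -R tank@migrate | ssh newhost zfs recv -F newtank"

# Locally, with both pools imported on the same box, plain mv works:
MOVE="mv /mnt/tank/mydata /mnt/newtank/"

echo "$EXTEND"
echo "$SEND"
echo "$MOVE"
```

One caveat: adding a vdev is one-way. You cannot remove it from the pool later, so only extend the old pool if you're sure.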


Sent from my Nexus 5X using Tapatalk
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
How would I go about option 1)?
  1. Get 6 new drives
  2. Simply “slap” the two vDevs together (stripe) to extend the pool? Or is there more to it?
  3. Get the new rig with 11 new drives
  4. Detach both vDevs from the old system
  5. Take the current 12 drives, put them in the new system
  6. Attach the old vDevs
  7. Migrate the data from the old vDevs to the new vDev
  8. Destroy the old vDevs
  9. Build a second vDev with 11 drives
  10. Stripe both new vDevs together, resulting in one new big pool with all data and 60TB capacity
I might be missing something, but I don't see the point in slapping/striping another vdev into the Ikea system first. Why not just build your 11-drive RaidZ3 in the new system, migrate the data to it, and then at some point use the old drives to build a second 11-drive RaidZ3? Migrating can be done internally, by moving the old pool to the new system first (using mv, as SweetAndLow suggests, or with a local snapshot/replication), or over your local network with snapshot/replication.

By the way, it was fun to look at your Ikea build.
 

Tiwox

Cadet
Joined
Mar 2, 2016
Messages
5
Use zfs send/recv to move the data around if you do it over the network. If it's done locally use mv, since it's the fastest.

Are there any downsides (fragmentation, etc.) to mv? Would ZFS prefer a replication task, or is mv just as good as the ZFS built-in tools?


Why not just build your 11-drive RaidZ3 in the new system, then migrate the data to it, then at some point use the old drives to build a new 11-drive RaidZ3.

That would be the preferred option, but acquiring the new hardware turned out to be way more complicated than expected and I can't say exactly when it will arrive. Adding a second vdev to the old system would simply buy me some time, since I expect to hit the 90% of storage used "cliff" within the next four weeks.

By the way, it was fun to look at your Ikea build.

Thank you =)
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Are there any downsides (fragmentation, etc.) to mv? Would ZFS prefer a replication task, or is mv just as good as the ZFS built-in tools?

No downsides to mv; it's just moving data. One advantage is that if you have to stop it, you can pick up where you left off, unlike send/recv.

Sent from my Nexus 5X using Tapatalk
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
No downsides to mv; it's just moving data. One advantage is that if you have to stop it, you can pick up where you left off, unlike send/recv.
I'm interested to know how that works. It's not mentioned in the man page. In fact, it says the first thing mv does is remove the destination if it exists:
Code:
As the rename(2) call does not work across file systems, mv uses cp(1)
and rm(1) to accomplish the move.  The effect is equivalent to:

    rm -f destination_path && \
    cp -pRP source_file destination && \
    rm -rf source_file

How would you do that? Stop it with Ctrl-C and later run the same command again to resume?
 

Tiwox

Cadet
Joined
Mar 2, 2016
Messages
5
Thanks for all your replies!

How about using cp instead of mv, with an rm -rf afterwards? Wouldn't this be safer against an unforeseen interruption? That way you could check the data integrity of the copy before deleting the source (e.g. with an md5 check).
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
I would agree. You would probably want to use the same options mv does: cp -pRP.

Also, I wonder how datasets would work. I expect you would have to create the datasets in the new volume, then copy the stuff over to them.
 

Tiwox

Cadet
Joined
Mar 2, 2016
Messages
5
Also, I wonder how datasets would work. I expect you would have to create the datasets in the new volume, then copy the stuff over to them.

I'd probably create the datasets first, and then run cp -pRPv /mnt/<old>/<Dataset>/. /mnt/<new>/<Dataset>/ (and wait two days for it to finish). The trailing /. copies the dataset's contents, dotfiles included; a /* glob on the destination wouldn't do what you want.
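A runnable miniature of that copy-and-verify flow, with temp directories standing in for the real /mnt/<old> and /mnt/<new> dataset paths:

```shell
# Miniature copy-then-verify run; SRC/DST stand in for the real dataset paths.
set -e
SRC=$(mktemp -d)
DST=$(mktemp -d)
printf 'important data\n' > "$SRC/file1"
mkdir "$SRC/sub" && printf 'more data\n' > "$SRC/sub/file2"

# Same flags mv uses internally; "$SRC"/. copies the contents, dotfiles
# included, into the existing target directory.
cp -pRP "$SRC"/. "$DST"/

# Checksum every file on both sides and compare (cksum is POSIX; swap in
# md5 or sha256 if you prefer). Only after this matches would you rm the source.
SUM_SRC=$(cd "$SRC" && find . -type f -exec cksum {} + | sort)
SUM_DST=$(cd "$DST" && find . -type f -exec cksum {} + | sort)
[ "$SUM_SRC" = "$SUM_DST" ] && echo "verify OK"

rm -rf "$SRC" "$DST"
```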
 