Migrating data from old FreeNAS to new FreeNAS system over LAN - easiest way?

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
(I originally asked this here: https://www.reddit.com/r/freenas/comments/bfob7g/problem_sending_zfs_stream_from_one_system_to/ but then realised it might be better suited to this forum since it's FreeNAS-specific.)

I'm trying to migrate all my data from one FreeNAS system to another over the local LAN.

Both hosts are running FreeNAS-11.2-U3.

At first, I figured I'd try using zfs send/receive, since I figured the performance would be better than with rsync.

I made a snapshot on the source machine.

Code:
zfs snapshot -r datastore-naulty-place@migrate

I then tried this - on the receiving side:
Code:
nc -l 3333 | \
  mbuffer -q -s 128k -m 1G | \
  pv -rtab | \
  zfs receive -vF naulty-datastore


On the sending side:
Code:
zfs send -R datastore-naulty-place@migrate |\
   mbuffer -q -s 128k -m 1G | \
   pv -b | \
   nc 10.5.0.39  3333


However, I get the following error messages:
Code:
cannot unmount '/var/db/system': Device busy
64.0KiB 0:00:10 [5.82KiB/s] [5.82KiB/s]
mbuffer: error: outputThread: error writing to <stdout> at offset 0x30000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe

The first error (cannot unmount '/var/db/system': Device busy) seems to be caused by FreeNAS using the zpool naulty-datastore as its System Dataset - I'm not sure how to get around this.
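For what it's worth, you can confirm which pool is hosting the System Dataset from a shell on the receiving box (it shows up as <pool>/.system), and as far as I can tell the usual workaround is to move it to the boot pool under System → System Dataset in the GUI before doing a recursive receive into the pool root. A quick check, as a sketch:
Code:
# On the receiving box: look for the .system dataset, which sits at the
# top level of whichever pool currently holds the System Dataset
zfs list -o name | grep '\.system'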

Anyhow, I then re-created the datasets under this zpool and am now migrating them one by one. However, mbuffer still didn't work - it gave me the same error messages.

The lines that did work:
Code:
root@freenas[/mnt]# nc -l 3333 | \
  pv -rtab | \
  zfs receive -vF naulty-datastore/ablage

and
Code:
zfs send -R datastore-naulty-place/ablage@migrate | \
   pv -b | \
   nc 10.5.0.39  3333

However, the actual transfer performance is patchy - it starts off like this:
Code:
18.6GiB 0:02:52 [ 112MiB/s] [ 110MiB/s]

and then goes to this:
Code:
33.9GiB 0:11:33 [0.00 B/s] [50.0MiB/s]

Those are Megabits, right? That seems awfully...slow?

Is there a better way to migrate data easily from one FreeNAS 11 system to another over the local LAN?
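One quick way to rule the network in or out is a raw iperf3 run between the two boxes, which takes the disks out of the picture entirely (a sketch - assuming iperf3 is installed on both ends, and reusing the receiver's IP from above):
Code:
# On the receiving box: start a listener
iperf3 -s

# On the sending box: run a 30-second test against it
iperf3 -c 10.5.0.39 -t 30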
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
At first, I figured I'd try using zfs send/receive, since I figured the performance would be better than with rsync.
It is, but sending it over the network is still limited by how fast the network is.
Is there a better way to migrate data easily from one FreeNAS 11 system to another over the local LAN?
Can the systems be placed close enough together that all the drives could temporarily be connected to one system? The fastest way to go is a direct pool-to-pool copy with all the drives attached to one system. I have done this several times over the years, most recently with a new system at work. The old system housed about 230TB of data and we needed a copy of all of it on the new system. Over the network it would have taken more than twice as long. Even with all the disks connected to one system, it still took 16 days to copy. We bought a SAS controller and a set of 9' cables purely for the purpose of migrating that data.
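For reference, once both pools are attached to the same box the copy is just a local pipe. A rough sketch using the pool and snapshot names from earlier in this thread (FreeNAS would normally want the import done through the GUI, and the System Dataset caveat above still applies):
Code:
# Import the old pool on the new system (read-only is a reasonable precaution),
# then stream it straight into the new pool with no network in the middle
zpool import -o readonly=on datastore-naulty-place
zfs send -R datastore-naulty-place@migrate | zfs receive -vF naulty-datastore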
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Hmm, it might be tricky.

The new system (SuperMicro A2SDi-H-TP4F) has 12 SATA ports. I am using 8 of those for the new 8-drive RAID-Z1 array.

The old system (SuperMicro A2SDi-8C+-HLN4F) is using a 6-drive RAID-Z1 array.

Hence, I don't have enough SATA ports on the new system to directly connect the drives. I'd have to source a new HBA card, right? (And of course, I only have a single PCIe slot, which is currently occupied by an Intel Optane drive).

I've read up on it again, and it seems MiB is "mebibyte", which is similar to a megabyte but base 2 (1 MiB = 2^20 bytes). So the performance started off near the theoretical capacity of the network, then for some reason fell off.

The actual amount of data to copy over is around 30 TB, I believe - that should be manageable over a 10Gb network, right? I do have a 10Gb switch, the new system has a 10Gb NIC built in, and I can get a 10Gb NIC for the old one.
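Rough numbers, assuming the pools can keep up: 30 TB at the ~110 MiB/s (~115 MB/s) I was seeing on gigabit works out to roughly 30,000,000 MB ÷ 115 MB/s ≈ 260,000 s, i.e. about three days; at a sustained ~1 GB/s over 10Gb it drops to something like eight or nine hours.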
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
The actual amount of data to copy over is around 30 TB, I believe - that should be manageable over a 10Gb network, right? I do have a 10Gb switch, the new system has a 10Gb NIC built in, and I can get a 10Gb NIC for the old one.
You won't need a switch - just direct-connect the two servers over 10GbE. Depending on your send/receive pool speeds you can get close to line-rate 10GbE, which is why people pipe through netcat instead of the default ssh transport for zfs send.
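For what it's worth, a direct link just needs the two 10GbE ports on the same private subnet. In FreeNAS you would normally configure this under Network → Interfaces, but as a quick CLI sketch (interface names and addresses here are just examples):
Code:
# On the old box (sending side) - ix0 is an example interface name
ifconfig ix0 inet 10.99.0.1/24 up

# On the new box (receiving side)
ifconfig ix0 inet 10.99.0.2/24 up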
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Why not run this command from the host over SSH?

Code:
zfs send -Rvv datastore-naulty-place@migrate | ssh -i /data/ssh/replication root@10.5.0.39 zfs receive -vvF naulty-datastore/ablage
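Along the same lines, once the full stream has landed you can catch up whatever changed during the copy with a second snapshot and an incremental send before cutting over. A sketch, reusing the dataset names and replication key from above:
Code:
# Take a second snapshot, then send only the delta between @migrate and @migrate2
zfs snapshot -r datastore-naulty-place/ablage@migrate2
zfs send -R -i migrate datastore-naulty-place/ablage@migrate2 | \
  ssh -i /data/ssh/replication root@10.5.0.39 zfs receive -vF naulty-datastore/ablage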
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Why not run this command from the host over SSH?
SSH creates overhead because it encrypts the data. On a trusted network, you can send the data unencrypted using nc (NetCat) and the transfer runs much faster.
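That said, if the stream does have to go over ssh (e.g. an untrusted segment), picking a cheaper cipher narrows the gap somewhat. A sketch, assuming the OpenSSH builds on both ends support aes128-gcm@openssh.com:
Code:
zfs send -R datastore-naulty-place@migrate | \
  ssh -c aes128-gcm@openssh.com root@10.5.0.39 zfs receive -vF naulty-datastore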
 

msbxa

Contributor
Joined
Sep 21, 2014
Messages
151
I used rsync:

Code:
rsync -av /mnt/tank/media/Downloads/movies root@192.168.100.2:/mnt/tank/media/Movies/
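If the copy might get interrupted partway, a variant like this resumes partially transferred files on a re-run (standard rsync flags, same paths as above):
Code:
# --partial keeps partially transferred files so a re-run picks them up,
# --progress shows per-file progress
rsync -av --partial --progress /mnt/tank/media/Downloads/movies root@192.168.100.2:/mnt/tank/media/Movies/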
 