Migrate Physical to Virtual (Replica)

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
So currently I have my primary FreeNAS box replicating to an old PC in the corner that chews up a fair amount of power. The disks are connected to an LSI card acting as an HBA.

The plan is to migrate them to my ESXi host:
  1. Take the disks and card out of the existing box.
  2. Place them in my ESXi host.
  3. Create a new virtual machine with 16 GB of storage for the OS on the existing local datastore.
  4. Assign 2 vCPUs and 8 GB of (ECC) memory.
  5. Assign the LSI card via pass-through.

So what I need to know is, after installing FreeNAS:
a) do I rebuild the config from scratch (import the pool, set up tasks, etc.), or
b) do I just import the config and I'm off to the races?

All this VM will do is store a replica of my datasets (with snapshots) from my main box. This would save me running an additional ~110 W, so it seems well worth it. The alternative is to build a lower-power unit, but I would prefer to have one less thing running.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Just import the config. Depending on the prior setup you may have to touch the NIC. I've done both with no issues at all.
 

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
Just import the config. Depending on the prior setup you may have to touch the NIC. I've done both with no issues at all.
Sounds like a plan. Does my process above seem correct? 8 GB should be fine for just replication, yeah?
CPUs are not a problem, but my host only has 40 GB of memory, so I don't want to assign too much until I get more.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Assuming your hardware handles VT-d correctly, this is your best bet for virtualization. I like 12-16 GB much better than 8 GB, but try it and see; for a replication target with no real read load, extra ARC isn't going to help that much. Jails and whatnot add to the sweet spot. The critical point is 8 GB, so a glitch doesn't take you out randomly.
 

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
Assuming your hardware handles VT-d correctly....
It's a Supermicro board from last gen, and I've been using pass-through for a network card with no issues, so I hope it's fine.

Thanks for the help anyway.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Supermicro or Intel boards are probably the safest bets out there, so I'd be comfortable. Standard disclaimer applies: your data, your decision, added complexity, yada yada.
 

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
Yep. I'll do a byte-for-byte comparison of the main data. Might just do a SHA checksum comparison of the non-important stuff (general media, etc.). That should help me detect any major data corruption between the main box and the backup, unless another method is recommended.
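For the non-critical data, that checksum pass could look something like this (a minimal sketch: the /tmp demo paths are stand-ins for the real mount points, e.g. the media dataset on the main box and its replica; GNU sha256sum shown, FreeBSD/FreeNAS ships sha256(1) instead):

```shell
#!/bin/sh
# Sketch: verify a backup tree against the main tree using a SHA-256 manifest.
set -eu

MAIN=/tmp/demo-main      # stand-in; point at the main box's dataset mount
BACKUP=/tmp/demo-backup  # stand-in; point at the replica's dataset mount

# Throwaway demo data so the sketch runs anywhere
mkdir -p "$MAIN/sub" "$BACKUP/sub"
echo "cat video 1" > "$MAIN/sub/cat1.mkv"
cp "$MAIN/sub/cat1.mkv" "$BACKUP/sub/cat1.mkv"

# Manifest of the main copy; paths are relative so it applies to either tree
( cd "$MAIN" && find . -type f -print0 | xargs -0 sha256sum | sort -k2 ) \
    > /tmp/manifest.sums

# Check the backup against the manifest; any differing file fails the check
if ( cd "$BACKUP" && sha256sum -c --quiet /tmp/manifest.sums ); then
    echo "checksums match"
else
    echo "MISMATCH detected"
fi
```

`rsync -c` between the two trees is another option if you'd rather not keep a manifest around.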

All my super-important stuff is backed up offsite anyway, so I'm not worried if I lose my top-10 cat video collection or whatever =P
 

Noctris

Contributor
Joined
Jul 3, 2013
Messages
163

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
Thanks for the reply, Noctris.

I've virtualised FreeNAS before, just never migrated, so I wasn't 100% sure. My only question regarding that link: is there any specific testing I should do for PCI passthrough? I always burn in new hardware and only use server-platform gear, but I've never done any extensive testing of passthrough.

Usually if I'm testing a controller I copy a large amount of dummy data (big and small files) and do a binary match between the original and the copy, monitoring logs etc. for any signs of problems. Usually the LSI controller utilities do a good job of reporting problems, or there will be large differences between the copies.
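That dummy-data workflow can be sketched roughly as follows (all paths and sizes are placeholders; on a real rig the source and destination would sit on different controllers/pools, with far larger files and counts):

```shell
#!/bin/sh
# Sketch: write random files of mixed sizes, copy them, binary-compare each pair.
set -eu

SRC=/tmp/burnin-src   # stand-in for the original pool
DST=/tmp/burnin-dst   # stand-in for the pool behind the controller under test
mkdir -p "$SRC" "$DST"

# A few random files of different sizes (tiny here; scale bs/count way up
# for a real controller test)
for i in 1 2 3; do
    dd if=/dev/urandom of="$SRC/file$i.bin" bs=64k count="$i" 2>/dev/null
done

cp "$SRC"/*.bin "$DST"/

# cmp exits non-zero at the first differing byte
status=ok
for f in "$SRC"/*.bin; do
    cmp -s "$f" "$DST/$(basename "$f")" || status="differs: $f"
done
echo "$status"
```

While this runs for real, you'd also be watching the controller utilities and system logs for errors, as described above.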
 

Noctris

Contributor
Joined
Jul 3, 2013
Messages
163
Afraid I haven't done that either. I'm not aware of any "burn-in tool" for PCI passthrough.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
There are no special tools necessary; the whole point is to use that controller natively. If the ESXi box isn't burned in, then do it right. It wouldn't hurt anything to grab some spare disks and pound on a pool for a while. But moving a known-good controller to a known-good host is pretty safe, considering you are on probably the best-regarded hardware.

However, I am all for paranoia. Being too careful is never a problem for your data.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thanks for the reply, Noctris.

I've virtualised FreeNAS before, just never migrated, so I wasn't 100% sure. My only question regarding that link: is there any specific testing I should do for PCI passthrough? I always burn in new hardware and only use server-platform gear, but I've never done any extensive testing of passthrough.

You basically do all the same things you'd do for bare-metal hardware (on the host platform itself in the case of CPU and memory tests), then hit your disks hard through VT-d to make sure there are no surprises there (I suggest a *MINIMUM* of several days of testing), then you move on to things like altering the number of cores to see if that teases out any other issues (we've specifically seen that exact thing cause issues with MSI/MSI-X).

Basically, if you have a trustworthy and stable hardware platform, it doesn't matter whether it is virtualized or bare metal. But you really do want to know how the additional complexity impacts you (hopefully: minimally), and making sure that you can back out and get access to your data on bare metal is priceless. Actually try it, just to be sure.
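A write/read-back soak along those lines might be sketched as below (assumptions: TARGET here is a scratch file so the sketch runs anywhere; for a real VT-d burn-in you'd point it at a file on the passed-through pool, crank the sizes up, and let it loop for days):

```shell
#!/bin/sh
# Sketch: repeatedly write random data through the storage path, read it
# back, and confirm the checksums agree.
set -eu

TARGET=/tmp/soak.bin   # stand-in; use a file on the passed-through pool
PASSES=3               # raise this to run for hours or days

for pass in $(seq 1 "$PASSES"); do
    # Fresh random pattern each pass, checksummed before it hits the target
    dd if=/dev/urandom of=/tmp/pattern.bin bs=1M count=4 2>/dev/null
    want=$(sha256sum /tmp/pattern.bin | cut -d' ' -f1)

    cp /tmp/pattern.bin "$TARGET"
    sync    # force the write out through the controller

    got=$(sha256sum "$TARGET" | cut -d' ' -f1)
    [ "$want" = "$got" ] || { echo "pass $pass: MISMATCH"; exit 1; }
    echo "pass $pass: verified"
done
echo "soak complete"
```

Any mismatch here points at the storage path (controller, VT-d mapping, cabling) rather than the data, which is exactly what you want to flush out before trusting the replica.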

It's a Supermicro board from last gen

Would that be an X9? Because that's likely to be fine. If you mean X8 or earlier, be extra cautious. I have seen too many reports of weirdness on X7 and X8 boards that are supposed to be "fine" for VT-d.
 

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
You basically do all the same things you'd do for bare-metal hardware (on the host platform itself in the case of CPU and memory tests), then hit your disks hard through VT-d to make sure there are no surprises there (I suggest a *MINIMUM* of several days of testing)....
OK, that's exactly what I wanted to hear. As it's only a replica backup for easy access, I will just do the disk checks for a week. That should be long enough to notice any major gremlins.

As mentioned, the other hardware has been well and truly burned in and tested.

Basically if you have a trustworthy and stable hardware platform, it doesn't matter if it is virtualized or bare metal. But you really do want to know how the additional complexity impacts you (hopefully: minimally)
Yep. I've been using a type-1 hypervisor for a couple of years now and have set up similar situations, just none using ZFS or FreeNAS. Research + testing is king.

Would that be X9? Because that's likely to be fine.
Indeed it is. I've had a dual-port NIC running in passthrough for a year or so now and haven't had any major issues. Nevertheless, I will still do some further tests...
 

Darren David

Explorer
Joined
Feb 27, 2014
Messages
54
I'm about to attempt this same thing (physical to virtual). Assuming that my jail data exists in the pool, will the jails be restored automagically as well when I load in my old config?
 