bhyve virtio boot disk in Windows?

rmccullough

Patron
Joined
May 17, 2018
Messages
269
Will this patch be ported to 11.2?

iXsystems has backported the bhyve patch to FreeNAS 11.3.
I have tested this with:
  • FreeNAS-11.3-RC2
  • virtio-win-0.1.164.iso
  • Windows Server 2019 Evaluation
  • virtio disk and virtio network
It works very well, and it is faster than before.
Therefore I think that Windows on FreeNAS bhyve is now a viable solution.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776

bobpaul

Dabbler
Joined
Dec 20, 2012
Messages
23
now if only I could figure out how to get windows to change from ahci to virtIO boot disk...

Ugh, right? My Windows 10 VM has 2 disks, with the D:\ drive storing all the important data. Switching the D:\ drive to VirtIO has made a huge difference in usability, even immediately during bootup... the spinner on the boot splash doesn't lag anymore. Even though it still pegs the vCPU cores I allocate it at close to 100%, it's now basically usable. But if I could convert the boot drive, maybe it wouldn't even suck...
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You can. If your Windows is already running with one VirtIO disk you can just switch the boot drive, too. At least for me that worked ...
 

bobpaul

Dabbler
Joined
Dec 20, 2012
Messages
23
You can. If your Windows is already running with one VirtIO disk you can just switch the boot drive, too. At least for me that worked ...

What version of Windows are you using (or maybe more importantly, what version were you using when you converted)? I spent most of yesterday trying to convert a Win10 Pro 1909 VM. D:\ converted just fine. If I switch C:\ to VirtIO, then it won't boot.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Poooh ... don't remember, to be honest. I added a second disk to my VM as VirtIO, installed the latest stable drivers, and made sure I could access the second disk just fine. Then I changed the boot drive to VirtIO and removed the second dummy disk.

HTH,
Patrick
 

bobpaul

Dabbler
Joined
Dec 20, 2012
Messages
23
I got it! I did this:
  1. Attach both the VirtIO driver ISO and the Windows 1909 installation ISO to the VM (the VM already has one VirtIO disk and the most recent drivers for Balloon, NetKVM, and viostor).
  2. When the installer starts, proceed as if you're going to install. You'll get to the disk page which won't show any disks.
  3. Click the "Load Drivers" button. Uncheck "hide incompatible drivers". From the VirtIO CD, load the drivers for Balloon, NetKVM, and viostor.
  4. Your disks now show in the installer. But don't install...
  5. Click the X to close the Installer dialog. It will tell you that you'll lose your changes and return to the beginning of the installation process where it gives you the "Repair my computer" option again.
  6. Choose "Repair my computer"
  7. Choose Startup repair.
    This gave me an error and said it didn't work, so I also went to a command prompt, where I found my Windows partition at G:\. I then ran "bootrec /fixmbr" and "bootrec /fixboot" (both of these also gave me errors; the commands are sketched below).
  8. Reboot in despair, expecting you just wasted a bunch of time again
  9. Everything works!
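For reference, the step 7 commands at the recovery command prompt look roughly like this - G: is just where the recovery environment happened to mount my Windows partition, so adjust to whatever letter you see:

bootrec /fixmbr
bootrec /fixboot

If those complain, "bcdboot G:\Windows" is the usual next thing to try to rebuild the boot files from the offline install, though I can't say whether it's actually needed here - in my case everything errored out and the VM booted anyway.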
 

rmccullough

Patron
Joined
May 17, 2018
Messages
269
@bobpaul are you running 11.3? Or 11.2? I am on 11.2 and was not able to get this working. My understanding is that Windows 10 v1909 is only working on 11.3 after a patch was applied.

It seems to me that until I upgrade to 11.3, it does not make sense to try to run a Windows VM on FreeNAS/bhyve.
 

bobpaul

Dabbler
Joined
Dec 20, 2012
Messages
23
11.3. I don't usually upgrade right away, but I made it a priority to upgrade to 11.3 when it came out to take advantage of Python3 as well as the improvements to replication. Improving the performance of our Windows VM is a nice bonus.

I expect that when FreeNAS 12 rolls out I'll be editing vm.py so I can use NVMe emulation.
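At the bhyve level that should just mean an extra device slot along these lines - the slot number and zvol path here are made-up examples, not whatever the middleware will actually generate:

-s 4,nvme,/dev/zvol/tank/vms/win10-disk0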
 

rmccullough

Patron
Joined
May 17, 2018
Messages
269
Yeah, I am waiting on upgrading. I'm not running anything mission critical, but I simply don't have the time to fix it right now if something goes wrong. From the little bit of looking around I've done, it seems like 11.3 still has some rough edges to smooth out, too.
 

bal0an

Explorer
Joined
Mar 2, 2012
Messages
72
Is this typical performance of Windows 10 1909 on FreeNAS 11.3-U2?
I have installed the viostor, NetKVM, Balloon and QEMU drivers. Disk and network are running VirtIO.
The VM disk C: maps to a 500GB SSD pool, NOT to the /mnt/raid hard disk pool visible in the screenshot.
Feels like walking in a swamp.
Anything obvious I missed?

I have a comparable Windows 10 1909 guest running under vmware ESXi 6.7.0 (Build 8169922), which feels almost like running on bare metal.

Windows 10 on FreeBSD 11.3-U2.png


FreeBSD 11.3-U2 Dashboard.png
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
How am I supposed to judge "performance" from that Task Manager screenshot? It does not tell me whether the 7 MB/s is the maximum you get or just what was active at the moment you took the picture. Mine shows 100k/s with the VM sitting idle ... are you running some benchmark program?

If you tell me what to run I can give you data from one of my Windows VMs, no problem. But I don't have much clue about this Windows stuff. My VMs feel "fast enough", I don't have anything to compare them to.

Patrick
 

bal0an

Explorer
Joined
Mar 2, 2012
Messages
72
Apologies for not being more specific. The Windows VM is idle except for the update download. That alone is enough to push the CPU to 100% usage.
After a few more hours of watching the instance: in general, any background process pushes the CPU to 100% even with no other programs running. On my vmware ESXi instance, CPU load barely touches 5% when the instance is idle.

I will perform some large file transfers and post my throughput here, both for FreeNAS and ESXi.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I just ran this benchmark suite:

Results:
Bildschirmfoto 2020-04-15 um 09.18.27.png

System specs:
  • Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
  • 8 Cores, 16 Threads
  • 64 G RAM
  • VMs on a mirror pool on 2 Samsung Pro 860 SATA SSDs
  • FreeNAS 11.3-U2
VM specs:
  • 2 Cores
  • 8 G RAM
  • Virtio HDD, 4k volblocksize
  • Virtio networking
  • Windows 10 Professional 1909
  • Used via Microsoft RDP client from my Mac almost exclusively, sometimes via Apache Guacamole in a browser

Thinking about possible patterns here:
  • check your volblocksize - if I am not mistaken it should best match NTFS' native blocksize of 4k
  • are you possibly running on AMD instead of Intel?

HTH,
Patrick
 

bal0an

Explorer
Joined
Mar 2, 2012
Messages
72
I ran the benchmark suite on:
  • Intel Core i5-6500 3.2GHz, 4 Cores, 32 GB RAM
  • Samsung 860 EVO SATA SSD
  • FreeNAS 11.3-U2
VM specs:
  • 2 Cores, 4 G RAM
  • Virtio HDD, default volblocksize (won't boot when changed to 4k)
  • Virtio networking
  • Windows 10 Pro 1909
  • Used via Microsoft RDP client and VNC Console
Windows 10 1909 on FreeNAS 11.3-U2 - exclusion.png

Windows 10 1909 on FreeNAS 11.3-U2 - benchmark.png

Then on:
  • Intel Core i5-6500 3.2GHz, 4 Cores, 32 GB RAM
  • Samsung 860 EVO SATA SSD
  • vmware ESXi 6.7
VM specs:
  • 2 Cores, 4 G RAM
  • vmware virtualization
  • Windows 10 Pro 1909
  • Used via Microsoft RDP client and ESXi Console
Windows 10 1909 on vmware ESXi 6.7.png

Observation:
  1. On bhyve, single-core read and mixed memory access show VERY LOW throughput.
  2. On bhyve, UserBenchmark complains about "very high background CPU load". The instance sat idle during the test with only a file explorer and Task Manager open. Task Manager CPU usage was about 25%, which is much higher than on bare metal or on vmware ESXi (~2-5% there).
  3. I have no idea why UserBenchmark excludes the storage results from the report on bhyve.

Re your comments:
  • check your volblocksize - if I am not mistaken it should best match NTFS' native blocksize of 4k
If I change the volblocksize from the default to 4K, boot fails and I end up at the UEFI prompt. Also, file transfer rates do not seem to be the issue: while testing, UserBenchmark shows throughputs in the range of 100 to 500 MB/s in the log window.
  • are you possibly running on AMD instead of Intel?
I am positive I am using Intel ;-).

Regards, Andreas
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
The volblocksize needs to be set at creation time. Mind you, not the disk blocksize in the device section of the UI, but the ZFS attribute volblocksize: zfs create -V 40g -o volblocksize=4k mypool/somewhere/myvm-disk0

Then point the UI at that disk.

Changing it on the fly is not possible, and with the default of 16k it might be the case that your system reads 16k, modifies 4k, and writes 16k back for every NTFS block.

You can check, though: zfs get volblocksize mypool/somewhere/myvm-xyz...
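To double-check the NTFS side inside the guest, the cluster size shows up as "Bytes Per Cluster" in the output of this command (run from an administrator command prompt; on a default-formatted volume it should read 4096):

fsutil fsinfo ntfsinfo C: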

HTH,
Patrick

P.S. while you cannot change it, you can of course create a fresh disk and copy the contents without reinstalling Windows.
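One way to do that copy, sticking with the made-up pool/dataset names from above (VM powered off, new zvol at least as large as the old one):

root@freenas:~ # zfs create -V 40g -o volblocksize=4k mypool/somewhere/myvm-disk0-4k
root@freenas:~ # dd if=/dev/zvol/mypool/somewhere/myvm-disk0 of=/dev/zvol/mypool/somewhere/myvm-disk0-4k bs=1m

Then point the disk device in the UI at the new zvol.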
 

bal0an

Explorer
Joined
Mar 2, 2012
Messages
72
So I cloned my zvol from:
root@freenas:~ # zfs get volblocksize ssd850/SPF1X-jy60db
NAME PROPERTY VALUE SOURCE
ssd850/SPF1X-jy60db volblocksize 16K -

to
root@freenas:~ # zfs get volblocksize ssd850/SPF1X-4kblocks
NAME PROPERTY VALUE SOURCE
ssd850/SPF1X-4kblocks volblocksize 4K -

and got similar results to the 16K blocks. Also attached is a screenshot of UserBenchmark performing the storage test, which shows only 10 MB/s when doing 4K reads and writes.

Windows 10 1909 userbenchmark - 4k zvol blocks.png

Windows 10 1909 drive benchmark - 4k zvol blocks.png
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Sorry, no clue ...

Works on my machine - probably I should send you a Docker image of my FreeNAS installation :p
 

jglathe

Cadet
Joined
Aug 14, 2018
Messages
6
  1. Everything works!

Well, I tried this, and... nope. Quite odd, since I have another VM with everything on VirtIO and Win10 1909, fully patched, still working on the same FreeNAS host. The machine that is acting up got converted to a VirtIO disk, and I was happy that it had been running faster, with less processor load, ever since. I updated to FreeNAS-11.3-U2.1 this Friday. With that, the trouble started. All mitigation attempts failed, but switching the disk back to AHCI at least got the machine running again. It seems to be an issue in the Windows 10 software stack; however, it looks like I'll have to reinstall all the software on the machine to get it back into shape. :confused:
 

jglathe

Cadet
Joined
Aug 14, 2018
Messages
6
It seems to be an issue in the Windows 10 software stack; however, it looks like I'll have to reinstall all the software on the machine to get it back into shape. :confused:

As a follow-up on this, I got the VM to boot with everything on VirtIO again :cool: All the "fix boot problems" guides didn't really help; the straightforward solution is actually pretty simple:

1. create another empty zvol
2. add this zvol as a VirtIO disk to your VM
3. boot, open Disk Management, and initialize the new zvol as a new drive (mine are GPT, so I chose GPT)
4. reboot to make sure it is still working
5. shut the VM down
6. change the boot disk type to VirtIO
7. start the VM again - it should come up after a while. I verified it with a web VNC connection. W10 seems to change some things due to the hardware change. Shut it down again.
8. Remove the additional zvol and all other devices you don't need. Start it up again.
9. Clean up the registry, the usual stuff. Take a snapshot.

I am not quite sure what happens, but the additional VirtIO drive seems to ensure that the driver is actually loaded and accessible when the switch-over from AHCI to VirtIO happens and whatever needs changing in the registry gets changed. So... a workaround.
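If you want to check that side before flipping the boot disk: my understanding is that the viostor service has to be set to boot start (Start value 0) for Windows to boot from a VirtIO disk. Something like this from an admin command prompt inside the guest should show it - just a way to verify, not a fix in itself:

reg query HKLM\SYSTEM\CurrentControlSet\Services\viostor /v Start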
 