SOLVED New testing install, 9.10.1 on Hyper-V, can't see disks (Workaround = 10 beta)

Status
Not open for further replies.
Joined
Sep 4, 2016
Messages
8
I hope to do a home build once I finish freeing up real hardware. To learn FreeNAS, I tried setting up a small environment on Hyper-V and am stuck. I know Windows well and Ubuntu a bit; this is my first time in BSD.

The Hyper-V host is Windows 10 x64 Anniversary Update, so it reports as Hyper-V configuration version 8, and the VM is Generation 1 (legacy boot).

I started with 12 GB RAM and a single 127 GB boot volume, and added four newly created, dynamically expanding 127 GB VHDX drives.

Booted from the 9.10.1 ISO in the (virtual) CD drive and installed to one drive. That worked fine (well, it worked, no obvious issues) and the system booted. All five disks were shown as available during the install process.

When the GUI came up, no available drives were shown. I think the issue lies in this part of the boot process. First it sees the drives:

Code:
Sep  4 05:59:36 freenas da1 at storvsc1 bus 0 scbus3 target 0 lun 0                                                                 
Sep  4 05:59:36 freenas da1: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device                                         
Sep  4 05:59:36 freenas da1: 300.000MB/s transfers                                                                                 
Sep  4 05:59:36 freenas da1: Command Queueing enabled                                                                               
Sep  4 05:59:36 freenas da1: 130048MB (266338304 512 byte sectors)                                                                 
Sep  4 05:59:36 freenas da2 at storvsc1 bus 0 scbus3 target 0 lun 1                                                                 
Sep  4 05:59:36 freenas da2: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device                                         
Sep  4 05:59:36 freenas da2: 300.000MB/s transfers                                                                                 
Sep  4 05:59:36 freenas da2: 130048MB (266338304 512 byte sectors)                                                                 
Sep  4 05:59:36 freenas da3 at storvsc1 bus 0 scbus3 target 0 lun 3                                                                 
Sep  4 05:59:36 freenas da3: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device                                         
Sep  4 05:59:36 freenas da3: 300.000MB/s transfers                                                                                 
Sep  4 05:59:36 freenas da3: 130048MB (266338304 512 byte sectors)                                                                 
Sep  4 05:59:36 freenas da4 at storvsc1 bus 0 scbus3 target 0 lun 2                                                                 
Sep  4 05:59:36 freenas da4: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device                                         
Sep  4 05:59:36 freenas da4: 300.000MB/s transfers                                                                                 
Sep  4 05:59:36 freenas da4: 130048MB (266338304 512 byte sectors)                                                                 
Sep  4 05:59:36 freenas GEOM_RAID5: Module loaded, version 1.3.20140711.62 (rev f91e28e40bf7)     


then it appears to get rid of them:

Code:
Sep  4 05:59:40 freenas devd: Executing '[ -e /tmp/.sync_disk_done ] && LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python /usr/lo
cal/www/freenasUI/tools/sync_disks.py && LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python /usr/local/www/freenasUI/tools/smart_a
lert.py -d da1'                                                                                                                     
Sep  4 05:59:40 freenas da1 at storvsc1 bus 0 scbus3 target 0 lun 0                                                                 
Sep  4 05:59:40 freenas da1: <Msft Virtual Disk 1.0> detached                                                                       
Sep  4 05:59:40 freenas (da1:storvsc1:0:0:0): Periph destroyed                                                                     
Sep  4 05:59:41 freenas da2 at storvsc1 bus 0 scbus3 target 0 lun 1                                                                 
Sep  4 05:59:41 freenas da2: <Msft Virtual Disk 1.0> detached                                                                       
Sep  4 05:59:41 freenas (da2:storvsc1:0:0:1): Periph destroyed                                                                     
Sep  4 05:59:41 freenas da3 at storvsc1 bus 0 scbus3 target 0 lun 3                                                                 
Sep  4 05:59:41 freenas da3: <Msft Virtual Disk 1.0> detached                                                                       
Sep  4 05:59:41 freenas (da3:storvsc1:0:0:3): Periph destroyed                                                                     
Sep  4 05:59:41 freenas da4 at storvsc1 bus 0 scbus3 target 0 lun 2                                                                 
Sep  4 05:59:41 freenas da4: <Msft Virtual Disk 1.0> detached                                                                       
Sep  4 05:59:41 freenas (da4:storvsc1:0:0:2): Periph destroyed                                                                     
Sep  4 05:59:41 freenas devd: Executing '[ -e /tmp/.sync_disk_done ] && LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python /usr/lo
cal/www/freenasUI/tools/sync_disks.py && LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python /usr/local/www/freenasUI/tools/smart_a
lert.py -d da2'                               


These are empty VHDXs, created for this purpose, so they have no prior data on them (I saw some comments about leftover data causing issues).

smartctl --scan does not see them, but I presume that's a consequence of their being detached above; they aren't in /dev either, presumably for the same reason.
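For reference, here is roughly what I was checking from the console (command names as I understand them on FreeBSD; I'm new here, so corrections welcome):

Code:
# List the peripherals the CAM layer currently knows about
camcontrol devlist
# List the disks GEOM can see
geom disk list
# Look for da* device nodes directly
ls /dev/da*
# And the smartmontools scan mentioned above
smartctl --scan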

I found a similar problem reported here, but the solution was basically "don't do that" and didn't really address the issue (unless it was about dynamic disks?). This is just for education; I understand it's not wise to virtualize it for real use. The real hardware is still in use for another week or two.

I can find my way around Ubuntu, but this is my first BSD exposure, so apologies if I'm not providing key info.

Any hints as to what to try?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Yeah, I was one of the people who replied to that other thread. Basically, as far as I know, FreeNAS doesn't work properly in Hyper-V. If you want to install it as a VM for evaluation/testing, try VMware instead, preferably vSphere 6.0 U2 (a real hypervisor).

*** While I do have both ESXi (vSphere) and MS Server 2012 R2 Datacenter, I personally have not tried to run FreeNAS as a VM in Hyper-V; I never really needed or wanted to. Out of curiosity, I may give it a whirl so I have first-hand knowledge.

You may be able to do it with VMware (Workstation or Player), VirtualBox, Proxmox or others, but I do not have first-hand experience with them and am not 100% sure.

smartctl --scan does not see them, but I presume that's a consequence of their being detached above; they aren't in /dev either, presumably for the same reason.
Nah, the fact is that it's a virtual disk and thus doesn't have any SMART capabilities. If you were to pass through an HBA/controller and give FreeNAS direct access to the hardware/disks, then it could obtain SMART data.

If you really want to keep trying, then instead of making the disks VHDX, try making them VHD (the older format) to see if that works...
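Untested on my end, but from an elevated PowerShell prompt on the Hyper-V host something along these lines should do it (paths, sizes and the VM name are only examples):

Code:
# Create a new fixed-size VHD (the older format) for the VM
New-VHD -Path 'D:\Hyper-V\freenas-data1.vhd' -SizeBytes 127GB -Fixed

# Or convert an existing VHDX to a fixed VHD (VM must be powered off)
Convert-VHD -Path 'D:\Hyper-V\freenas-data1.vhdx' -DestinationPath 'D:\Hyper-V\freenas-data1.vhd' -VHDType Fixed

# Then attach it to the VM
Add-VMHardDiskDrive -VMName 'FreeNAS-Test' -Path 'D:\Hyper-V\freenas-data1.vhd'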
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
FWIW, I run FreeNAS on Oracle's free VirtualBox and it works very well.
 
Joined
Sep 4, 2016
Messages
8
I tried again from scratch, with old-style, fixed-size VHDs and a fresh install. Same issue: it sees the drives while booting, but then for some reason removes them so they aren't available to use.

I hate to put another hypervisor on this machine; it's only about five days old and nicely stable, and I'm sure I will be using Hyper-V. Right or wrong, I figure the more low-level software you install, the more interactions and potential future instabilities. So I'm playing musical chairs with backups and cloud storage before I tear down the old system to build a NAS box. It takes time.

In the meantime, if anyone has any ideas on how to get a play version running on Hyper-V, let me know. Maybe I'll try version 10. Not for production, but I assume a lot is similar, and maybe it has some changes that happen to fix it on Hyper-V.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
If you have actual physical disk(s), you *may* be able to pass them through to the VM. This *might* get you around the VHD/VHDX issue.

I am too lazy to type it all up, so I will link you to an article that may help: http://www.servethehome.com/hyperv-disk-passthrough-quick-guide/

*** I haven't tried it and am not even sure it's available on non-Server versions of Windows, so YMMV.
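If memory serves, the gist of that article boils down to something like this in PowerShell on the host (disk number and VM name are placeholders; again, untested by me):

Code:
# Take the physical disk offline on the host so the VM can claim it
Set-Disk -Number 2 -IsOffline $true

# Attach the raw disk to the VM's SCSI controller
Add-VMHardDiskDrive -VMName 'FreeNAS-Test' -ControllerType SCSI -DiskNumber 2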
 
Joined
Sep 4, 2016
Messages
8
I know how to do that, but at the moment I'm short on disks, since my prior workstation is sitting idle waiting to become a NAS while my new workstation is being set up and configured.

I tried the 10 beta and it worked perfectly (so far). So this appears to be something "fixed", whether on purpose or accidentally, in the new version. While I see the UI is rearranged (and I had to tickle it to get an IP from DHCP), I suspect that for the minor things I plan to do I can use this for education, even if 10 won't be ready for a while.
 
Joined
Sep 4, 2016
Messages
8
BTW, Welcome!
Thank you.

And thanks for trying to help.

I've already done a backup run (with GoodSync) against a CIFS share; it worked very nicely. It's wide open right now, so I still need to look into security, etc., but it's working.

I guess there's no chance 10 will be ready in the next week or two? ;)
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Joined
Sep 4, 2016
Messages
8
One can wish; the devs appear to be hammering away at an awesome pace (judging by the bug tracker), but they are only human.

Yeah, the UI is not ready for prime time. Really weird things happened when I tried to remove a disk and then replace it. Maybe it's Hyper-V related, but I got it to work with zpool commands. So I will stick with 9.x when it's ready.
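For anyone who runs into the same thing, the general shape of what I ended up doing from the shell was something like this (pool and device names here are just examples, from memory):

Code:
# Check the pool state first
zpool status tank
# Take the disk to be swapped out offline
zpool offline tank da2
# Replace it with the new disk, then watch the resilver
zpool replace tank da2 da5
zpool status tank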

Although... looking through this, I'm going to need maybe a whole 2% of the capabilities. I'm thinking I might just put ZFS on Ubuntu with Samba and be done with it. All I really want is a big external disk for nightly backups.
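If I do go that route, my understanding is that it boils down to roughly this on Ubuntu 16.04 (pool name, disks and share are placeholders; I haven't actually tried it yet):

Code:
# Install ZFS and Samba
sudo apt install zfsutils-linux samba

# Create a mirrored pool and a dataset for nightly backups
sudo zpool create backup mirror /dev/sdb /dev/sdc
sudo zfs create backup/nightly

# Then add a share like this to /etc/samba/smb.conf and restart smbd:
# [nightly]
#    path = /backup/nightly
#    read only = no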
 
Joined
Sep 4, 2016
Messages
8
According to the roadmap, the release is scheduled for 02/01/2017.

Yeah, I hadn't seen that date, but I saw the November date for Beta 2, so it was a lament more than a real question. But thanks.
 