Linwood Ferguson
Cadet
- Joined
- Sep 4, 2016
- Messages
- 8
I hope to do a home build once I finish freeing up real hardware. To learn FreeNAS I tried setting up a small environment on Hyper-V, and am stuck. I know Windows well and Ubuntu a bit; this is my first time in BSD.
The Hyper-V host is Windows 10 x64 Anniversary Update, so it shows as Hyper-V version 8, with a Generation 1 (legacy boot) VM.
I started with 12 GB RAM and a single 127 GB boot volume, and added four dynamic 127 GB VHDX drives, newly created.
I booted from the FreeNAS 9.10.1 ISO in the virtual CD drive and installed onto one drive -- that worked fine (well, it worked, no obvious issues), and it booted. All five disks were shown as available during the install process.
When the GUI came up, no available drives were shown. I think the issue lies in this part of the boot process. First it sees the drives:
Code:
Sep 4 05:59:36 freenas da1 at storvsc1 bus 0 scbus3 target 0 lun 0
Sep 4 05:59:36 freenas da1: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device
Sep 4 05:59:36 freenas da1: 300.000MB/s transfers
Sep 4 05:59:36 freenas da1: Command Queueing enabled
Sep 4 05:59:36 freenas da1: 130048MB (266338304 512 byte sectors)
Sep 4 05:59:36 freenas da2 at storvsc1 bus 0 scbus3 target 0 lun 1
Sep 4 05:59:36 freenas da2: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device
Sep 4 05:59:36 freenas da2: 300.000MB/s transfers
Sep 4 05:59:36 freenas da2: 130048MB (266338304 512 byte sectors)
Sep 4 05:59:36 freenas da3 at storvsc1 bus 0 scbus3 target 0 lun 3
Sep 4 05:59:36 freenas da3: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device
Sep 4 05:59:36 freenas da3: 300.000MB/s transfers
Sep 4 05:59:36 freenas da3: 130048MB (266338304 512 byte sectors)
Sep 4 05:59:36 freenas da4 at storvsc1 bus 0 scbus3 target 0 lun 2
Sep 4 05:59:36 freenas da4: <Msft Virtual Disk 1.0> Fixed Direct Access SPC-3 SCSI device
Sep 4 05:59:36 freenas da4: 300.000MB/s transfers
Sep 4 05:59:36 freenas da4: 130048MB (266338304 512 byte sectors)
Sep 4 05:59:36 freenas GEOM_RAID5: Module loaded, version 1.3.20140711.62 (rev f91e28e40bf7)
then it appears to get rid of them:
Code:
Sep 4 05:59:40 freenas devd: Executing '[ -e /tmp/.sync_disk_done ] && LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python /usr/local/www/freenasUI/tools/sync_disks.py && LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python /usr/local/www/freenasUI/tools/smart_alert.py -d da1'
Sep 4 05:59:40 freenas da1 at storvsc1 bus 0 scbus3 target 0 lun 0
Sep 4 05:59:40 freenas da1: <Msft Virtual Disk 1.0> detached
Sep 4 05:59:40 freenas (da1:storvsc1:0:0:0): Periph destroyed
Sep 4 05:59:41 freenas da2 at storvsc1 bus 0 scbus3 target 0 lun 1
Sep 4 05:59:41 freenas da2: <Msft Virtual Disk 1.0> detached
Sep 4 05:59:41 freenas (da2:storvsc1:0:0:1): Periph destroyed
Sep 4 05:59:41 freenas da3 at storvsc1 bus 0 scbus3 target 0 lun 3
Sep 4 05:59:41 freenas da3: <Msft Virtual Disk 1.0> detached
Sep 4 05:59:41 freenas (da3:storvsc1:0:0:3): Periph destroyed
Sep 4 05:59:41 freenas da4 at storvsc1 bus 0 scbus3 target 0 lun 2
Sep 4 05:59:41 freenas da4: <Msft Virtual Disk 1.0> detached
Sep 4 05:59:41 freenas (da4:storvsc1:0:0:2): Periph destroyed
Sep 4 05:59:41 freenas devd: Executing '[ -e /tmp/.sync_disk_done ] && LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python /usr/local/www/freenasUI/tools/sync_disks.py && LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python /usr/local/www/freenasUI/tools/smart_alert.py -d da2'
These are empty VHDXs, created for this purpose, so they have no prior data on them (I saw some comments about leftover data causing issues).
smartctl --scan does not see them, but I presume that's a consequence of them being destroyed above, and that they're missing from /dev for the same reason.
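For reference, this is roughly what I ran from the FreeNAS shell to confirm the disks are gone after boot (camcontrol and geom are the standard FreeBSD tools for this; the comments describe what I see on my VM, matching the log above):

```shell
# List devices attached via CAM -- only da0 (the boot disk) appears,
# da1-da4 are gone after the "Periph destroyed" messages
camcontrol devlist

# GEOM's view of the disks -- same story, only the boot disk
geom disk list

# The device nodes for the data disks are absent
ls /dev/da*

# SMART scan also finds nothing for the four data disks
smartctl --scan
```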
I found a similar problem here, but the solution was basically "don't do that," and it really did not address the issue (unless it was about dynamic disks?). This is just for education; I get that it's not wise to virtualize it for real use. The real hardware is still in use for a week or two.
I'm able to move around in Ubuntu, but this is my first BSD exposure, so apologies in advance for likely not providing key info.
Any hints as to what to try?