djk29a
I'm fundamentally asking what's taking up all of your onboard SATA ports on a motherboard that already has a substantial number of them, such that you need to buy even more. Even in a dev setup, your pool drives should go on the M1015, which leaves the onboard controller with one port for the boot SSD and five to spare. Why route everything through the onboard SATA first when that has nothing to do with your final configuration? Just attach your local SSD-based storage to the onboard ports and move on. Overcomplicating setups is one of the fastest ways to create unreliable systems, regardless of your budget.
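If it helps to sanity-check that layout, here's a minimal sketch (mine, not from any FreeNAS doc) that groups disks by controller inside the FreeBSD/FreeNAS guest, assuming the M1015 is crossflashed to IT mode (mps driver) and the onboard SATA shows up on ahcich; adjust the driver names for your hardware:

```python
# Hypothetical sketch: group FreeBSD disks by controller driver so you can
# confirm which drives sit on the HBA (mps) vs. the onboard SATA (ahcich).
import re
import subprocess
from collections import defaultdict

def disks_by_driver():
    # "camcontrol devlist -v" prints headers like "scbus1 on mps0 bus 0:"
    # followed by device lines like "<ATA ST3000DM001 CC24> at scbus1 ... (pass1,da0)"
    out = subprocess.check_output(["camcontrol", "devlist", "-v"], text=True)
    groups = defaultdict(list)
    driver = "unknown"
    for line in out.splitlines():
        hdr = re.match(r"scbus\d+ on (\S+) bus", line)
        if hdr:
            driver = hdr.group(1)           # e.g. mps0 (HBA) or ahcich0 (onboard)
            continue
        dev = re.search(r"<(.+?)>.*\((.+?)\)", line)
        if dev:
            groups[driver].append((dev.group(2), dev.group(1)))
    return groups

if __name__ == "__main__":
    for driver, disks in sorted(disks_by_driver().items()):
        print(driver)
        for names, model in disks:
            print("  {}: {}".format(names, model))
```

If everything you expect on the HBA lists under mps0 and only the boot/local SSDs under ahcich, your cabling matches the plan.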
While I can understand wanting the option to switch between an ESXi-based FreeNAS and a bare-metal FreeNAS as an emergency recovery measure, I would typically run FreeNAS as a VM on local (non-USB) storage and let ESXi boot from the USB drive instead, since FreeNAS incurs writes for things like logging. In an emergency, I'll pray I have a FreeNAS configuration backup, burn a new USB drive (or PXE boot, heck), bring up the new FreeNAS instance, and restore the configuration while hoping the IP setup works out (it almost certainly won't: your ESXi host's IP will be bound to the same address your FreeNAS VM used to have, and plenty of things will probably break). If anything should be backed up, it's that FreeNAS VM and the data on that RAIDZ. ESXi hosts are supposed to be expendable, to the point that if the hardware fails you can bring that host's VMs up immediately on another host. PCI passthrough hampers that option because it binds the VM to that specific host entirely. And if your compute hardware fails, you'll have to physically move the hard disks attached to the M1015 to the replacement hardware anyway.
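On the configuration-backup point: FreeNAS keeps its settings in a single SQLite database (on the 9.x builds of this era it should be /data/freenas-v1.db, but verify the path on your build), so pulling a dated copy off-box is easy to automate. A minimal sketch, assuming key-based SSH access to the NAS; the host name and destination directory are placeholders:

```python
# Hypothetical sketch: pull a dated copy of the FreeNAS config database
# off the box over SSH, so a dead USB stick or host doesn't take your
# settings with it. Assumes key-based SSH auth and that the config DB
# lives at /data/freenas-v1.db (check your FreeNAS version).
import datetime
import subprocess

NAS_HOST = "root@freenas.local"        # placeholder host
CONFIG_PATH = "/data/freenas-v1.db"    # FreeNAS 9.x-era config database
DEST_DIR = "/backups/freenas"          # placeholder destination (must exist)

def backup_config():
    stamp = datetime.date.today().isoformat()
    dest = "{}/freenas-config-{}.db".format(DEST_DIR, stamp)
    # check=True makes a failed copy raise, so a cron wrapper notices
    # instead of silently skipping a day.
    subprocess.run(["scp", "{}:{}".format(NAS_HOST, CONFIG_PATH), dest],
                   check=True)
    print("saved", dest)

if __name__ == "__main__":
    backup_config()
```

Strictly speaking, copying a live SQLite file can race an in-flight write, so the GUI's "Save Config" export is the blessed route; a nightly scp of the database is the usual homelab shortcut.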
To gloss over a lot of issues at just the virtualization layer: ESXi can cause all sorts of problems if you try to be clever, and worse, you can make things terribly unreliable by actually FOLLOWING VMware's recommended best practices where ZFS is specifically concerned. As an example, RDMs (Raw Device Mappings) are the technique VMware typically recommends for exposing disks to VMs so that users keep some virtualization features, like snapshots, for workloads such as Microsoft clustering that need fairly direct access to the drives (somewhat similar to what ZFS wants). Unfortunately, a hypervisor's I/O and CPU scheduling can, in certain fluke scenarios (say, a VM stuck in a wait state because it missed a signal, resolvable only by shutting everything down or vMotioning the VM to another ESXi host), reorder I/O or CPU threads in ways that break assumptions kernel writers have made. It appears VMware optimized RDMs specifically for MS clustering, NOT for ZFS's concerns, and that can really mess with ZFS's assumptions about writes. This consideration is pure paranoia on my part rather than anything cited from VMware ESXi kernel documentation or whitepapers, but paranoia is exactly the posture you should take when building such a setup for anything resembling business-grade reliability.
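One cheap sanity check, for what it's worth: from inside the guest you can at least see whether ZFS is being handed real drives or virtualized ones. Disks backed by VMDKs (and virtual-mode RDMs) identify themselves with the model string "VMware Virtual disk", while physical-mode RDMs and passed-through controllers expose the drive's real inquiry data. A hedged sketch, same caveats as the ones above:

```python
# Hypothetical sketch: warn if any disk visible to the FreeNAS guest
# identifies as a VMware virtual disk, meaning ZFS is not talking to the
# real hardware and is trusting ESXi's emulated write/flush semantics.
import subprocess

def virtual_disks():
    out = subprocess.check_output(["camcontrol", "devlist"], text=True)
    return [line.strip() for line in out.splitlines()
            if "VMware Virtual disk" in line]

if __name__ == "__main__":
    suspects = virtual_disks()
    if suspects:
        print("ZFS is sitting on virtualized disks (synthetic inquiry data):")
        for line in suspects:
            print("  " + line)
    else:
        print("No VMware virtual disks visible; drives appear passed through.")
```

Seeing real model strings doesn't prove everything is safe, but seeing "VMware Virtual disk" proves ZFS is at the hypervisor's mercy.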
The reason people are scorning you is that a virtualized storage setup, while possible and done safely by some, is really not something that even intermediate-level virtualization users can hope to plan for carefully. You're free to experiment on your own and learn that way, but there are a great many considerations that go far beyond what you've asked so far. People with VCPs tend to bill customers at $150/hr or more (lower? Then who are you, so I can hire you cheap?! Seriously). This is serious work that takes a lot of experience before you can be genuinely confident in what you're doing, and somewhere near that level of knowledge is where you should aim if reliably protecting your data matters that much to you. I don't think we can give you a tutorial on what you can and can't do with ESXi, in the detail needed to make you comfortable with a solid ESXi-based FreeNAS system like you want, without spending a great deal of effort. Enthusiasm is appreciated, but so is self-study before asking basic questions.