- Joined
- Nov 25, 2013
- Messages
- 7,776
Someone asked me in a private conversation about using multiple NVMe drives on an add-on card like the Supermicro AOC-SLG3-2M, and about the implications for passthrough in a hypervisor/TrueNAS hybrid context, e.g. ESXi. Confusion arose over the general recommendation to never pass through individual drives, but only HBAs as PCIe devices.
The person was concerned because the card in question does not itself show up as a separate device. It is a mostly passive bus adapter (modulo some resistors and capacitors for signal integrity and a handful of active components).
What is, from an "advanced home lab" point of view, an absolutely incredible feature of NVMe technology is that each "disk" is its own PCIe device. There is no controller or HBA with drives connected behind it. In theory that makes each individual SSD more expensive, but in this industry we have long learned that uniform interfaces and economies of scale beat component count every single time. And the SSD controller and the PCIe interface have most likely long been merged into a single chip, or at least chipset - otherwise the prices for "prosumer" NVMe SSDs would not be possible.
What this means in the end is that you can cram this mainboard and the aforementioned card into this chassis - OK, you should add an active CPU cooler, but that is also available. Then add a 2.5" SATA SSD of sufficient size to install ESXi, including a small datastore for VM images.
Add three M.2 NVMe SSDs of your choice and you can run
- ESXi
- 3 (!) TrueNAS SCALE VMs with
  - one NVMe SSD passed through as a PCIe device to each VM
  - one network interface passed through as a PCIe device to each VM
Or just a single VM with three SSDs and two network interfaces so both ESXi and TrueNAS can use link aggregation.
Or whatever suits your fancy.
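As a quick sanity check that each NVMe drive really is its own PCIe function (and therefore individually passable), you can enumerate the PCI bus from inside a Linux guest such as a TrueNAS SCALE VM. A minimal sketch, assuming a standard Linux sysfs layout - it simply filters devices by PCI base class/subclass 0x0108 (mass storage, non-volatile memory):

```shell
#!/bin/sh
# Sketch: list NVMe drives as individual PCIe functions via sysfs
# (Linux guest, e.g. TrueNAS SCALE). PCI class 0x0108xx = NVM subsystem.
for dev in /sys/bus/pci/devices/*; do
    [ -r "$dev/class" ] || continue            # skip if sysfs is absent
    case "$(cat "$dev/class")" in
        0x0108*) echo "NVMe PCIe function: ${dev##*/}" ;;
    esac
done
echo "scan complete"
```

In the setup above, each of the three M.2 SSDs should appear as its own PCI address with no HBA in between - and those addresses are exactly what the hypervisor offers for PCIe passthrough.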
When we get chassis with U.2 slots that are not 19" and ridiculously deep, loud, and power hungry ... we can finally put the HBA discussion to rest.
Just my thoughts after that private exchange.