Hi folks,
Working on a potential NexentaStor to FreeNAS migration at work, along with new hardware. Is this build sane? It somewhat depends on whether I can badger a Supermicro reseller into customizing this far.
The build is based on a Supermicro 6048R-E1CR60L, a top-loading 60-disk chassis normally sold as a complete system only.
https://www.supermicro.com.tw/products/system/4U/6048/SSG-6048R-E1CR60L.cfm
2x Xeon E5-2637 v4 3.5GHz 4 core
2x 64GB LRDIMM (1 per CPU)
--- (allows further LRDIMM build-out later, rather than locking in with RDIMMs)
add the optional NVMe U.2 drive cage (which connects to the CPU1 OCuLink ports, but doesn't that mean only 4x U.2 drives?)
add the SIOM network card (thinking the quad-port 10G SFP+ Intel X710-based one)
Here's where things get weird. Note the system name contains E1; Supermicro's naming scheme implies this uses a single-path backplane. However, their separate literature for the 30-disk backplanes shows the name BPN-SAS3-946SEL1/EL2, which implies there is a proper EL2 dual-path version of the backplane (the illustrations seem to confirm this).
https://www.supermicro.com.tw/manuals/other/BPN-SAS3-946SEL.pdf
So one potential configuration is the typical cascading backplane setup. The illustrations imply that single-HBA dual path can be done with the two connectors on the prebuilt system's mezzanine SAS HBA, but only when using the EL2 backplane. The system only has a single mezzanine slot for an HBA, though, and the HBA itself doesn't seem stackable, so I don't understand how they would do full dual path on the prebuilt system. I guess they use two HBAs on other motherboards or SBB modules?
So the bright idea I had is to use the normal PCIe slots and fit two Supermicro AOC-S3216L-L16iT SAS HBAs, each with 4 miniSAS connectors, so each card dual-paths to both backplanes. Cables galore, though, and not necessarily justified: total SAS bandwidth for the HDDs suggests 60 drives only need roughly 5 SAS channels (so round up to 8 channels, i.e. 2 miniSAS connectors; rough math below). I suppose just the mezzanine card plus a lower-spec 2-connector HBA in one PCIe slot would cover the cascading setup with full dual path.
https://www.supermicro.com.tw/products/accessories/addon/AOC-S3216L-L16iT.cfm
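The back-of-envelope behind that "roughly 5 channels" figure, with my own assumptions of ~100 MB/s sustained per spindle and ~1200 MB/s usable per 12Gb/s SAS3 lane:

```
# Back-of-envelope SAS bandwidth check for a 60-HDD top loader.
# Assumptions (mine, not from any spec sheet): ~100 MB/s sustained per nearline HDD,
# ~1200 MB/s usable payload per 12 Gb/s SAS3 lane, 4 lanes per miniSAS connector.
import math

hdd_count = 60
mb_per_hdd = 100          # assumed sustained throughput per drive, MB/s
mb_per_sas3_lane = 1200   # assumed usable throughput per 12Gb/s lane, MB/s
lanes_per_connector = 4   # x4 miniSAS HD

total_mb = hdd_count * mb_per_hdd                            # ~6000 MB/s aggregate
lanes_needed = total_mb / mb_per_sas3_lane                   # ~5 lanes
connectors = math.ceil(lanes_needed / lanes_per_connector)   # 2 connectors (8 lanes)

print(f"aggregate ~{total_mb} MB/s -> ~{lanes_needed:.1f} lanes -> {connectors} x4 connectors")
```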
This all assumes I can badger the Supermicro reseller into fitting two BPN-SAS3-946SEL2 backplanes into the system; otherwise there are no dual-path options at all.
The other weird part of the setup is to use the NVMe U.2 drive cage to host some Optane drives for the ZIL (P4800X or 900P?), and round that out with an Amfeltec Squid PCIe carrier board in the last remaining conventional PCIe slot, its x16 connector hosting up to 4 M.2 drives for L2ARC (Samsung 960 PRO M.2 perhaps?).
PCI Express Gen 3 Carrier Board for 4 M.2 SSD modules - Amfeltec
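One sanity check on that L2ARC plan, since L2ARC headers live in the 128GB of RAM/ARC. This is only a sketch; the ~70 bytes per cached block, the drive sizes, and the block sizes are my assumptions and the real figure varies by ZFS version and workload:

```
# Rough L2ARC header RAM estimate for 4x M.2 cache devices.
# Assumptions: 4x 2TB M.2 drives, ~70 bytes of ARC header per cached block,
# and two candidate block sizes - 128 KiB records vs 16 KiB zvol blocks.
l2arc_bytes = 4 * 2 * 2**40          # assumed 4x 2TB of L2ARC
header_bytes = 70                    # assumed RAM cost per L2ARC-resident block

for label, block in [("128K records", 128 * 2**10), ("16K zvol blocks", 16 * 2**10)]:
    blocks = l2arc_bytes // block
    ram_gib = blocks * header_bytes / 2**30
    print(f"{label}: ~{ram_gib:.1f} GiB of ARC headers for a full L2ARC")
```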
HDD selection will probably be HGST 4TB SAS, to increase drive count and cut down on cost. The chassis has an additional 2x 2.5" SATA drive holder, which I'd like to use for the OS ZFS syspool with suitable drives; I suppose I could also do 2x SATA DOMs, but then the 2.5" cage would go to waste.
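Rough capacity math for 60x 4TB, just to frame the drive-count trade-off; the vdev layouts here (straight mirrors vs 10-wide raidz2, no spares) are purely my assumptions for illustration:

```
# Rough usable-capacity sketch for 60x 4TB SAS drives under two example layouts.
# Assumptions (mine, not from the post): all 60 bays filled, no hot spares,
# ignoring ZFS metadata overhead and free-space headroom.
drives, size_tb = 60, 4

mirror_usable = (drives // 2) * size_tb                                  # 30 pairs -> 120 TB
raidz2_width = 10
raidz2_usable = (drives // raidz2_width) * (raidz2_width - 2) * size_tb  # 6 vdevs -> 192 TB

print(f"mirrors: ~{mirror_usable} TB, 10-wide raidz2: ~{raidz2_usable} TB (raw, before overhead)")
```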
Crazy, or just entirely dependent on my reseller relationship?
PS - How much of a pain is it to import an existing iSCSI zvol from NexentaStor into FreeNAS? Ideally I want to just yank the pool from the old server, import it on the new one, and copy/migrate directly there. I've read some things about needing to apply ZFS user properties to store the device ID before export, or else ESXi complains. Am I setting myself up for a world of pain?
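For reference, the mechanism I've read about is just ZFS user properties, which travel with the dataset across export/import. A hypothetical sketch of stashing the device ID before pulling the pool; the property name, zvol path, and ID value below are all made up:

```
# Hypothetical sketch: stash each zvol's SCSI device ID in a ZFS user property
# before exporting from NexentaStor, then read it back after importing on FreeNAS.
# "migration:device_id", the zvol path, and the ID value are placeholders only.
import subprocess

zvol = "tank/iscsi/esxi-datastore"        # placeholder zvol
device_id = "naa.600144f0deadbeef"        # placeholder LU device ID as ESXi sees it

# User properties just need a colon in the name and survive export/import.
subprocess.run(["zfs", "set", f"migration:device_id={device_id}", zvol], check=True)

# Later, on the FreeNAS side after 'zpool import':
result = subprocess.run(
    ["zfs", "get", "-H", "-o", "value", "migration:device_id", zvol],
    capture_output=True, text=True, check=True)
print(result.stdout.strip())
```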