Build idiot check


Asteroza

Dabbler
Joined
Feb 12, 2018
Messages
14
Hi folks,


I'm working on a potential NexentaStor to FreeNAS migration at work, along with new hardware. Is this build sane? It somewhat hinges on whether I can badger a Supermicro reseller into customizing this far.


The build basis is a Supermicro 6048R-E1CR60L, a top-loading 60-disk chassis that is normally sold as a complete system only.

https://www.supermicro.com.tw/products/system/4U/6048/SSG-6048R-E1CR60L.cfm

2x Xeon E5-2637 v4 3.5GHz 4 core
2x 64GB LRDIMM (1 per CPU)
--- (allows for LRDIMM buildout later, rather than locking in with RDIMM)
add the optional NVMe U.2 drive cage (which connects to CPU1 OCulink ports, but doesn't that mean only 4x U.2 drives?)
add the SIOM network card (thinking the quad 10G SFP+ Intel X710 based one)

Here's where things get weird. Note that the system name ends with E1; Supermicro's naming scheme implies this uses a single-expander (single-path) backplane. However, their separate backplane literature for the 30-disk backplanes shows the name BPN-SAS3-946SEL1/EL2, which implies there is a proper EL2 dual-path version of the backplane (the illustrations seem to confirm this).

https://www.supermicro.com.tw/manuals/other/BPN-SAS3-946SEL.pdf

So one potential configuration is the typical cascading backplane setup. The illustrations imply that single-HBA dual path can be done with the two connectors on the prebuilt system's mezzanine SAS HBA, but only when using the EL2 backplane. The system only has a single mezzanine slot for an HBA, though, and the HBA itself doesn't seem stackable, so I don't understand how they could do full dual path in the prebuilt system. I guess they use two HBAs on other motherboards or SBB modules?

So the bright idea I had is to use the normal PCIe slots and fit two Supermicro AOC-S3216L-L16iT SAS HBAs, each with four miniSAS connectors, so that each HBA can dual-path to both backplanes. Cables galore, though, and not necessarily justified: the total SAS bandwidth needed for the HDDs suggests 60 drives work out to roughly 5 SAS lanes (so round up to 8 lanes, i.e. 2 miniSAS connectors). I suppose just the mezzanine card plus a lower-spec two-connector HBA in one PCIe slot would cover the cascading setup with full dual path.

https://www.supermicro.com.tw/products/accessories/addon/AOC-S3216L-L16iT.cfm
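Rough math behind that 5-lane estimate, assuming something like 100 MB/s sustained per spindle (generous for anything other than pure sequential work):

60 drives x ~100 MB/s            ≈ 6,000 MB/s aggregate
one SAS3 lane at 12 Gb/s         ≈ 1,100-1,200 MB/s usable
6,000 / 1,200                    ≈ 5 lanes, so 8 lanes (two 4-lane miniSAS connectors) leaves headroom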

This all assumes I can badger the Supermicro reseller into fitting two BPN-SAS3-946SEL2 backplanes into the system; otherwise there are no dual-path options.

The other weird part of the setup is to use the NVMe U.2 drive cage to host some Optane drives for the ZIL (P4800X or 900P?), and to round that out with a Squid PCIe carrier board in the last remaining conventional PCIe x16 slot, hosting up to 4 M.2 drives for L2ARC (Samsung 960 PRO M.2 perhaps?).

PCI Express Gen 3 Carrier Board for 4 M.2 SSD modules - Amfeltec
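For reference, once the pool exists, wiring those in is the easy part; roughly something like this from the CLI (pool and device names are made up, and FreeNAS would normally do this through the volume manager GUI rather than by hand):

zpool add tank log mirror nvd0 nvd1        # mirrored Optane SLOG (hypothetical device names)
zpool add tank cache nvd2 nvd3 nvd4 nvd5   # the four M.2 drives striped as L2ARC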


HDD selection will probably be HGST 4TB SAS drives, to increase drive count and cut down on cost. The chassis has an additional 2x 2.5" SATA drive holder, which I would like to use for the OS ZFS syspool with suitable drives; I suppose I could also do 2x SATA DOMs, but then the 2.5" drive cage would go to waste.


Crazy, or just entirely dependent on my reseller relationship?


PS - How much of a pain is it to import an existing iSCSI zvol from NexentaStor into FreeNAS? Ideally I want to just yank the pool from the old server, import it into the new one, and copy/migrate directly there. I've read some stuff about applying ZFS user properties to store the device ID before export, or else ESXi complains. Am I setting myself up for a world of pain?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Crazy, or just entirely dependent on my reseller relationship?
It looks like it is down to how much the reseller is willing to work with you on it. Why are you trying to shove all that into one chassis?
You could go with a more conventional 4U or 3U server, put all the whiz-bang things you want in it, and cable that to a JBOD like this:

https://www.hgst.com/products/platforms/4U60G2-storage-platform

http://www.serversdirect.com/storage/das-and-jbod/hgst-4u60-jbod

Then you buy a base server that suits you, and the JBOD and cables, and put the accessories in it you want with no worries about what a vendor wants to do.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS - How much of a pain is it to import an existing iSCSI zvol from NexentaStor into FreeNAS? Ideally I want to just yank the pool from the old server, import it into the new one, and copy/migrate directly there. I've read some stuff about applying ZFS user properties to store the device ID before export, or else ESXi complains. Am I setting myself up for a world of pain?
Probably the best bet for this is to have the two systems set up together on a 10Gb network and do a ZFS send/receive to transfer a snapshot of the pool from one system to the other. Even if you go with the server you were thinking of, most vendors want to sell those with a minimum of half the drive bays populated. I don't know if NexentaStor does any funny (proprietary) things with their systems, but you might not be able to just move the drives over and import the pool.
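In rough terms, per zvol, it would be something like this (host and dataset names are just placeholders; you would do one last small incremental send after shutting down the initiators to catch the final changes):

# on the NexentaStor box
zfs snapshot tank/vm-vol1@migrate
zfs send tank/vm-vol1@migrate | ssh root@freenas zfs recv newpool/vm-vol1
# later, after quiescing the initiators, send just the delta
zfs snapshot tank/vm-vol1@final
zfs send -i @migrate tank/vm-vol1@final | ssh root@freenas zfs recv -F newpool/vm-vol1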
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I went and did some reading on the Nexenta site, and from the little I read it sounds like they are moving away from ZFS as their file system to a new distributed file system they have developed internally. The blog post was written by one of their developers back in December 2015, but I have not been keeping up with what they do or don't do.

I also looked into what you are trying to do, and I was thinking you could get a 2U rackmount to use as the "head" for the storage, with a board like the X10DRH-iT, maybe 512 GB of RAM, dual Xeons, and one or more 12G SAS HBAs with external connectors according to the number of enclosures. You need a lot of RAM, maybe even max it out, and at least one external two-port 12G SAS HBA like the LSI/Avago 9300-8e.

Then for the drives, you could use the HGST enclosure I linked to earlier or you could go with something like this:

SuperChassis 946ED-R2KJBOD (90 disks JBOD, up to 900 TB raw per enclosure)
The documentation I was looking at may be dated, but it should work with almost any drive up to 10TB. It would probably even do 12TB drives, but they probably were not released when the document was written. I am sure it would work with the 4TB drives you were looking at and then you have room to add more drives if your needs increase.
 

Asteroza

Dabbler
Joined
Feb 12, 2018
Messages
14
It looks like it is down to how much the reseller is willing to work with you on it. Why are you trying to shove all that into one chassis?

A rack space reservation issue (4U only) means a direct old/new server swap. During data migration I could grit my teeth and put the old server on the floor if I need a direct network link.

I also get the impression the HGST JBOD is rebadged Quanta gear?

Nexenta is chasing the object storage crowd, hence their NexentaEdge distributed file system, so while not forgotten, NexentaStor (their Illumian-based ZFS SAN OS) is slowly withering.

I saw that Supermicro does have a 4U 90x HDD server chassis based on the JBOD chassis (that motherboard rear overhang is serious!), but the PCIe slot situation isn't great and there's no NVMe U.2 cage. I suppose I would start out with a single pool, so I could shove an Optane M.2 and an L2ARC M.2 on one Squid card to cut down on PCIe slot use. Also, I am not anticipating needing to go that large in HDD count.

As far as migration goes, the current ashift=9 zpool appears to be version 5000, and as far as I understand I am not using any distro-specific feature flags, so zfs send-ing the zvols to an ashift=12, 4K-sector-native pool on FreeNAS should be comparatively painless. Setting up the sent zvols as iSCSI targets again seems potentially difficult, though. There doesn't seem to be a clear guide walking through the prep/copy/configure steps for iSCSI zvols imported into FreeNAS from other distros.
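For the feature-flag and ashift question, my plan for sanity-checking before committing, on the Nexenta side (pool name is a placeholder and commands are from memory):

zpool get version tank              # reports '-' once the pool is on feature flags (v5000)
zpool get all tank | grep feature@  # which feature flags are actually enabled/active
zdb -C tank | grep ashift           # confirm the vdevs really are ashift=9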
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Here's where things get weird. Note that the system name ends with E1; Supermicro's naming scheme implies this uses a single-expander (single-path) backplane. However, their separate backplane literature for the 30-disk backplanes shows the name BPN-SAS3-946SEL1/EL2, which implies there is a proper EL2 dual-path version of the backplane (the illustrations seem to confirm this).
Yup, backplanes are available in the following formats:
  • SATA-style connectors (TQ)
  • SFF-8086 or 8643 connectors
  • Single expander (EL1)
  • Dual expander (EL2)
As far as migration goes, the current ashift=9 zpool appears to be version 5000, and as far as I understand I am not using any distro-specific feature flags, so zfs send-ing the zvols to an ashift=12, 4K-sector-native pool on FreeNAS should be comparatively painless. Setting up the sent zvols as iSCSI targets again seems potentially difficult, though.
If you're using zfs send/recv, you can pretty much always transfer data to another ZFS pool by using the default stream, which you'll probably have to do anyway to get rid of the old ashift. You could probably even do it between OpenZFS and Oracle ZFS.
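That is, a plain per-dataset send with no -R or -p, so none of the source pool's properties come along for the ride; ashift isn't part of the stream at all, it's fixed when the new pool's vdevs are created (names are placeholders, and the pipe would go over ssh or whatever transport you use):

zfs send tank/vol1@migrate | zfs recv newpool/vol1
# as opposed to: zfs send -R tank@migrate ...   (replication stream, drags properties with it)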

Setting up the sent zvols as iSCSI targets again seems potentially difficult, though. There doesn't seem to be a clear guide walking through the prep/copy/configure steps for iSCSI zvols imported into FreeNAS from other distros.
If you want your clients to keep working without being reconfigured, I imagine you'll have to set up iSCSI with the same identifiers as before. Not exactly sure what that involves.
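At a guess, you'd note down what ESXi currently sees on the Nexenta side, assuming NexentaStor exposes the standard illumos COMSTAR tools, and then re-enter those values (extent serial number, target IQN) when recreating the extents/targets on FreeNAS:

stmfadm list-lu -v       # LU GUID, serial number and backing zvol for each LUN
itadm list-target -v     # the target IQNs the initiators are logged into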
 