Installing JBOD

listhor

Contributor
Joined
Mar 2, 2020
Messages
133
Long story short: 2 years ago I wanted to build mainly media storage (and play with ZFS) and needed high-capacity drives, but I was (and still am) limited by the 1U form factor and therefore had to use 2.5" drives. I initially wanted to use SSDs, but the price was way too high. As I had a chance to buy the cheap and shitty ST5000LM000, I did, planning to replace them over time with SSDs (hoping prices would get lower).
Since prices don't want to go south, I need to change my approach. In the meantime I changed the rack and I'm now able to add some more gear. So I've been thinking about putting in a JBOD with 6x 6TB WD Red Plus and possibly 2x M.2 NVMe.
Could you please advise on the following:
- a 2U JBOD model for 6x HDDs, ideally with 2x M.2 or a PCIe slot, equipped with a SAS interface
- a compatible SAS HBA card to install in a Supermicro X11SCH-LN4F
- how noisy is the WD Red Plus?
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
JBODs are enclosures for SATA/SAS devices only; they have neither PCIe nor M.2 slots.
  • a JBOD with a single-expander backplane accepts SATA or SAS hard disks
  • a JBOD with a dual-expander backplane accepts only SAS hard disks
The best choice for most users is a JBOD with a single-expander backplane; it is also cheaper.

If you have enough money, you can buy a SAS3/SATA Supermicro JBOD, like this (I have two of them, though as 4U JBODs with 44 disks):
They work very well with TrueNAS 12.0 and you can even use the location LED with sas3ircu.
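
For example, to blink a bay's locate LED from the TrueNAS shell (a sketch; the controller index 0 and the 2:5 enclosure:bay address are placeholders you read out of sas3ircu's own listings):

    sas3ircu list               # enumerate controllers and their indexes
    sas3ircu 0 display          # map each disk to an Enclosure:Bay address
    sas3ircu 0 locate 2:5 ON    # light the locate LED on enclosure 2, bay 5
    sas3ircu 0 locate 2:5 OFF   # turn it back off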

If you are short on money, you may try a SAS2/SATA JBOD, because you will see no difference between SAS2 and SAS3 when you have only spinning disks. In this case, you may need:
Many of them are available on the pre-owned market, but I cannot give you the exact reference to buy, because I do not own such a system myself.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
a JBOD with a dual-expander backplane accepts only SAS hard disks

Okay, sorry, but I've got to call that out... this isn't true.

It's true that only the primary SAS path will be usable for talking to the SATA disks, but that is not the same thing as "accepts only SAS disks".
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
Okay, sorry, but I've got to call that out... this isn't true.

It's true that only the primary SAS path will be usable for talking to the SATA disks, but that is not the same thing as "accepts only SAS disks".
Thank you for the clarification; that detail is worth knowing.
 

listhor

Contributor
Joined
Mar 2, 2020
Messages
133
Thanks guys. I will have a closer look at the above hardware. When it comes to cost, the solution must be significantly cheaper than a set of 5TB SSDs :smile:. Otherwise it would just be overcomplicating the whole setup...
Those HBA cards, do they need to be in IT mode?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, 4TB SSDs ... I'm not really aware of any reasonably-priced 5TB SSDs ... the 4TB 870 Evo is around $500. Prices went up last year due at least in part to Chia crypto mining.

Here in the US, we just had Black Friday, where we saw shuckable 12TB and 14TB HDDs available for $199.


HDD remains cheaper than SSD by a good bit, and with ZFS, if you have lots of RAM and some L2ARC, HDD can be made to perform fairly well by maintaining low pool occupancy rates, at least for some common workloads.
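
If you go the L2ARC route, adding it is a one-liner from the shell (a sketch; "tank" and "nvd0" are placeholders for your pool and NVMe device, and the web UI can do the same when extending a pool):

    zpool add tank cache nvd0    # attach the NVMe device as an L2ARC cache vdev
    zpool iostat -v tank         # the cache device now appears in its own section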

You should always use LSI HBA's in IT mode. The explanation is contained in this article:


The best bang-for-buck is probably the LSI 2308 based ones (PCIe 3, can be had for $30 used), but the LSI 3008 based ones are coming down in price too (sub-$100 used) and may be a bit more future-proof. Be aware that not only do they need to be in IT mode, they also need the correct firmware version.
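
On a FreeBSD-based TrueNAS install you can check what firmware a card is running without booting a flash tool (a sketch; mps is the SAS2 driver used by 2008/2308 cards and mpr the SAS3 driver for 3008 cards; for SAS2, phase 20, i.e. 20.00.07.00, is the version usually recommended to match the driver):

    sysctl dev.mps.0.firmware_version   # SAS2 HBAs (2008/2308)
    sysctl dev.mpr.0.firmware_version   # SAS3 HBAs (3008)
    dmesg | grep -iE 'mps|mpr'          # the driver also logs the firmware it found at boot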
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Keep in mind that mixing SAS & NVMe drives is going to land you squarely in the "new / current shipping" category, and this is technology somewhat in flux. If you need a couple of NVMe drives, just add PCIe x4 cards and keep them separate. Consider: most of the controllers are going to be PCIe x8 themselves.
 

listhor

Contributor
Joined
Mar 2, 2020
Messages
133
When it comes to NVMe, I would need to remove the NVMe PCIe extension card from the server in order to install the SAS HBA card. My setup is far from any professional approach, since I have ESXi installed on bare metal with the AHCI controller and a PCIe NVMe SSD controller passed through to the TrueNAS VM. That's why I hoped to get that NVMe installed in some sort of JBOD. In any scenario, I would need to pass the SAS/SATA card through (as a PCI device) to the VM.
Yeah, I was a bit naive counting on SSD prices dropping. It won't happen soon, and Black Friday in Europe isn't the same thing as it is in the US.
As this would be my first approach to a JBOD, are you aware of proper tutorials? Or maybe it's a straightforward thing: get a SAS/SATA HBA card, flash it with the proper IT firmware, install the HDDs in the enclosure and connect the whole thing to the server? I have a fairly large amount of RAM (64GB) dedicated to the TrueNAS VM, so there's a chance to speed up ZFS with proper HDDs.
One more thing: if I succeed with the above, will I be able to migrate the pool from the current drives to the JBOD drives?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
OK, it sounds like you've created a somewhat complicated setup.

Here's the thing I was trying to point out: NVMe drives connect directly to PCIe lanes, and most of the current stuff connects at PCIe x4. Moving those away from the CPU to another box is going to cause all kinds of problems. When you drop enterprise NVMe drives that have the SAS-compatible plug into a SAS expander, they pretend they're SAS drives, unless you have one of the very new universal backplanes & controllers. Even if you do, they won't connect at anything like the PCIe x4 speed you might be expecting. They will be quite hamstrung in terms of bandwidth. External box != NVMe unless you have $$$$.
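
Rough numbers to put that in perspective: PCIe 3.0 carries about 985 MB/s per lane after 128b/130b encoding, so an x4 NVMe drive expects roughly 3.9 GB/s, while a single 12Gb/s SAS3 lane delivers on the order of 1.2 GB/s usable. A drive that ends up negotiating a single SAS-style lane is therefore giving up around two-thirds of its bandwidth before you even count expander oversubscription.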

That's all I wanted to point out, I can't help you with slots & controller passthru, etc...
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
Flashing an LSI HBA with the proper firmware is straightforward (see the example after this list):
  • you create a USB key with FreeDOS
  • depending on your card, you copy sas2flash or sas3flash to the USB key
  • you copy the firmware to the USB key
  • you boot from the USB key and run the command to flash the HBA
If you are not familiar with FreeDOS, you can also put the HBA in a Windows computer and flash it from there;
it is easier, but you have to remove/reinstall the HBA each time you upgrade the firmware.
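
As an illustration of that last step, here is what the sequence typically looks like for a SAS2 card such as a 9211-8i (a sketch: 2118it.bin and mptsas2.rom are placeholder names for whatever your card's IT firmware package actually contains, the DOS build of the tool is named sas2flsh.exe, and the # lines are annotations, not DOS syntax):

    sas2flash -listall                          # confirm the card is detected; note its SAS address
    sas2flash -o -e 6                           # erase the old flash -- do not reboot before the next step
    sas2flash -o -f 2118it.bin -b mptsas2.rom   # write the IT firmware and (optionally) the boot ROM
    sas2flash -o -sasadd 500605bXXXXXXXXX       # restore the SAS address noted earlier if it was cleared
    sas2flash -list                             # verify the card now reports IT firmware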

But given your setup, with ESXi and TrueNAS, if I were you, I would not add a JBOD but rather a pre-owned 2U or 4U Supermicro server with an X9 or X10 motherboard, install TrueNAS Core on it, and then connect it to ESXi through the network.
It would be much easier to manage and it would free some memory for the other VMs.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
To clarify a point, there are tri-mode HBAs that support SATA, SAS & NVMe. Same vendor: LSI / Broadcom (formerly Avago).
Here is a link:

What this does is allow a JBOD to include U.2-type disk slots:

If you have an M.2 NVMe drive, there are adapter boxes to make it a 2.5" form-factor U.2 drive. From what I can tell, only the enterprise world uses U.2, so they are out-of-this-world expensive. Even a 2.5" disk chassis with an M.2-to-U.2 adapter box is not cheap.

But you CAN get NVMe externally through a SAS-type connector.


Would I recommend it for someone? NO.
Basically, if you don't already know how to implement such a thing, then it's outside your scope / league / ability to make reliable.

Would I do so for myself? Maybe...
 

listhor

Contributor
Joined
Mar 2, 2020
Messages
133
Like I said, the main goal is to get a JBOD enclosure connected to the server significantly more cheaply than replacing the 2.5" HDDs with SSDs. Otherwise the whole endeavor doesn't make sense. I know what I want to achieve and how to do it (more or less :cool: ), but first I need to be sure that the hardware I'm going to purchase is fit for purpose and fully compatible, and it all comes down to details. That's why I keep asking questions here... and I'm grateful for the answers.
The mobo I have is equipped with 2 PCIe slots. As the HBA 9500-8e is pretty expensive, and to avoid performance issues, I think I could use the x16 slot for the HBA card and the x8 slot for a low-profile NVMe adapter. Available space could be an issue.

[Screenshot attachment: Zrzut ekranu 2021-12-10 o 20.54.51.png]

So, as per the advice, I think I'll try to get:
- an LSI 9300-8e
- a 2U SAS3 JBOD
- a low-profile PCIe NVMe adapter
 

listhor

Contributor
Joined
Mar 2, 2020
Messages
133
What's the best way to migrate a pool between drives within the same system (TrueNAS)? zfs send/receive?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
What's the best way to migrate a pool between drives within the same system (TrueNAS)? zfs send/receive?

ZFS send/receive migrates datasets within or between pools, both on the same system and between systems. So yes, you can use it, but you mentioned "migrating a pool between drives", which is another thing.

You can replace the drives in a pool and migrate the pool to an entirely different set of disks, but you have to accept some limitations. The disks have to be the same size or larger than the ones being replaced, and you can't change the pool geometry. No converting RAIDZ1 to a mirror, etc... Within those limitations, you configure the additional drives, and then replace each component of the pool one by one and let it silver in. Your pool will appear to stay in place, and even remain usable, and will simply shift to the new disks. When the process is complete, the old drives will be available for another use, or can be removed.
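
In CLI terms, each of those steps looks something like this (a sketch; "tank", the gptid, and "da6" are placeholders, and on TrueNAS the web UI's Replace button is the better path because it also recreates the partition layout for you):

    zpool status tank                            # note the gptid of the outgoing disk
    zpool replace tank gptid/OLD-DISK-GPTID da6  # resilver onto the new disk while the old one stays active
    zpool status tank                            # watch the resilver run to completion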
 

listhor

Contributor
Joined
Mar 2, 2020
Messages
133
Thanks, I'm familiar with single-disk replacement and resilvering. But it's time-consuming and a bit of a pain in the ..s when it comes to getting access to the server, especially times 6.
I don't want to change anything in the pool, simply migrate it as it is. So zfs send/receive makes a copy of the pool's contents and doesn't move the pool itself... and zpool export/import only lets you import pools that already exist on disks into TrueNAS, so it's useless for that purpose.

What about creating a new pool on the set of new disks, setting up a recursive replication task and, once it's done, exporting the old pool to remove it from the system? This would stop all services and jails pointing to locations in the old pool, correct? And if I rename the new pool to the old pool's name, would I be able to resume the services and jails?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399

rvassar

Guru
Joined
May 2, 2018
Messages
972
Thanks, I'm familiar with single-disk replacement and resilvering. But it's time-consuming and a bit of a pain in the ..s when it comes to getting access to the server, especially times 6.

It shouldn't be too slow unless you're using SMR disks, which saturate under sustained writes. If you replace the disks while they're still online, it should take perhaps several hours each. You need an extra SATA port, however.

I don't want to change anything in the pool, simply migrate it as it is. So zfs send/receive makes a copy of the pool's contents and doesn't move the pool itself... and zpool export/import only lets you import pools that already exist on disks into TrueNAS, so it's useless for that purpose.

What about creating a new pool on the set of new disks, setting up a recursive replication task and, once it's done, exporting the old pool to remove it from the system? This would stop all services and jails pointing to locations in the old pool, correct? And if I rename the new pool to the old pool's name, would I be able to resume the services and jails?

ZFS send/receive can get datasets moved between pools, and it's built into the UI; you don't need to go to the CLI to invoke it. Jails & plugins are going to be a special case, and there are additional steps.
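
For reference, the CLI version of the sequence you described would look roughly like this (a sketch under assumptions: the old pool is "tank", the new one "newtank", and services and jails are stopped first; the UI replication task performs the send/receive part for you):

    zfs snapshot -r tank@migrate                    # recursive point-in-time snapshot of every dataset
    zfs send -R tank@migrate | zfs recv -F newtank  # replicate the full hierarchy with properties
    zpool export tank                               # detach the old pool
    zpool export newtank
    zpool import newtank tank                       # re-import the new pool under the old pool's name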
 

listhor

Contributor
Joined
Mar 2, 2020
Messages
133
OK, so to avoid exporting jails and issues with services, I could assemble the JBOD (with all the new disks in), connect it (via the SAS HBA card) to the server and then, one by one, set each old disk offline and replace it with a new one in the web GUI?
The WD Red is SMR; is the Red Plus CMR?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
OK, so to avoid exporting jails and issues with services, I could assemble the JBOD (with all the new disks in), connect it (via the SAS HBA card) to the server and then, one by one, set each old disk offline and replace it with a new one in the web GUI?
You don't want to set them offline, just replace them. That way the pool isn't degraded while the new device silvers in, so if something else goes wrong, your data is safe. Also... ZFS doesn't care where the drives are connected. MB SATA vs. HBA in an external JBOD... it marks the disks up with a UUID-like identifier and will simply find the drives and stitch the pool together. You can re-cable all you want...
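
Which is why a physical move can be as simple as this (a sketch; "tank" is a placeholder, and on TrueNAS you'd do the export/import from the web UI so the middleware stays in sync):

    zpool export tank    # before powering down and re-cabling the drives
    zpool import         # after boot: scans all disk labels and lists importable pools
    zpool import tank    # ZFS reassembles the pool by member GUIDs, not by port or cable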
WD Red is SMR, is Red Plus CMR?

I don't have a list of which is SMR & CMR these days, and I'm kind of shunning WD for imposing that whole sideshow on us... Fairness disclosure: I'm ex-HGST.
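
One way to check what a given disk actually is (a sketch; "ada0" is a placeholder device name): read the exact model string and compare it against WD's published CMR/SMR list:

    smartctl -i /dev/ada0 | grep -i model
    # e.g. for the 6TB models: WD60EFAX (Red) is SMR, while WD60EFZX (Red Plus)
    # and the older WD60EFRX are CMR -- per WD's own disclosure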
 

listhor

Contributor
Joined
Mar 2, 2020
Messages
133
You don't want to set them offline, just replace them. That way the pool isn't degraded while the new device silvers in, so if something else goes wrong, your data is safe. Also... ZFS doesn't care where the drives are connected. MB SATA vs. HBA in an external JBOD... it marks the disks up with a UUID-like identifier and will simply find the drives and stitch the pool together. You can re-cable all you want...


I don't have a list of which is SMR & CMR these days, and I'm kind of shunning WD for imposing that whole sideshow on us... Fairness disclosure: I'm ex-HGST.
Just so I don't miss anything: if I select the "replace" option directly, the pool will keep using the old disk until it is completely replicated onto the new one, and then instantly switch to the new disk while removing the old one from the pool?
I found the Red Plus product brief, and these HDDs are CMR disks; it seems like the 8TB is quite a good option to choose. BTW, how does the disk's cache work with the ZFS cache? I mean, is ZFS aware of the disk's cache?

[Attachments: WD Red Plus spec.png, WD Red Plus spec II.png]
 