Disk shelves/JBODs and controllers.

SCSPhil

Cadet
Joined
Aug 27, 2016
Messages
8
Ok, so after a recent server fault, I'm in the market for a new primary host. But selecting said host from surplus gear would be a lot easier if I used a SAS JBOD/disk shelf, and I'm a bit confused about what I need to do so.
I currently run an LSI 9211-8i. Is there a way to use this card on an external array? If not, what's an easy way to determine which LSI controllers would be bad or good for this?
What about using it with a NetApp DS4246 with IOM6 modules? These can be had on eBay relatively cheap. It seems those use QSFP connectors, but QSFP SAS to SFF-8088 cables seem purpose-built for this; what about SFF-8087 (I think that's the internal connector)?
Thanks in advance.
 
Joined
Dec 29, 2014
Messages
1,135
Anything with an "8e" in the card type would work for this. I can't speak to the NetApp shelves, but I can tell you from personal experience that an LSI 9207-8e works great with the HP D2600 (12 x 3.5) and D2700 (25 x 2.5) 6G shelves.

Edit: D2700 is 25 x 2.5, NOT 24 X 2.5. Long day... :-(
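
If you want to double-check from the FreeNAS side that the card and the shelf are actually being seen, something like this quick Python sketch works. It's just my own script, assuming a FreeBSD/FreeNAS box where LSI's sas2flash and the stock camcontrol are on the PATH (run as root):

```python
#!/usr/bin/env python3
"""Sanity-check that an LSI HBA and its attached shelf are visible.

A minimal sketch, assuming a FreeBSD/FreeNAS host with LSI's sas2flash
utility and the stock camcontrol on the PATH. Run as root.
"""
import re
import subprocess


def run(cmd):
    # Capture stdout; raises if the tool is missing or exits non-zero.
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout


# Show the controller(s) sas2flash can see, including the firmware
# revision and whether the card runs IT (initiator-target) firmware,
# which is what you want for ZFS, rather than IR (RAID) firmware.
print(run(["sas2flash", "-list"]))

# List everything CAM enumerated; drives in an external shelf show up
# here alongside a "ses" enclosure-services device for the shelf itself.
devlist = run(["camcontrol", "devlist"])
print(devlist)
print("Enclosure (ses) devices seen:",
      sum(1 for line in devlist.splitlines() if re.search(r"ses\d", line)))
```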
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
NetApp DS4246 with IOM6 modules?
Too much trouble. There are other, better options even if the NetApp units are cheap. How many drives are you looking to run?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Anything with an "8e" in the card type would work for this. I can't speak to the NetApp shelves, but I can tell you from personal experience that an LSI 9207-8e works great with the HP D2600 (12 x 3.5) and D2700 (24 x 2.5) 6G shelves.
https://www.ebay.com/itm/HP-AJ940A-StorageWorks-D2600-Disk-Enclosure-AJ940-63002-12x-2TB-SAS-HDDs-24TB/303061540367
That is a fair-looking option. It looks like pickings are slim right now and prices are higher than usual. I got a 24-bay unit a couple of years ago for only $150 plus shipping.

Edit: I might be mistaken... This appears to include 12 x 2TB drives...
 


SCSPhil

Cadet
Joined
Aug 27, 2016
Messages
8
Too much trouble. There are other, better options even if the NetApp units are cheap. How many drives are you looking to run?
Why is the NetApp too much trouble? I've seen videos of people using them just fine, but I've never attempted it myself, so is there a hidden pitfall I'm not aware of?
I'm looking to have at least 12 bays in a shelf; 24 would be a nice option. There are NetApp 24-bay shelves for $300-400, which is why I was looking at them, but upon closer inspection it seems most of them are missing either PSUs or caddies.

The Dell you linked seems like a much more complete solution. Sure, it has fewer bays, but it comes with all its PSUs, modules, and caddies. And these can all be daisy-chained if needed, right?

The rest of your links are super useful and tell me what I need to know. Thanks. :) I could convert the card I have, but for that price I could probably just buy a whole new card.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Why is the NetApp too much trouble?
I mainly say that because you need a special cable that is more expensive, and there have been a few people who had compatibility/speed problems with some of them. There are plenty of other options that are just plain easier to work with. I know they can be used successfully, but they do present some additional challenges.
The dell you linked seems like a much more complete solution. Sure it's less bays, but comes with all its PSU's, modules, and caddies. And these can all be daisy chained if needed, right?
Sure. I have a system at work that has four 16-bay disk shelves attached, and the documentation says it can support four more, but we will probably buy new hardware before that happens. You don't need to use both interface modules on the back of the disk enclosure unless you are using two SAS controllers in the server, and there is no speed advantage to that; it is for fault tolerance.
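
Side note on the dual modules: if you ever do cable both, each drive enumerates as two device nodes and multipath ties them together. Here's a rough Python sketch of how you could see that for yourself; it's my own illustration, assuming a FreeBSD/FreeNAS host with smartmontools installed and FreeBSD-style /dev/daN names:

```python
#!/usr/bin/env python3
"""Group /dev/daN nodes by drive serial to spot dual-pathed disks.

A minimal sketch, assuming a FreeBSD/FreeNAS host with smartmontools
installed. A drive reachable through both interface modules of a shelf
enumerates as two device nodes with the same serial number.
"""
import glob
import re
import subprocess
from collections import defaultdict

paths_by_serial = defaultdict(list)

for dev in sorted(d for d in glob.glob("/dev/da*")
                  if re.fullmatch(r"/dev/da\d+", d)):
    try:
        out = subprocess.run(["smartctl", "-i", dev],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        raise SystemExit("smartctl not found; install smartmontools")
    # SATA drives report "Serial Number:", SAS drives "Serial number:".
    m = re.search(r"Serial Number:\s*(\S+)", out, re.IGNORECASE)
    if m:
        paths_by_serial[m.group(1)].append(dev)

for serial, devs in sorted(paths_by_serial.items()):
    tag = "two paths" if len(devs) > 1 else "one path"
    print(f"{serial}: {', '.join(devs)} ({tag})")
```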
 
Joined
Dec 29, 2014
Messages
1,135
Edit: I might be mistaken... This appears to include 12 x 2TB drives...
No, you are reading that correctly. The only other things you would need are a pair of mini-SAS (SFF-8088) to mini-SAS cables to connect to the 9207-8e (or the like).
 

SCSPhil

Cadet
Joined
Aug 27, 2016
Messages
8
Thanks for all your help. Got an MD1200 and an 8e card (had to wait a few days to order because my existing server is kinda failing, but I needed to know if I could extend its life in a weird way, long story). Once I get this working, replacing hosts will be muuuuuuuuuuuch cheaper.
Now just to figure out what rail kit to buy for the shelf...
 

dich

Cadet
Joined
Aug 7, 2017
Messages
8
Hope I can be helped here as the thread seems relevant: did anyone ever try to daisy-chain DELL JBODs (Powervault MD3xxx) with SuperMicro JBODs?
Thanks for any hints!
 

Evi Vanoost

Explorer
Joined
Aug 4, 2016
Messages
91
Hope I can be helped here as the thread seems relevant: did anyone ever try to daisy-chain DELL JBODs (Powervault MD3xxx) with SuperMicro JBODs?
Thanks for any hints!
You can; I've had several brands daisy-chained. You just have to make sure that you're actually running a JBOD and not some weird embedded RAID controller, and make sure your upstream/downstream wires are correct, which can take some plugging in and out if they're not clearly marked.

In the end, most of them are just using branded LSI expanders.
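
If the ports aren't marked, it helps to be able to see what the HBA thinks the chain looks like after each re-plug. Something like this rough Python sketch (my own; it assumes LSI's sas2ircu utility is installed and the HBA is controller 0) does the trick:

```python
#!/usr/bin/env python3
"""Summarize what an LSI SAS2 HBA sees after daisy-chaining shelves.

A minimal sketch, assuming LSI's sas2ircu utility is installed and the
HBA is controller 0. If a chained shelf is really a JBOD, its expander
and raw disks show up here; if an embedded RAID controller is in the
way, the individual drives simply won't appear.
"""
import re
import subprocess

out = subprocess.run(["sas2ircu", "0", "DISPLAY"],
                     check=True, capture_output=True, text=True).stdout

# The exact report format varies a bit between sas2ircu versions, so
# treat this parsing as approximate.
enclosures = set(re.findall(r"Enclosure#\s*:\s*(\d+)", out))
disks = out.count("Device is a Hard disk")

print(f"Enclosures/expanders seen: {len(enclosures)}")
print(f"Raw disks seen:            {disks}")
```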
 

dich

Cadet
Joined
Aug 7, 2017
Messages
8
You can; I've had several brands daisy-chained. You just have to make sure that you're actually running a JBOD and not some weird embedded RAID controller, and make sure your upstream/downstream wires are correct, which can take some plugging in and out if they're not clearly marked.

In the end, most of them are just using branded LSI expanders.

Thanks Evi. Meanwhile I found out that the Powervaults aren't "just" JBODs but really, as you said, have the RAID controller embedded and customized (branded), which also restricts the type of other JBODs that can be daisy-chained to them. In our setup, in fact, the host only had an HBA; all RAID functionality was delegated to the Powervault. We had to switch to another system and use zfsonlinux.
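
For anyone else who lands here on Linux: a quick way to tell the two cases apart is that a RAID head exposes virtual disks under the array's own vendor string instead of the drives'. A rough Python sketch of what I mean, assuming lsscsi is installed:

```python
#!/usr/bin/env python3
"""Tally SCSI device types/vendors to tell JBODs from RAID heads.

A minimal sketch for the Linux side, assuming lsscsi is installed.
A true JBOD shows the drives' own vendor strings (SEAGATE, HGST, ...)
plus an enclosure device; a Powervault MD3xxx-style array instead
exposes vendor "DELL" virtual disks, the tell-tale of an embedded
RAID controller sitting between you and the spindles.
"""
from collections import Counter
import subprocess

out = subprocess.run(["lsscsi"], capture_output=True, text=True).stdout

kinds = Counter()
for line in out.splitlines():
    # lsscsi columns: [H:C:T:L] type vendor model rev device-node
    fields = line.split()
    if len(fields) >= 3:
        kinds[(fields[1], fields[2])] += 1

for (dtype, vendor), count in sorted(kinds.items()):
    print(f"{count:3d} x type={dtype:<10s} vendor={vendor}")
```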

Thanks again!
 