LSI SAS 3008 8-port HBA to 24-port backplane - saturation?

Status: Not open for further replies.

beezel
Dabbler | Joined: Nov 7, 2016 | Messages: 33
I'm building a Supermicro box from the 6048R-E1CR24L (https://www.supermicro.com/products/system/4u/6048/ssg-6048r-e1cr24l.cfm). It uses a SAS3 12 Gb/s 8-port HBA with two (I think) miniSAS-HD connections to a 24-port BPN-SAS3-846EL1 backplane (https://www.supermicro.com/manuals/other/BPN-SAS3-846EL.pdf).

I am planning on using this primarily for platters (6.0 TB SAS3 12.0 Gb/s 7200 RPM 3.5" Hitachi Ultrastar 7K6000 (512e), to be exact). Will we be saturating either the HBA or the links if we break the one 8-port HBA out into 24 devices? I know there is napkin math that can show whether it'll be close or not - I'm just not sure how to do it.

Any ideas if we'll be overburdening this setup? We will primarily be running large VMs (mail archival, file servers, etc.) and will have PCIe NVMe SSDs (Intel P3520) for ZIL and L2ARC.
 

Chris Moore
Hall of Famer | Joined: May 2, 2015 | Messages: 10,080
No, you can't get enough transfer speed from the drives to come close to saturating the SAS controller.

 

beezel
Dabbler | Joined: Nov 7, 2016 | Messages: 33

Thanks Chris, that is kind of what I suspected. Any idea how to formulate the math to figure it out? We might be tossing some Samsung 850 Pros in there as well, since we don't really need 24 disks' worth of spinning storage. Now I'm wondering how much headroom I'd have left over and how to do the math to figure out a balance.
 

Ericloewe
Server Wrangler | Moderator | Joined: Feb 15, 2014 | Messages: 20,194
SAS3 provides 12 Gb/s per channel. You can connect to the expander with either four or eight lanes, for 48 Gb/s or 96 Gb/s. Figure roughly 1.5 Gb/s per HDD (yes, they're barely capable of saturating SATA 1.5 Gb/s) and 6 Gb/s per SSD.

Don't forget to use proper 2.5" spacers so as not to destroy airflow around your HDDs.
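
If you want to put that napkin math in writing, here's a rough Python sketch using the figures above. The per-device rates are rules of thumb rather than benchmarks, and the lane count assumes both 4-lane miniSAS-HD cables are connected to the expander:

```python
# Rough saturation estimate for the HBA-to-expander link.
# Per-device figures are the rules of thumb quoted above, not measurements.

SAS3_LANE_GBPS = 12   # 12 Gb/s per SAS3 lane
LANES = 8             # assumes both 4-lane miniSAS-HD cables are cabled up
HDD_GBPS = 1.5        # generous ceiling for a 7200 RPM drive
SSD_GBPS = 6.0        # a SATA SSD is capped by its own 6 Gb/s link

LINK_GBPS = SAS3_LANE_GBPS * LANES  # 96 Gb/s of raw line rate to the expander

def link_utilization(num_hdd: int, num_ssd: int) -> float:
    """Fraction of the raw link the drives could demand at full tilt."""
    demand = num_hdd * HDD_GBPS + num_ssd * SSD_GBPS
    return demand / LINK_GBPS

print(f"24 HDDs:          {link_utilization(24, 0):.0%}")   # ~38%
print(f"16 HDDs + 8 SSDs: {link_utilization(16, 8):.0%}")   # ~75%
```

Even the all-spinner case leaves plenty of headroom; it only starts to get interesting once a good chunk of the bays are SSDs.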
 

Chris Moore
Hall of Famer | Joined: May 2, 2015 | Messages: 10,080
The fastest mechanical drive I have read specs for has a peak transfer rate of 230 MB/s, and if I recall correctly, the Samsung 850 tops out at around 530 MB/s. There is some loss to overhead, but you could connect around 16 SSDs before you would get close to "filling the pipe" - and you need someplace to send all that data before it matters. What kind of network interface are you thinking about using to connect this server to the network, so someone can take advantage of all that glorious speed?
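
Working that out with the MB/s figures quoted here (and assuming two 4-lane SAS3 cables with roughly 20% lost to encoding and protocol overhead - an estimate, not a spec figure):

```python
# "How many drives fill the pipe?" using the MB/s figures above.
HDD_MBPS = 230    # fastest 7200 RPM drive mentioned in the thread
SSD_MBPS = 530    # rough sequential peak for a Samsung 850 Pro

LANES, LANE_GBPS = 8, 12
USABLE_MBPS = LANES * LANE_GBPS * 1000 / 8 * 0.8   # ~9600 MB/s after ~20% overhead (assumption)

print(USABLE_MBPS / HDD_MBPS)   # ~41 HDDs to saturate the link
print(USABLE_MBPS / SSD_MBPS)   # ~18 SSDs to saturate the link
```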
 

beezel
Dabbler | Joined: Nov 7, 2016 | Messages: 33

Thanks, that really helps. We're using 10 GbE for either NFS or iSCSI; we're not exactly sure which way to go yet. Traditionally we've run iSCSI over multiple 1 GbE connections - we used iSCSI for the multipathing and round-robin support. I'd really like to be able to use more native FreeNAS features though, like snapshotting, so NFS over a single 10 GbE link might be better. Any thoughts on that while we're talking about it?
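
One more bit of napkin math on the network side: a single 10 GbE link is only on the order of 1.2 GB/s usable, so it's the NIC, not the SAS link, that the drives will be waiting on. A tiny sketch (the 5% overhead figure is just an assumption):

```python
# How quickly a single 10 GbE link becomes the bottleneck.
NIC_MBPS = 10_000 / 8 * 0.95   # ~1190 MB/s usable, assuming ~5% TCP/framing overhead
HDD_MBPS = 230                 # per-drive figure quoted earlier in the thread

print(NIC_MBPS / HDD_MBPS)     # ~5 drives of sequential throughput saturate 10 GbE
```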
 