
Don't be afraid to be SAS-sy ... a primer on basic SAS and SATA

Status
Not open for further replies.

Stilez

FreeNAS Experienced
Joined
Apr 8, 2016
Messages
256
Important point to add. Get good SAS cables (Adaptec, some Dell).

You'd think that if a SATA drive works on a SAS card + cable, then a SAS drive will. Not necessarily so. I got a cheap(ish) 45cm SAS cable for my 9211-8i off Amazon. It worked fine with any HBA and any SATA drive. The SAS drive I got it for wouldn't spin up on any system. Normally I would have sworn that showed it was a dud drive. It wasn't. The cable was fine for SATA but not good for SAS. (I don't understand how that can be, either!)

Worth noting for people new to SAS.

CONSIDER YOUR CABLES AS WELL!
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,552
Important point to add. Get good SAS cables (Adaptec, some Dell).

You'd think that if a SATA drive works on a SAS card + cable, then a SAS drive will. Not necessarily so. I got a cheap(ish) 45cm SAS cable for my 9211-8i off Amazon. It worked fine with any HBA and any SATA drive. The SAS drive I got it for wouldn't spin up on any system. Normally I would have sworn that showed it was a dud drive. It wasn't. The cable was fine for SATA but not good for SAS. (I don't understand how that can be, either!)

Worth noting for people new to SAS.

CONSIDER YOUR CABLES AS WELL!
The connection is physically different. A SAS drive has additional pins for data. I can see why it would not work on the SATA breakout cable.

 

Stilez

FreeNAS Experienced
Joined
Apr 8, 2016
Messages
256
The connection is physically different. A SAS drive has additional pins for data. I can see why it would not work on the SATA breakout cable.
You may have misunderstood. It was the SATA drives on a SAS breakout cable that worked, while the SAS drive on the same SAS cable didn't - and this replicated across different SAS HBAs and different systems. In each case the SAS drive wouldn't work on any breakout or any system, but any SATA drive on any breakout and any port would work. Usual interpretation: dud SAS drive. Actual fault: dud cable. Why the dud SAS cable still worked with SATA drives - no idea.
 

mka

FreeNAS Experienced
Joined
Sep 26, 2013
Messages
107
I'm looking for a Mini SAS / SFF-8643 to 4x SATA cable with a somewhat daisy-chain-like design: a common cable conduit, with each SATA port/cable a few cm longer than the last. Do such cables exist? It would make installation in my case much easier.
 

Stux

FreeNAS Wizard
Joined
Jun 2, 2016
Messages
4,166
I'm looking for a Mini SAS / SFF-8643 to 4x SATA cable with a somewhat daisy-chain-like design: a common cable conduit, with each SATA port/cable a few cm longer than the last. Do such cables exist? It would make installation in my case much easier.
1) You won't find one with common conductors, since each lane has to have its own pair of conductors.
2) I can imagine that one with different-length conductors for each port is possible, but I've never seen one.
 

mka

FreeNAS Experienced
Joined
Sep 26, 2013
Messages
107
1) Of course - I meant a sheath around the internal wires.
2) Too bad...
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,552
I'm looking for a Mini SAS / SFF-8643 to 4x SATA cable with a somewhat daisy-chain-like design: a common cable conduit, with each SATA port/cable a few cm longer than the last. Do such cables exist? It would make installation in my case much easier.
I have never seen one, except in mass-produced systems where they are making thousands that are all the same and they know ahead of time exactly how long each cable needs to be. The generic ones you can buy have all the breakout legs the same length, because the maker has no idea how long you need each cable to be.
 

rslocalhost

Newbie
Joined
Jan 1, 2018
Messages
23
Something you may want to add to the top post is a note about the power-disable feature. I have a thread where Chris helped me solve it, but basically, newer SAS drives may not spin up if you power them from a SATA plug (through an adapter cable, in my case). You have to use a Molex-to-SATA adapter, or get an adapter cable that has Molex plugs like leoj3n posted about earlier. Here's a link to the thread.
 

sfcredfox

FreeNAS Experienced
Joined
Aug 26, 2014
Messages
318
Since PCIe 3.0 x8 is limited to 7880 MB/s, you could hook up 36 drives to a single 8-port controller without noticing performance issues on any of those drives.
Can anyone expand on this for enterprise SSDs? They presumably get a higher throughput than 125-150 MB/s, so I'm trying to get a realistic idea of how many SSDs you could throw on a SAS expander before you tap out the SAS connection.

Edit:
I'm having a hard time deciding what numbers to base any math on. Some places suggest you can get 2 Gbps of throughput out of 6G SSDs vs 1.25 Gbps on platter; what's the right way to base the math?

Also, I guess the question is based on getting the most throughput without oversubscribing the path.
If you wanted a low-latency, high-IOPS config, it wouldn't matter as much if you oversubscribe the SAS link, right? You could even put 6G SSDs in a 3G enclosure doing that, I guess. Would just suck when you want sequential transfers.
 
Last edited:

Johnnie Black

FreeNAS Guru
Joined
May 10, 2017
Messages
749
how many SSD's you could throw on a SAS expander
Depends on whether it's a SAS2 or SAS3 expander, with SAS2 or SAS3 SSDs:

SAS3 expander with SAS3 devices will have max bandwidth of 4800MB/s with single link, double that with dual link.
SAS2 expander or SAS3 expander with SAS2 devices will max at 2400MB/s per link.

Those are the max theoretical values; there's always some protocol overhead, which in this case should be around 8%.

You'll then need to consider the HBA bandwidth e.g., PCIe 2.0 or PCIe 3.0.
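For anyone who wants to sanity-check these figures, here's a rough back-of-envelope sketch in Python. The per-lane rates (6 Gb/s ≈ 600 MB/s, 12 Gb/s ≈ 1200 MB/s after encoding), the x4 wide-port assumption, and the ~8% overhead estimate all come from the numbers above - treat it as a ballpark, not a spec.

```python
# Rough usable bandwidth for a SAS wide port, using the figures quoted above.
# Assumes an x4 wide port; "dual link" doubles the lane count to x8.

def link_bandwidth_mb_s(sas_gen: int, dual_link: bool = False, overhead: float = 0.08) -> float:
    """Approximate usable MB/s for a SAS2 or SAS3 wide port after protocol overhead."""
    per_lane = {2: 600, 3: 1200}[sas_gen]  # SAS2 = 6 Gb/s ~ 600 MB/s, SAS3 = 12 Gb/s ~ 1200 MB/s
    lanes = 8 if dual_link else 4
    return per_lane * lanes * (1 - overhead)

print(link_bandwidth_mb_s(3))                  # SAS3 single link: ~4416 MB/s
print(link_bandwidth_mb_s(2, dual_link=True))  # SAS2 dual link: ~4416 MB/s
print(link_bandwidth_mb_s(2))                  # SAS2 single link: ~2208 MB/s
```

Divide the result by your per-drive estimate to get a drive count; the 8% overhead figure is an estimate, so round down.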
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,552
Also, I guess the question is based on getting the most throughput without oversubscribing the path.
If you wanted a low-latency, high-IOPS config, it wouldn't matter as much if you oversubscribe the SAS link, right? You could even put 6G SSDs in a 3G enclosure doing that, I guess. Would just suck when you want sequential transfers.
It might be a good idea for you to start a build thread where you give some indication of what you are trying to accomplish. If you give a good title and problem description, you might find that someone else has already fought the same battle and devised a strategy.
 

sfcredfox

FreeNAS Experienced
Joined
Aug 26, 2014
Messages
318
You'll then need to consider the HBA bandwidth e.g., PCIe 2.0 or PCIe 3.0
In this case, let's assume PCIe 3.0 (7880 MB/s possible).

SAS2 expander or SAS3 expander with SAS2 devices will max at 2400MB/s per link
So how would you divide that between the drives? 2400 MB/s possible divided between how many drives @ a certain transfer rate? What would you use as a typical MB/s for an SSD?

Edit:
It might be a good idea for you to start a build thread where you give some indication of what you are trying to accomplish. If you give a good title and problem description, you might find that someone else has already fought the same battle and devised a strategy.
My system use is very complicated, so I'm just trying to learn how to properly do the math, and what accepted rates are. I keep reading articles that mention wildly different accepted transfer rates for SSDs.
 
Last edited:

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,552
In this case, let's assume PCI 3.0 (7880 possible)
You would saturate the SAS controller before you saturate the PCIe bus, unless you are using a 16-port SAS controller.
So how would you divide that between the drives? 2400 MB/s possible divided between how many drives @ a certain transfer rate? What would you use as a typical MB/s for an SSD?
If you don't account for overhead, it would only allow you to connect 8 drives at 6 Gb/s speed, but there is always overhead, so you can potentially connect twice that many drives; 24 would be the maximum, and you would certainly fill the pipe. The drives would have some idle time waiting for the SAS controller to move the data, but not much.

The SAS controller would be the choke point, not the PCIe bus.

https://www.supermicro.com/products/system/2U/2028/SSG-2028R-E1CR48N.cfm
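To make the choke-point claim concrete, here's a quick sketch. It assumes an 8-port SAS2 HBA and reuses the 7880 MB/s PCIe 3.0 x8 figure quoted earlier in the thread:

```python
# Which saturates first: the HBA's SAS ports or its PCIe 3.0 x8 slot?
pcie3_x8_mb_s = 7880    # PCIe 3.0 x8 figure quoted earlier in the thread
per_port_mb_s = 600     # one 6 Gb/s SAS2 lane ~ 600 MB/s
hba_ports = 8           # e.g. a -8i / -8e controller

hba_max_mb_s = hba_ports * per_port_mb_s   # 4800 MB/s across all 8 ports
bottleneck = "SAS controller" if hba_max_mb_s < pcie3_x8_mb_s else "PCIe bus"
print(bottleneck)  # SAS controller
```

With 16 SAS2 ports the math flips (16 × 600 = 9600 MB/s > 7880 MB/s), which is why the 16-port case is the exception above.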
 

sfcredfox

FreeNAS Experienced
Joined
Aug 26, 2014
Messages
318
over-saturate the SAS controller before you over-saturate the PCIe bus unless you are using a 16 port SAS controller
(I'm referring to a SAS 9207-8e in this example, a dual-port 6G HBA.)
And that's because the 4 lanes only support up to 4800 MB/s; the only way to bring the PCIe bus into the mix is maxing out both ports @ 4800 MB/s, right?
Edit: I guess you already answered that above by mentioning it would need to be a 16-port, not an 8-port.

What are you using as your SSD drive speeds? Is that typical or accepted as the norm?
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,552
My system use is very complicated, so I'm just trying to learn how to properly do the math, and what accepted rates are. I keep reading articles that mention wildly different accepted transfer rates for SSDs.
The differences are probably down to the different hardware models the various writers are using, because each has its own specs. Then there are the things that can only be estimated, like how much overhead (unknowable factors) will keep a drive from performing at the speed the manufacturer said it would.
What are you using as your SSD drive speeds? Is that typical or accepted as the norm?
You would need to look at a particular drive and see what the manufacturer says it will do. I usually figure it will really do between 50 and 75% of that. The manufacturer specs are always measured under ideal conditions so they can get the best numbers; real world is always less.
So, if they say it will do 550 MB/s, you probably want to figure it will really do around 225 MB/s. It is always better to have extra than not enough.
If you use 24 SSDs and use both ports of that controller to link to the expander backplane, I would expect the SAS controller to be completely busy moving data, and the drives might have a little idle time. Fewer drives and you might see the SAS controller have the idle time while the drives are completely busy. It all depends on the drives you are looking at and what their real-world performance is under the workload you throw at them.
Now, you will also need a network interface that can support that kind of data rate.
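Putting those estimates together, here's a rough sizing sketch. The 550 MB/s spec, the 50% derating, and the 8% protocol overhead are the example figures from this thread, not measurements of any real drive:

```python
# How many derated SSDs fill a SAS2 dual-link (2 x4 ports) path to an expander?
link_mb_s = 2 * 2400               # dual SAS2 x4 links, theoretical
overhead = 0.08                    # protocol overhead estimate from earlier in the thread
usable_mb_s = link_mb_s * (1 - overhead)

spec_mb_s = 550                    # manufacturer sequential spec (example drive)
realistic_mb_s = spec_mb_s * 0.5   # derate to ~50% of spec for real workloads

drives_to_fill = usable_mb_s / realistic_mb_s
print(f"~{drives_to_fill:.0f} drives saturate the link")  # ~16 drives
```

Under these assumptions, around 16 such drives already saturate the dual link; with a heavier derating (drives delivering less of their spec), the count rises toward the 24 mentioned above.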
 
Last edited:

Johnnie Black

FreeNAS Guru
Joined
May 10, 2017
Messages
749
So how would you divide that between the drives? 2400 MB/s possible divided between how many drives @ a certain transfer rate? What would you use as a typical MB/s for SSD?
Discounting the overhead, you'll have around 2200 MB/s with a single link (or 4400 MB/s if using dual link), so divide that by the number of SSDs you want.
 

ere109

FreeNAS Experienced
Joined
Aug 22, 2017
Messages
151
Thank you very much, @jgreco . I've been playing with FreeNAS for a year now, have read this page twice before, and it was lost on me. I just ordered a new SuperChassis with a TQ backplane, and this helped clarify a lot of pieces that have been floating around. Looks like I'll need an expander to round out my system, and now I know.
 

Octopuss

FreeNAS Experienced
Joined
Jan 4, 2019
Messages
190
I got an LSI 9217 card, and without even any drives plugged in, the heatsink is so hot I almost burned my finger.
Is this normal? Are the chips supposed to be this hot?
 