
Don't be afraid to be SAS-sy ... a primer on basic SAS and SATA


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
With the introduction of SAS 12Gbps, it seems like "it's time" to do a braindump on SAS.

Work in progress, as usual.

History

By the late '90s, SCSI and PATA were the dominant technologies for attaching disks. Both were parallel-bus, multiple-drop topologies, and this kind of sucked. SATA and Serial Attached SCSI (SAS) evolved from them, using serial point-to-point links in a hub-and-spoke design.

Early SATA/150 and SATA/300 were a bit rough and had some issues, as did SAS 3Gbps. You probably want to avoid older controllers, cabling, expanders, etc. that don't support 6Gbps, because some of that gear has "gotchas" in it. In particular, a lot of it has 2TB size limitations. Most 3Gbps hard drives are fine, though.

Similarities, Differences, Interoperability

SAS and SATA operate at the same link speeds and use similar cabling. SAS normally operates at a higher voltage than SATA and can run over longer cabling.

SAS and SATA use different connectors on the drive. The SATA drive connector has a gap between the signal and power sections, which allows separate power and data cables to be easily connected. The SAS drive connector does not have a gap, and instead has a second set of pins on top. This second set of pins is the second (redundant) SAS port. There are pictures of the top and the bottom of the drive connector.

SATA drives can be attached to a SAS port. Electrically, the SAS port is designed to allow attachment of a SATA drive, and will automatically run at SATA-appropriate voltages. Physically, the SAS backplane connector has an area that will allow either the gapless SAS or the gapped SATA connector to fit. See picture of SAS backplane socket.

SAS drives are incompatible with SATA ports, however, and a SATA connector will not attach to an SAS drive. Don't try. The gap is there to block a SAS drive from being connected to typical SATA cabling, or to a SATA backplane socket.

When a SATA drive is attached to a SAS port, it is operated in a special mode using the Serial ATA Tunneling Protocol (STP).

SATA drives are inherently single-ported, meaning that they can only be attached to one thing at a time. SAS devices, however, are usually dual-ported. This means that, electrically, there are two ports on the single SAS connector. One is the primary and one is the secondary. The secondary port may be supported by a backplane or enclosure to allow the attachment of a second host, or to allow multiple paths back to a host for a high-availability configuration.

Some people use a special device called an interposer to take an inexpensive SATA drive and make it look like a nearline SAS drive (usually to get multipathing). Don't do this. They're crummy, just another thing to break.

The primary takeaway: You can connect SATA drives to an SAS port and it is expected to work. You cannot connect SAS drives to a SATA port. That absolutely won't work.

Cabling

As already noted, single-lane internal SAS cables are virtually identical to SATA cables. The difference is that SAS cabling can be longer; SATA is limited to 1 meter. Since SATA drives are commonly attached to SAS ports, it is best to use cables less than 1 meter long if at all possible.

However, most SAS deployments involve larger numbers of disks, and SAS has some special connectors used to reduce wiring and aggregate lanes together.

For SAS 6Gbps, this is often the SFF8087 (internal, "Mini SAS") or SFF8088 (external). Four 6Gbps lanes give you a total capacity of 24Gbps over a single SFF8087 connector. Some newer boards use SFF8643 ("Mini SAS HD") for SAS 6Gbps.

For SAS 12Gbps, this is the SFF8643 (internal, "Mini SAS HD") and SFF8644 (external) connector. Again, four lanes give you 48Gbps over a single SFF8643 connector.

A multilane connector may be broken into its four individual lanes using a breakout cable. For example, if you get an SAS HBA, it probably comes with one or two SFF8087's on it, but you may want to directly attach hard drives. A breakout cable allows this; both SFF8087-to-single-SAS and SFF8643-to-single-SAS breakout cables are available.

Also, in some scenarios, a mainboard may offer discrete SAS ports which you desire to aggregate into a multilane cable, and so reverse-breakout cables are available as well.

Internal connectors can be transformed into external connectors using an adapter plate. This allows you to create servers using storage in more than one chassis. This is "not for beginners" but the concepts aren't hard.

It is possible to mix 6Gbps and 12Gbps SAS. Just as with SATA, significant effort has been put into backwards compatibility.

SAS Expanders

A SAS expander essentially takes a SAS multilane connection and allows the attachment of additional SAS devices. These devices all share the available bandwidth of the SAS multilane connection. SAS expanders can be cascaded as well. Consider a topology of three SAS expanders (the diagram 4_catc_1.gif in the original post): the first one only distributes to the second and third, and the second and third each attach to hard disks. Modern expanders typically have enough channels that you wouldn't need to cascade them for just this small number of disks. A typical modern expander might have 36 lanes, allowing 24 disks, two upstream four-lane host connections, and a downstream four-lane connection to another expander.

There are advantages and disadvantages to expanders. A primary advantage is cabling simplicity: if you have a 24 drive chassis with a backplane that uses an expander, you need only a single SFF8087 to attach from the backplane to the HBA. The two main downsides are that those 24 drives then share the 24Gbps that's available on a SFF8087, and that in some cases some specific SATA disks have been known to not play nicely and have caused problems for other attached devices on a SAS expander.

As a matter of throughput, a typical modern hard drive can push 125-150MBytes/sec (that's about 1-1.25Gbps) so if you load up 24 disks * 1.25Gbps, you do exceed the 24Gbps that the multilane is capable of. This, however, assumes that you are doing sequential access to all drives simultaneously. That is unlikely at best.
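
To put numbers on that, here's a quick sketch of the arithmetic (the 150MBytes/sec-per-drive and 4 x 6Gbps figures are from the paragraph above; the function is just for illustration):

```python
# Back-of-the-envelope oversubscription check for drives behind a SAS multilane link.
def oversubscription(drives, mb_per_sec_per_drive, lanes=4, lane_gbps=6):
    drive_gbps = drives * mb_per_sec_per_drive * 8 / 1000  # MB/s -> Gbps
    link_gbps = lanes * lane_gbps                          # e.g. 4 x 6Gbps = 24Gbps
    return drive_gbps / link_gbps

# 24 drives at 150MB/sec each behind one SFF8087 (4 x 6Gbps):
print(oversubscription(24, 150))  # 1.2 -> ~20% oversubscribed, but only at full sequential load
```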

The picture changes for SSD, and expanders may not be a good idea for use with large numbers of SSD's if you are expecting high throughput.

SAS expanders can come pre-installed on a backplane, or can be purchased as separate devices. The separate devices often come on what appears to be a PCIe card, but this is only to take advantage of mainboard power. An expander such as the Intel RES2SV240 may be attached anywhere convenient inside a chassis and powered via a Molex power plug. If you have a free PCIe slot, of course, that is a great place to put it too.

Supermicro backplanes (TQ, A, BE16, BE26)

Supermicro offers backplanes for many of their chassis in a variety of configurations.

The TQ option brings each individual bay out to an individual SAS connector. This is straightforward and nonthreatening to those who are unfamiliar with multilane. However, twenty-four individual cables are a bad idea when you suspect a bad cable and have to dig through them all.

The A option is the best generalized option. It is the same as the TQ except that it brings groups of four bays out to a single SFF8087. The SFF8087 is a latching connector and is therefore substantially safer than the individual cables of the TQ. For a 24-drive chassis, then, there will be six SFF8087 connectors on the backplane. You must connect all of them to something, or the corresponding bays will be dead. You can attach them to three eight-port HBA's (such as three IBM ServeRAID M1015's) for a high-performance configuration that allows full 6Gbps on all slots. You could also attach them to an SAS expander, but if so, why not just buy a backplane with an expander?

The BE16 (or 12Gbps BE1C) option brings out the attached bays as a single SFF8087. For a 12-drive SATA array, this is an ideal choice because there is no contention on the 24Gbps link and the cabling is stupid-simple. Very attractive option. For a 24-drive SATA array, I still think this is probably just fine because you're not likely to actually hit contention issues.

The BE26 (or 12Gbps BE2C) option adds a secondary expander onto the attached bays, making the SAS secondary ports available. This is useless on a SATA array, but if you're deploying SAS drives and you want the multipath capabilities, this is your beast.

External Shelves

External shelves fall into two general categories, ones with controllers and ones with expanders. Do not try to use one with a RAID controller built in. They'll just be problematic under ZFS. An external drive shelf that has an SAS expander in it, however, is very straightforward and may be attached in a manner similar to any other SAS expander.

Note that external shelves introduce a significant risk in the form of power catastrophes. If your shelf powers off but your server doesn't, this can be destructive to the pool.

Sidebands

SAS multilane cables may also include support for sideband signalling, or you may have discrete cabling for such. This is a way for the backplane and the RAID controller, or mainboard, to indicate status, such as failed drive indication. For example, a RAID controller with sideband support and a compatible backplane can support features like "Identify Drive" or "Drive Fail" to identify a specific drive. In a reverse breakout cable scenario, four single SAS lanes from a mainboard plus an SGPIO header might connect to a single SFF8087. I2C is also an option. This is discussed somewhat further at ftp://ftp.seagate.com/pub/sff/SFF-8448.PDF

This isn't generally useful in FreeNAS, which lacks software support for this murky and often arcane area of hardware design. It should still be possible to manually use "sas2ircu" or "sas3ircu" to help identify drive bays on a properly configured LSI HBA based system, for example "sas2ircu 0 display" and "sas2ircu 0 locate X:Y ON" to activate the locate LED.
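
If you want to script that, here's a minimal sketch (assumptions: an LSI HBA at controller index 0, sas2ircu somewhere in the PATH, and the enclosure:bay numbers taken from the display output; the locate() helper is hypothetical):

```python
import subprocess

def locate(enclosure, bay, on=True, controller=0):
    """Toggle a drive bay's locate LED via LSI's sas2ircu utility."""
    state = "ON" if on else "OFF"
    subprocess.run(
        ["sas2ircu", str(controller), "locate", f"{enclosure}:{bay}", state],
        check=True,
    )

# First list controllers/enclosures/bays to find the drive you care about:
subprocess.run(["sas2ircu", "0", "display"], check=True)

locate(2, 5)            # light the LED on enclosure 2, bay 5
locate(2, 5, on=False)  # ...and turn it off again
```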

HBA Crossflashing

HBA cards generally begin life as low-end RAID cards, and may be able to do things such as RAID5, but without caching. ZFS is incredibly demanding on I/O, and needs to be able to see drives directly for SMART. While there are many RAID cards that will connect drives to your server, such as those supported by the CISS driver, many do not perform as needed under heavy loads or adverse conditions. Some, like the 3Ware 9K's, are known to stall the controller during certain error conditions, potentially dropping drives out of an array.

Please do not make the mistake of thinking that you can pick any random thing that has PCIe on one side and SATA plugs on the other and use it to attach drives to your NAS. Not only should you pick an LSI HBA, but you also need to crossflash it to IT mode with the firmware that matches the current FreeNAS driver. This combination gives you the same setup that has billions of aggregate problem-free run-hours on other FreeNAS systems. You don't really want to be the guinea pig for that odd RAID card and find out how it shreds your pool when something bad happens.

There is a much more comprehensive sticky about this topic: "What's all the noise about HBA's, and why can't I use a RAID controller?"

HBA Airflow

While the LSI HBA's crossflashed to IT mode are generally a great choice, there is a failure mode. Your typical HBA dissipates 10-15 watts, because it has a little CPU on it. These cards are designed to be placed in servers that typically have great front-to-back airflow. If you place them somewhere with little to no airflow, they will bake. And if they get hot enough, they may start to corrupt data in transit. This is really bad for ZFS, because the HBA typically handles all writes to the pool, so it can corrupt writes to all drives simultaneously, making for unrecoverable data. This is unusual but has been seen to happen. Please make sure you get airflow over your HBA.
 

Attachments

c-sff8087-4sb.jpg (17.7 KB)

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
Very cool!
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
The 12Gbps BExC expander backplanes feature two SFF-8644 connectors for host uplink and another two SFF-8644 for downlink to another backplane.

Since PCIe 3.0 x8 is limited to about 7880MB/s, you could hook up 36 drives to a single 8-port controller without noticing performance issues on any of those drives. Best example is this system: www.supermicro.com/products/system/4U/5048/SSG-5048R-E1CR36L.cfm
and the respective JBOD chassis for reference: http://www.supermicro.com/products/chassis/4U/847/SC847E1C-R1K28JBOD.cfm
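
As a sanity check on that 7880MB/s figure (a quick sketch; PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, and the 36-drive count is from the post above):

```python
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> usable bytes/sec per lane
lane_mbps = 8e9 * (128 / 130) / 8 / 1e6   # ~984.6 MB/s per lane
x8_mbps = 8 * lane_mbps                   # ~7877 MB/s, matching the figure above

print(round(x8_mbps))       # 7877
print(round(x8_mbps / 36))  # ~219 MB/s available per drive with 36 drives streaming,
                            # comfortably above what a spinning disk can sustain
```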
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,461
Very interesting, thanks for the writeup! So let me make sure I understand this correctly. Right now I'm using a SuperMicro 826TQ chassis. I have individual SATA cables from four of the motherboard SATA ports to the backplane, and breakout cables from an M1015/9211-8 going to the remaining 8 ports. If I were to replace the TQ backplane with a BE16 backplane (which probably isn't practical, but just for the sake of discussion), it sounds like I'd be able to run all 12 disks using a single SFF-8087 cable from the M1015 to the backplane. Nothing connected to the motherboard ports, and I could connect the other SAS port on the M1015 to a back-panel connector for future expansion to a drive shelf. Is it really that simple?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
You're mistaken:

(which probably isn't practical,

It really is (supposed to be) that simple and practical.

Best practice is to check with Supermicro support to find out whether the backplane is actually compatible. I haven't had to do lots of backplane swaps for different technology on the Supermicros.

That process can be a real PITA with some chassis manufacturers, but Supermicro seems like a company where someone who actually had to deploy servers for a living worked with the engineers who designed their stuff; replacements are not "beginner level" work, but they can be done in the field rather than in the shop, so it ain't bad.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,461
I was thinking more economically practical--I saw what looked like the right backplane on eBay, but it cost about as much as I paid for my entire chassis. I might be better off, if I wanted to go this route, to just get the chassis with the BE16 backplane (and if I'm getting a replacement chassis anyway, maybe one with more bays...). But leaving finances aside, it sounds like this should work. I don't know how likely I am to outgrow 12 bays any time soon, but it's definitely something to consider.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,461
If I'd bitten the bullet 2 weeks ago and bought 6-TB WD Reds (for $260), I'd be a little annoyed at this. As it is, though, cool!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
If I'd bitten the bullet 2 weeks ago and bought 6-TB WD Reds (for $260), I'd be a little annoyed at this. As it is, though, cool!

Nah, they're actually shingled, so they have to be written in large blocks, much like SSDs. You can't have your cake and eat it, yet.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Which some people consider perfect for ZFS.

Unfortunately, they don't make it clear what size the pages (for lack of a better term) are, so it could either work fine with ZFS' defaults or be a fragmented mess of catastrophic proportions.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
what size the pages (for lack of a better term)

I hereby deem them shingles.

are, so it could either work fine with ZFS' defaults or be a fragmented mess of catastrophic proportions.

I hear there's a vaccine for it. ;-)

But seriously, it seems to fit in with the continued evolution of hard disks towards what might be called "nearline" storage if that term hadn't already been adopted by marketers looking to make their low-end SAS drives sound less attractive.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I look forward to other people trying them. But I'm sure if they were on the shelf locally, one would come home. Unfortunately I'm feeling more prejudiced against Seagate than usual these days. Consumer grade "recycled" tech is tough for me to get excited about. But the magic is the pressure it will put on the other guys to drop $/TB. THAT I LOVE.

Go team shingle. ;)
 

j_r0dd

Contributor
Joined
Jan 26, 2015
Messages
134
Does anybody have any bad experience with the Lian-Li SATA backplanes? I am going to purchase a PC-Q26B once it is back in stock in the US, but to be honest I'm not sure if I can trust their backplanes. Maybe I'm wrong. Just not sure if I should purchase the extra Lian-Li backplanes for the case or ditch them altogether. Maybe somebody has experience installing Supermicro backplanes in a non-Supermicro chassis?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Does anybody have any bad experience with the Lian-Li SATA backplanes? I am going to purchase a PC-Q26B once it is back in stock in the US, but to be honest I'm not sure if I can trust their backplanes. Maybe I'm wrong. Just not sure if I should purchase the extra Lian-Li backplanes for the case or ditch them altogether. Maybe somebody has experience installing Supermicro backplanes in a non-Supermicro chassis?

The ones they sell for (and include with) that case are not made for hot plugging and will cause trouble if you try to do so. That's the only catch.
 

j_r0dd

Contributor
Joined
Jan 26, 2015
Messages
134
The ones they sell for (and include with) that case are not made for hot plugging and will cause trouble if you try to do so. That's the only catch.
Thanks. I had more of a concern for overall stability. Drives dropping because of a shoddy backplane would not be enjoyable. The ease of swapping out a failed drive would be nice though!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Thanks. I had more of a concern for overall stability. Drives dropping because of a shoddy backplane would not be enjoyable. The ease of swapping out a failed drive would be nice though!
Well, nobody's complained yet, and they're just simple passthroughs. Nothing to really get wrong.
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
There are advantages and disadvantages to expanders. A primary advantage is cabling simplicity: if you have a 24 drive chassis with a backplane that uses an expander, you need only a single SFF8087 to attach from the backplane to the HBA. The two main downsides are that those 24 drives then share the 24Gbps that's available on a SFF8087, and that in some cases some specific SATA disks have been known to not play nicely and have caused problems for other attached devices on a SAS expander.

As a matter of throughput, a typical modern hard drive can push 125-150MBytes/sec (that's about 1-1.25Gbps) so if you load up 24 disks * 1.25Gbps, you do exceed the 24Gbps that the multilane is capable of. This, however, assumes that you are doing sequential access to all drives simultaneously. That is unlikely at best.

Hi jgreco,

Great reading, helped a lot understanding SAS expanders :)

From my understanding of reading this, the only thing I can think of that would be constrained by the maximum throughput in this scenario would be a ZFS scrub. Does this sound correct, or am I wrong in this regard?

In regard to SATA drives that may not be compatible with a SAS expander, would this be a risk for newer SATA drives that are designed for NAS storage?

Cheers,

Craig
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Well, a ZFS scrub, a resilver, and anything else that reads data at maximum speed from your pool. In practice, ZFS usually doesn't seem to max out the system anyway.

SATA drives are *supposed* to be compatible but in practice sometimes things that are supposed to work don't. This isn't so much a matter of whether drives are designed for NAS storage as it is a matter of how well they conform to spec. Lots of people have used lots of different SATA drives just fine with SAS, expanders included.

My suspicion is that this has gotten substantially better as the number of companies manufacturing drives has dwindled.
 
Joined
Jul 13, 2013
Messages
286
Thank you thank you thank you! You (well, this post of yours) are my reward for following the rule (and also best practice) of searching before posting.

Having read this, I'm now more confident than before that my solution to expanding the capacity of my low-use NAS is some kind of "shelf", and I'm starting to get more familiar with the terminology in this area. Still a lot to learn, but getting started is a big step.
 