Recommendations for non-LSI HBAs? Or HBAs that don't overheat without active cooling?

rdfreak

Dabbler
Joined
May 22, 2019
Messages
12
A while ago I bought an LSI 9300 HBA with two MiniHD connectors. That HBA was a bit glitchy, presumably because of the lack of active cooling, which I understood is basically required for a non-datacenter computer. And I couldn't make use of any fan on the HBA, because even the thinnest one (1cm) would have been blowing air literally onto the PSU. Not only was installing it in the chassis difficult, it didn't make a difference.

So, my question is, are there any SAS HBAs that don't require active cooling? Which I think could mean any non-LSI HBAs? Said HBA should make use of 8 PCIe lanes at most. Not sure if it matters, but I'm aiming to make full use of a 10 Gbps NIC, and I don't want the HDD pool to be a bottleneck.
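
For reference, here's my rough math on the throughput side (assuming ~200 MB/s of sustained sequential throughput per drive, which is just a guess on my part):

```python
# Back-of-the-envelope: how much HDD sequential throughput is needed to
# keep a 10 Gbps NIC busy? The 200 MB/s per-drive figure is an assumption,
# not a measurement, and it ignores ZFS/RAIDZ overhead and random I/O.

nic_gbps = 10
nic_mb_per_s = nic_gbps * 1000 / 8        # ~1250 MB/s of line rate
per_drive_mb_per_s = 200                  # assumed sustained sequential rate

print(nic_mb_per_s / per_drive_mb_per_s)  # ~6.25 drives' worth of throughput
```

So on paper even a modest pool should be able to feed the NIC for sequential work; random I/O is another story.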

Also, is there any way this adapter could be used for a pool: https://c-payne.com/products/pcie-slimsas-host-adapter-x8-to-8i-straight
I'm aware it's not a controller, but if it's attached to a PCIe x8 slot, would TrueNAS be able to identify the HDDs attached to it via the SlimSAS output?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So, my question is, are there any SAS HBAs that don't require active cooling?
Nope.
Which I think could mean any non-LSI HBAs?
Nope.
Also, is there any way this adapter could be used for a pool: https://c-payne.com/products/pcie-slimsas-host-adapter-x8-to-8i-straight
I'm aware it's not a controller, but if it's attached to a PCIe x8 slot, would TrueNAS be able to identify the HDDs attached to it via the SlimSAS output?
Don't call the connector SAS-anything, that only serves to create confusion. It's SFF-8654 8i and it carries eight lanes' worth of differential pairs, nominally good enough for PCIe 4.0 or SAS 12Gb/s (eight lanes in either case).
That doesn't mean the adapter will magically support SAS disks - it won't. You would use that adapter to connect PCIe devices using a cable - that means mostly SSDs, with a suitable SFF-8654 to U.2 or SFF-8643 cable, or all sorts of weird and wonderful risers for niche applications.
If you want two NVMe SSDs, that adapter would be suitable, provided that the motherboard can bifurcate the x8 slot into x4/x4. It will not help in any way if you want SAS.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
active cooling, which I understood is basically required for a non-datacenter computer. And I couldn't make use of any fan on the HBA, because even the thinnest one (1cm) would have been blowing air literally onto the PSU

This is a misunderstanding of the situation. Active cooling is not required. Airflow, on the other hand, is.

In a typical rack mount, you have a fan bulkhead between the front drive bays and the rear mainboard section. This causes a strong front-to-back airflow, as long as you have a well-vented mounting bracket on the rear end.

I couldn't make use of any fan on the HBA, because even the thinnest one (1cm) would have been blowing air literally onto the PSU

Mounting a fan on the HBA is generally foolish; you are causing a tiny fan to bake, and when it inevitably fails in a few years, your HBA will toast, which is really bad, because it is one of the things that can destroy a ZFS pool (HBA feeding semirandom gibberish out to the pool). What you really want is to have ideally at least two high quality fans pushing air across the PCIe card area. You want the static pressure inside the case to be higher than outside the case as well, and then the bracket to be something like the Supermicro BKT-0066L / 0174L with the large easy-flow holes to encourage airflow along the card and out the back.
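
To put a rough number on "how much air", here's the usual first-order estimate; the 15W figure for the HBA ASIC is an assumption on my part, not something off a datasheet:

```python
# First-order airflow estimate: CFM ~= 1.76 * watts / delta_T (deg C),
# which is just Q = P / (rho * cp * dT) for sea-level air, converted to CFM.
# The 15 W HBA dissipation here is assumed, not a datasheet number.

def cfm_required(watts: float, delta_t_c: float) -> float:
    """Airflow (CFM) needed to carry `watts` away with a `delta_t_c` air temperature rise."""
    return 1.76 * watts / delta_t_c

print(round(cfm_required(watts=15.0, delta_t_c=20.0), 1))  # ~1.3 CFM
```

The absolute volume of air is trivial; the hard part is making sure that trickle of air actually flows across the card instead of stagnating behind it, which is exactly what the bulkhead fans and vented brackets accomplish.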

So, my question is, are there any SAS HBAs that don't require active cooling?

So again I think the insertion of the word "active" corrupts the question. They all require cooling. None of them require active cooling. Most of them require a well-designed cooling strategy.

Which I think could mean any non-LSI HBAs?

Well, we're more than ten years into this project, and credible alternatives have not appeared. The driving forces behind the HBA selection logic are generally documented in this article:


If we look realistically at the big picture, SATA has died an untimely death due to the inability of the industry to agree to a 12Gbps SATA standard; they didn't even try that hard. They more-or-less decided (correctly) that it didn't matter for HDD, and for SSD, there was 12Gbps SAS. But that in turn died an untimely death due to NVMe. Back before the doomball started rolling, we had, at least: Adaptec, PMC-Sierra, Microsemi, 3Ware, LSI, Broadcom, ASMedia Technology, and several others. Through various mergers and acquisitions, this once diverse list of players has imploded, and there really isn't a bright future for the SATA/SAS chipset business. So we also don't expect there to be new magic solutions.

If you are going SATA-only, there might be an argument to be made to try an ASMedia controller. They are known to be decent as long as you don't get a knockoff or one that involves a SATA port multiplier.

 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I think all those are under the Microchip roof these days.
jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think all those are under the Microchip roof these days.

Yes, though I don't know that any of them are doing anything of relevance.


No, 3ware was sold off by AMCC; the 3ware RAID group was merged into LSI, who rapidly killed the 3ware product line. AMCC then continued its slow, proud death march to irrelevance for a number of additional years. You might remember AMCC as the company that went by a variety of names, including "AppliedMicro", which caused a boatload of confusion with "AMD". They had famously picked up a bunch of PowerPC/PowerISA intellectual property and engineering talent from IBM back in the early 2000s.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yes, though I don't know that any of them are doing anything of relevance.
Microchip has been investing meaningfully in the SAS space, but their only apparent major customer is HPe, who clearly felt that their line of servers would benefit from crappier SAS than Dell and Lenovo. They seem to have adopted Adaptec's legacy of good hardware with awful software while simultaneously aping LSI/Broadcom's strategy of eye-watering prices and frequent new products with few relevant changes - never mind actual improvements.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Microchip has been investing meaningfully in the SAS space, but their only apparent major customer is HPe, who clearly felt that their line of servers would benefit from crappier SAS than Dell and Lenovo. They seem to have adopted Adaptec's legacy of good hardware with awful software while simultaneously aping LSI/Broadcom's strategy of eye-watering prices and frequent new products with few relevant changes - never mind actual improvements.

Ah, I guess that explains why Microchip has made my life hell here on the forums explaining to all the HP folks why their CISS-based HBAs and RAID controllers don't work well.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
@rdfreak I remember having a conversation with you in the past about some creative PCIe splitting devices from that same seller. If you're having cooling challenges then ultimately that's something that would need to be solved for any HBA - they do all have controllers with heatsinks and expect some degree of airflow.

They don't necessarily need "active cooling" as outlined in @jgreco 's post - but sufficient linear airflow. Do you have a model number or pictures of the case design to illustrate the PCIe area arrangement?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One thing came up recently here in the forums: someone had an LSI SAS/SATA controller that was running hot and throwing occasional errors. He removed the heat sink from the LSI chip, thoroughly cleaned the old thermal compound off the chip and heat sink, then applied new (and potentially better quality) thermal compound to the chip and re-attached the heat sink.

This solved the problems the user was having, and he jokingly said that he would probably have to repeat the process in some years (the amount of years I do not recall).


Lots of home & small office TrueNAS users buy used LSI SAS/SATA controllers (some with other branding, like IBM, HP or Dell), which means those cards might be more than a few years old. So replacing the thermal paste / compound might make them last longer and work better.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
the amount of years I do not recall

Repasting of devices with low quality paste (which generally includes manufacturer-applied pastes) should be considered after two to seven years; there's so much variation in pasting quality that you are better off just redoing it with a high quality paste.

High quality pastes like Arctic Silver 5 are definitely good for at least five years, and get more effective through their first few years, as the material cures and goes through heat/cool cycles. Observationally, I've pulled heatsinks off 10 year old gear where the material was still tacky, not dried out, and was still working fine.

So my suggestion would be to just repaste it with a high quality paste like AS5 and then probably never have to worry about it again.
 

rdfreak

Dabbler
Joined
May 22, 2019
Messages
12
Wow, so many replies, and only one of them comes from a non-moderator :grin:

Well, Adaptec are a no-go. Or rather, they have nothing better to offer - according to their datasheets, they require similar airflow to LSI.

@jgreco point taken on the airflow strategy. I'm not sure whether my chassis is good enough.

@HoneyBadger that memory recall is very impressive :grin:. The case is a Silverstone DS380:
[Image: rear view of the Silverstone DS380]


The motherboard is a mini-ITX one with only one PCIe slot, which is positioned at the lower of the two PCI brackets of the chassis. The upper bracket is unoccupied, and that's how I had space to mount a small fan on top of the LSI heatsink. As you can imagine, the fan blew air straight into the PSU. I don't know whether the chassis has enough airflow. I read that the DS380 isn't particularly good at cooling the HDDs, but there is a workaround I'm fine with. I don't know whether it will do a good job of supplying sufficient airflow to the HBA.

@Arwen I did replace the thermal compound on that SAS HBA when I tried fixing it with a fan, but it didn't help :/. In fact, I think it's very likely that HBA was faulty somehow, but it's too late to speculate about it anyway; it was returned to Amazon months ago. It's just that I still have an Exos 18TB self-encrypting SAS drive I can't find anyone to resell to in my country (maybe I'll get lucky after a few weeks on Amazon), so I thought the quickest course of action would be to have a pool with 7 SATA + 1 SAS HDD. That SAS HDD will be replaced sooner or later :rolleyes:, and I can either get a SATA controller or remain with the SAS HBA when that time comes. But buying a SAS HBA from eBay only to find out it's not usable in my build seems very risky, hence the post and the quest to find a cooler-running HBA.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
If we look realistically at the big picture, SATA has died an untimely death due to the inability of the industry to agree to a 12Gbps SATA standard; they didn't even try that hard. They more-or-less decided (correctly) that it didn't matter for HDD, and for SSD, there was 12Gbps SAS. But that in turn died an untimely death due to NVMe.

SAS is still under active development; the 24Gbps SAS-4 specification was completed back in 2017, and products using it are now becoming available, including SSDs. I'm not sure SAS-4 will support 10 meter cable lengths like SAS-3, but I expect something similar. I've heard PCIe 4.0 is supposed to support 15 meters, but I've never seen anything longer than about 1 meter. I very much doubt we will see a pure PCIe NVMe monster multi-rack storage config anytime soon.

SAS-5 is reportedly under development as well, targeting 45Gbps...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
SAS is still under active development,

"Active development" implies active development.

In 1993, we had 10Mbps ethernet. In 1996, we had 100Mbps. In 1999, we had 1Gbps. In 2002, we had 10Gbps.

That's active development. Three order-of-magnitude jumps within a decade, and we're now looking at 800Gbps two more decades later. Which is admittedly a slowdown, but still a practical, evolving technology.

SAS is going nowhere fast. It hit the dialup modem problem. We went from 2400bps to 57.6Kbps in about 14 years, but then hit a limit -- inter-CO telephony was underpinned by T1 technology with 64Kbps channels, and it became damn near impossible to get "faster" without using an alternative technology stack such as DSL or ISDN. This turned out to be not-a-problem as everyone wanted high-speed Internet, so now a quarter of a century later, we have VDSL2, cable DOCSIS3, various PON FTTH solutions, and some smaller number of quirky solutions such as Starlink. But dialup modems are nearly dead. The driving forces behind them are gone. Once we hit 57.6Kbps, there was no significant room in between there and the theoretical 64Kbps limit to salvage.

This is really how I see the HDD marketplace. We've hit some practical limits. My hard drives today are not substantially faster than the ones I had three decades ago in terms of seek or rotation. Only sequential I/O and capacity have improved.

Many are betting on the HDD market to die off as flash overtakes it, so I'm also not expecting any white knight solution to charge in to replace SAS. More likely, something truly useful like NVMe-oF will roll in and put SAS out of its misery. I'm fine with that; SCSI has been around a hell of a long time, and it would be good to shake things up.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
SAS is on borrowed time and chugging along on the inertia of 15 years of ubiquity in servers, plus enterprise-level support for SATA. The proof is that tri-mode SAS hardware has gained zero traction, with seemingly nobody adopting U.3 backplanes. Actually, I went and researched that last sentence and I was wrong. HPe seems to be using U.3, and they're the only ones as far as I can tell (who would have thought that an insanely-expensive combined PCIe switch and SAS expander would not be outweighed by slightly simpler backplane PCBs? /s).
You might also have noticed that SAS 12Gb/s rolled out fairly quickly, whereas SAS 24Gb/s is still something of a unicorn. The reality is that there is no meaningful market between cheap (SATA) and fast (NVMe), and SAS 12Gb/s is already plenty fast for most applications with SATA disks.

Many are betting on the HDD market to die off as flash overtakes it, so I'm also not expecting any white knight solution to charge in to replace SAS. More likely, something truly useful like NVMe-oF will roll in and put SAS out of its misery. I'm fine with that; SCSI has been around a hell of a long time, and it would be good to shake things up.
There's some talk of NVMe for HDDs (with a single lane per disk, I'd expect), and the spec is being amended to support slower media like hard disks.
The benefits would be a single storage stack for all formats, getting rid of SAS controllers and standardizing the expander role on PCIe switches, which are more widely available and cheaper. Performance also benefits, but it's a nearly-trivial gain.
The major downside is the need for new backplanes and new HDD chipsets, which means a lot of engineering hours and either less compatibility or higher costs during the transition period. It's tempting to say that SATA 6Gb/s is just good enough forever.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
who would have thought that an insanely-expensive combined PCIe switch and SAS expander would not be outweighed by slightly simpler backplane PCBs? /s

I would have thought a swappable daughtercard. I believe Supermicro already does daughtercards for the SAS expander on its backplanes. One would have thought that HPE would have gone one better.

You might also have noticed that SAS 12Gb/s rolled out fairly quickly, whereas SAS 24Gb/s is still something of a unicorn.

Sorta. SAS has been on a 4 or 5 year cadence, with SAS 3Gbps in 2004(?), 6Gbps in 2009(?), 12Gbps in 2013, and 24Gbps in 2017. SAS-5 is off-target by a few years though. But there hasn't been a big storm of uptake even on 12Gbps. Lots of people are upgrading away from 6Gbps due to stuff like driver obsolescence issues in ESXi, not really any fundamental flaw in the tech or bandwidth limits.

no meaningful market between cheap (SATA) and fast (NVMe)

That's the thing that's killing SAS.

SAS 12Gb/s is already plenty fast for most applications with SATA disks.

And even that is more like shelf interconnect.

It's tempting to say that SATA 6Gb/s is just good enough forever.

Unless we see some NVMe-for-HDD real quick, I'd bet on this outcome. Maybe even if we do see NVMe-for-HDD.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
This is really how I see the HDD marketplace. We've hit some practical limits. My hard drives today are not substantially faster than the ones I had three decades ago in terms of seek or rotation. Only sequential I/O and capacity have improved.

I'm going to disagree. Three decades is a long time, it's hard to even remember what 1993 was like. The seek times are substantially improved, not vastly, but have to be considered in conjunction with the improvements in seek error rate. On rotation... You can get 15k RPM if you want it. The first 10k RPM drive was introduced in 2003 at then-massive 36 & 74 GB capacities. Rotational speeds above 15k RPM could likely have been developed, but flash nixed the need. Consider... Many DCs today have flywheel UPSes in vacuum bottles on magnetic bearings spinning at 40,000 RPM.

The capacity expansion alone since 2003 is kind of understated, given we have COTS 20TB drives these days. Three decades ago I was amazed at being able to buy a 1.05GB SCSI drive on a college student budget for my used, USENET-scavenged $500 Sun 3/50. Nowadays I shoot 1GB/day of photos on a weekend outing...

Many are betting on the HDD market to die off as flash overtakes it, so I'm also not expecting any white knight solution to charge in to replace SAS. More likely, something truly useful like NVMe-oF will roll in and put SAS out of its misery. I'm fine with that; SCSI has been around a hell of a long time, and it would be good to shake things up.

Barring some unexpected problem in the 1nm node area (or WWIII...), I expect magnetic HDDs to survive through at least 2030, after which flash will completely overtake the market. If SAS continues to play a role, it will be in the larger deployments where you have hundreds if not thousands of devices connected to a server, and the radius of signal integrity needs to be 5 to 10 meters just to reach the JBOD enclosure. The wild cards are how many PCIe lanes the next couple of generations of server chips will have, how far they can reach, and who's going to solve all the switching issues and give us PCIe "expanders" at rack+2 distances?

I doubt SCSI itself will die before 2040... It's kind of written into the NVMe spec. I'm going to stick my neck out and suggest it's heading for "The COBOL of Storage" status. :smile:
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
And just to kick things back on topic... Everything seems to be getting integral fans these days. I expect HBAs will follow shortly...
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I suspect that 12Gbps & 22.5Gbps SAS (the latter of which tends to be referred to as 24Gbps) will hang around in the Enterprise storage space for bulk storage using spinning rust (aka hard disk drives).

This is not so much because the drives themselves need such fast interfaces; it's that you can get away with fewer uplink lanes per expander. SAS expanders act like a funnel, which is bad if you put too many high-speed devices, like SSDs, behind a small number of lanes, especially if those lanes run at a lower speed, like 6Gbps.

But 22.5Gbps / 24Gbps SAS allows the narrow end of the funnel to run faster, per lane, than the wide end, and thus supports more connected devices running at 6Gbps (SAS or SATA) or 12Gbps (SAS).
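
To put rough numbers on the funnel (the per-lane payload rates are nominal, and the 250 MB/s per hard drive is just an assumption of mine):

```python
# Funnel arithmetic: how many HDDs can a 4-lane expander uplink feed at
# full sequential speed? Nominal per-lane payload rates (6G/12G use 8b/10b,
# SAS-4 uses 128b/150b); 250 MB/s per HDD is an assumed figure.

lane_mb_s = {"SAS 6Gbps": 600, "SAS 12Gbps": 1200, "SAS 22.5/24Gbps": 2400}
hdd_mb_s = 250      # assumed sustained sequential rate per drive
uplink_lanes = 4

for gen, per_lane in lane_mb_s.items():
    uplink = per_lane * uplink_lanes
    print(f"{gen}: x4 uplink ~{uplink} MB/s -> ~{uplink // hdd_mb_s} drives at full tilt")
```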


In some ways it is sad to see SAS fading away.


On a side note, faster RPM disks came out for the Enterprise earlier than 2003. I know; I had a 2GB Cheetah on my old SS20:

1996 – Seagate ships the first 10,000-rpm hard drive, the Cheetah
2000 – Seagate ships the first 15,000-rpm hard drive, the Cheetah X15
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm going to disagree. Three decades is a long time, it's hard to even remember what 1993 was like.

Maybe for you. I remember the '90s quite well, as I was busy arguing heretical points such as disabling atime updates and striping disks at a large stripe size to optimize concurrency, both well documented on the mailing lists; alas, my Usenet posts are now very hard to find.

The seek times are substantially improved, not vastly, but have to be considered in conjunction with the improvements in seek error rate.

Substantially improved? A Seagate Barracuda ST32550N (7200 RPM) was seeking at about 8ms. Seagate claims a "3.4/3.8" rating for their last 15K drive. To my mind, that's "double, maybe" for rotation, "half, maybe" for seek. Thirty years. That's not a substantial improvement in that time period.
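
Put in small-random-I/O terms, using just the figures above and the textbook seek-plus-half-a-rotation approximation:

```python
# Rough random-I/O rate from average seek plus half a rotation, using only
# the numbers quoted above (8 ms / 7200 RPM vs. roughly 3.6 ms / 15000 RPM).

def random_iops(avg_seek_ms: float, rpm: int) -> float:
    rotational_latency_ms = 0.5 * 60_000 / rpm    # half a revolution, on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(random_iops(8.0, 7200)))    # ~82 IOPS for the old Barracuda
print(round(random_iops(3.6, 15000)))   # ~179 IOPS for a late 15K drive
```

Call it a doubling of random I/O per spindle over three decades, which is exactly the kind of stagnation I'm talking about.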

The capacity expansion alone since 2003 is kind of understated, given we have COTS 20TB drives these days.

Sure, but I already mentioned that. Also the transfer speeds. It takes just a handful of seconds to transfer 1GB of data on a modern drive. I still have a pile of Seagate Hawks in inventory somewhere. :smile:
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
On a side note, faster RPM disks came out for the Enterprise earlier than 2003. I know; I had a 2GB Cheetah on my old SS20:

1996 – Seagate ships the first 10,000-rpm hard drive, the Cheetah
2000 – Seagate ships the first 15,000-rpm hard drive, the Cheetah X15

I stand corrected. But those were really expensive, and not common outside of workstations. Also, I'm not sure the SS20 actually shipped with those; they work, obviously, but they were more of an UltraSPARC config.
 