What can I replace an Ableconn PCIe 2.0 to SATA III card with?

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
My motherboard (full info in sig below) has only 6 SATA ports and I need 11. I've been using an Ableconn PEX10-SAT PCIe 2.0 to SATA III card for the 5 additional SATA ports I need, but I've tracked various problems with drives spontaneously dropping out of pools and CAM timeout errors down to this card. The errors occur mainly under heavy load, FWIW. Here's the latest one:

Code:
dmesg | grep -e ada11

ada11: Serial Number XXXXXXXX
ada11: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada11: Command Queueing enabled
ada11: 9537536MB (19532873728 512 byte sectors)
(ada11:ahcich28:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 08 60 fe 3f 40 8c 04 00 00 00 00
(ada11:ahcich28:0:0:0): CAM status: Command timeout
(ada11:ahcich28:0:0:0): Retrying command
ada11 at ahcich28 bus 0 scbus24 target 0 lun 0
ada11: <ZZ00000000000-ZZZZZZ SC60> s/n XXXXXXXX detached
GEOM_MIRROR: Device swap0: provider ada11p1 disconnected.
(ada11:ahcich28:0:0:0): Periph destroyed
ada11 at ahcich28 bus 0 scbus24 target 0 lun 0
ada11: <ZZ00000000000-ZZZZZZ SC60> ACS-3 ATA SATA 3.x device
ada11: Serial Number XXXXXXXX
ada11: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada11: Command Queueing enabled
ada11: 9537536MB (19532873728 512 byte sectors)
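
For anyone chasing the same issue: a quick way to see whether the timeouts cluster on the drives behind the add-on card is to tally them per device. This is just a sketch; the sample lines below stand in for real output, which you'd get by piping `dmesg` in directly instead of the `printf`:

```shell
# Tally "CAM status: Command timeout" events per ada device.
# Sample lines are embedded for illustration; on the live box,
# replace the printf with `dmesg`.
printf '%s\n' \
  '(ada11:ahcich28:0:0:0): CAM status: Command timeout' \
  '(ada11:ahcich28:0:0:0): CAM status: Command timeout' \
  '(ada7:ahcich24:0:0:0): CAM status: Command timeout' |
  sed -nE 's/^\((ada[0-9]+).*Command timeout.*/\1/p' |
  sort | uniq -c | sort -rn
```

If the top offenders all sit on the card's ports rather than the motherboard's, that points at the card.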


I obviously can't go on like this, but I don't know exactly what I can replace the Ableconn card with. I see things about LSI cards and expander cables here, but I don't know the first thing about them.

Would this card be appropriate, and would it work with my motherboard? (It's an MSI H270 PC MATE MS-7A72, which among other slots has a PCIe 3.0 x16 slot plus a slot simply labeled "PCI1", while the LSI card requires an "x8 PCI slot".) And is the card compatible with 10 TB drives? (I haven't been able to find that information.)

And would two of these cables be the right choice?

Thanks in advance.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I have used this model card several times and found it both economical and reliable

Drive Controller: Dell H310 6Gbps SAS HBA LSI 9211-8i P20 IT Mode
https://www.ebay.com/itm/162834659601
Price: US $56.55

To go with the HBA above, you will need:

Drive Cables: Lot of 2 Mini SAS to 4-SATA SFF-8087 Multi-Lane Forward Breakout Internal Cable
https://www.ebay.com/itm/371681252206
Price: US $12.99

You can pay more, but the items on Amazon are not better.
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
Thanks very much!
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79

Thanks so much! I've read that now, as well as this post, but there's one thing I can't figure out regarding flashing the LSI cards.

It seems many, many people jump through a million hoops to hack together a bootable DOS USB stick in order to flash the card with the LSI IT mode firmware, but FreeNAS has sas2flash built in. Is there any reason at all not to just SSH into my FreeNAS server, download the LSI firmware, and upload it with sas2flash -o -f {firmware_file.bin}?

Also, it's not clear from the oodles of forum threads if FreeNAS's sas2flash can erase the firmware, a la sas2flash.efi -o -e 6. Is this actually necessary, and if so, can FreeNAS do it?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Thanks so much! I've read that now, as well as this post, but there's one thing I can't figure out regarding flashing the LSI cards.

It seems many, many people jump through a million hoops to hack together a bootable DOS USB stick in order to flash the card with the LSI IT mode firmware, but FreeNAS has sas2flash built in. Is there any reason at all not to just SSH into my FreeNAS server, download the LSI firmware, and upload it with sas2flash -o -f {firmware_file.bin}?
It usually doesn't work. I think it is because the driver is active. I am not sure how to do it, but if you could deactivate the device first, you could probably do it that way. I have only flashed around six of these cards myself, but I find it quite easy to boot from a USB drive with the software on it.
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
It usually doesn't work. I think it is because the driver is active. I am not sure how to do it, but if you could deactivate the device first, you could probably do it that way.

That makes sense. No idea how to deactivate it, though.

I have only flashed around six of these cards myself, but I find it quite easy to boot from a USB drive with the software on it.

Only six, huh? A veritable newbie! ;-)
 


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thanks so much! I've read that now, as well as this post, but there's one thing I can't figure out regarding flashing the LSI cards.

It seems many, many people jump through a million hoops to hack together a bootable DOS USB stick in order to flash the card with the LSI IT mode firmware, but FreeNAS has sas2flash built in. Is there any reason at all not to just SSH into my FreeNAS server, download the LSI firmware, and upload it with sas2flash -o -f {firmware_file.bin}?

Also, it's not clear from the oodles of forum threads if FreeNAS's sas2flash can erase the firmware, a la sas2flash.efi -o -e 6. Is this actually necessary, and if so, can FreeNAS do it?

If you already have a card that's in IT mode but just needs to go from 15 or 16 to 20.00.07.00, the FreeNAS tool will do that. I suggest disconnecting the drives before you do this. I am paranoid but I am in the biz and paranoia has paid off for me more times than I care to count.

If you need to crossflash the card, the FreeNAS tool is not suitable IMHO.
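
One way to tell which situation you're in is to read the version line out of `sas2flash -list`. The sketch below uses hypothetical sample output (a SAS2008 card already in IT mode); on FreeNAS you would pipe the real command's output through the same filter:

```shell
# Extract the firmware version from `sas2flash -list` output.
# The output here is a made-up sample; on an actual system, pipe
# `sas2flash -list` itself through the same awk.
list_output='Controller                     : SAS2008(B2)
Firmware Product ID            : 0x2213 (IT)
Firmware Version               : 16.00.01.00
BIOS Version                   : 07.27.01.01'
printf '%s\n' "$list_output" | awk -F' *: ' '/Firmware Version/ {print $2}'
```

If that prints something below 20.00.07.00 and the Product ID already says IT, you're in the simple upgrade case.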
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
If you already have a card that's in IT mode but just needs to go from 15 or 16 to 20.00.07.00, the FreeNAS tool will do that. I suggest disconnecting the drives before you do this. I am paranoid but I am in the biz and paranoia has paid off for me more times than I care to count.

If you need to crossflash the card, the FreeNAS tool is not suitable IMHO.

Thanks so much for the heads-up! You've likely saved me quite a bit of time!
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
Just a quick update, mostly for anyone searching.
  • The FreeNAS sas2flash util is only really good for getting info on your card. It failed at flashing and erasing.
  • The LSI 9211-8i card was of course easy to install physically. Just make sure you push the breakout cables all the way in. This is easier to do before you install the card, as it requires a good bit of force.
  • Contrary to the product description and tons of self-assured people who proclaim, "Of course that card is already in IT mode -- it's an HBA!", the card wasn't in IT mode, so I had to flash it.
  • Flashing the card was actually quite easy, once I figured out what to do.
  • Figuring out what to do is stunningly difficult, as there are a huge number of posts, blogs and tutorials out there, and almost all of them are many, many years out of date and thus are full of bad information, dead links, and so on.
  • How to flash?
    • Download the appropriate IT firmware and installer zips from the Broadcom site (LSI no longer has its own site; it was acquired by Avago, which is now Broadcom).
    • Extract the actual firmware (2118it.bin in my case) and flashing util (sas2flash.efi) to a FAT32 USB thumbdrive. It doesn't have to be bootable.
    • Insert USB thumbdrive, (re)boot, go into UEFI (BIOS), select "Boot into EFI shell" or similar.
    • Type fs0: to mount the USB drive (the filesystem label may differ on your system).
    • Type sas2flash.efi -o -e 6 to erase the card's existing firmware region; crossflashing will fail without this step.
    • Type sas2flash.efi -o -f 2118it.bin (changing the final argument to the name of your firmware file).
    • Wait 10-15 seconds and you're done!
    • Reboot.
If I had to do this again, it would take less than two minutes now that I know what to do.
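
Condensed into a single session, it looks like this at the EFI shell prompt (a sketch of the steps above; fs0: and the firmware filename will vary with your USB stick and card, and `reset` reboots from the EFI shell):

```
Shell> fs0:
fs0:\> sas2flash.efi -o -e 6
fs0:\> sas2flash.efi -o -f 2118it.bin
fs0:\> reset
```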
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
  • How to flash?
    • Download the appropriate IT firmware and installer zips from the Broadcom site (LSI has no site; I assume it was bought out).
    • Extract the actual firmware (2118it.bin in my case) and flashing util (sas2flash.efi) to a FAT32 USB thumbdrive. It doesn't have to be bootable.
    • Insert USB thumbdrive, (re)boot, go into UEFI (BIOS), select "Boot into EFI shell" or similar.
    • Type fs0: to mount the USB drive (this could be different in some people's case).
    • Type sas2flash.efi -o -e 6 to erase whatever it is that has to be erased in order to be able to flash. You can't crossflash otherwise.
    • Type sas2flash -o -f 2118it.bin (changing the final argument to the name of your firmware).
    • Wait 10-15 seconds and you're done!
    • Reboot.
If I had to do this again, it would take less than two minutes now that I know what to do.

Just so you're aware, these steps are just as wrong as a lot of others. The problem is that there isn't actually a golden-path sequence of steps, because there are so many combinations of mainboards and firmware caveats. This will not work, for example, on Dell controllers that need to be taken over to LSI from the Dell firmware, because the tools to do that do not exist as UEFI binaries. The DOS tools work on some boards but not others, the UEFI tools work on some boards but not others, etc. We actually use one type of machine to do certain portions of the process (the Dell->LSI translation) and then another type of machine to do the rest.
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
Just so you're aware, these steps are just as wrong as a lot of others.

As I said in my first post, I'm using this card, not a different card. For this card, the steps I have listed are indeed the correct steps, they worked perfectly, and they save anywhere from 5 to about 15 unnecessary steps given incorrectly in tutorials around the net, some nearly a decade old.

I of course have nothing to say about the uncountable cards I didn't flash.


We actually use one type of machine to do certain portions of the process (the Dell->LSI translation) and then another type of machine to do the rest.

E-gads!

I'm curious, though -- I assume your "we" is some sort of company. If that's the case, I'm surprised you build servers with used parts presumably sourced off Ebay and the like. Is the difference between new and used that big? I guess I'm just used to seeing businesses pay huge premiums to mitigate extremely small risks, and used hardware would seem to be a pretty big one.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
As I said in my first post, I'm using this card, not a different card. For this card, the steps I have listed are indeed the correct steps, they worked perfectly, and they save anywhere from 5 to about 15 unnecessary steps given incorrectly in tutorials around the net, some nearly a decade old.

Again, they are not a guaranteed recipe for success. The trite proof being that the steps you gave won't work on a non-UEFI host. The steps "given incorrectly in tutorials around the net" are just as correct as the ones you gave; they happened to be the ones needed for those people to experience success with their particular combination of mainboard and card. If you want to argue that some of them might not be optimized and might include a superfluous step or two, you will invariably be correct, but this is just because most people do not have a dozen cards and the time and patience to find out what is actually the shortest workable sequence.

I'm curious, though -- I assume your "we" is some sort of company. If that's the case, I'm surprised you build servers with used parts presumably sourced off Ebay and the like. Is the difference between new and used that big? I guess I'm just used to seeing businesses pay huge premiums to mitigate extremely small risks, and used hardware would seem to be a pretty big one.

For my own businesses, I routinely source both new and used gear depending on various factors. There's a lot of equipment that comes to market because it was leased by its original "owner" and the lease came to an end. There is not likely to be anything wrong with the gear. I recently picked up some SolarFlare SFN-6122F with optics for $28 each from one of the recyclers who part out old lease gear. This is normally a $1000 card. In other cases, gear is used in a data center somewhere and is pulled during a refresh. For example, I recently picked up a pair of Dell PowerConnect 8024F for $400 each; these were MSRP ~$10,000 switches in 2010 and it was considered cheap AT THAT PRICE IN 2010!

So one of the things is that when it's your business, you can't just wander into the CFO's office and say "hey Frank I need $50K for new servers" -- as the guy signing the checks, I *know* that most of our workloads don't require the latest pricey CPUs and DDR4, or brand-new controllers, or enterprise-grade HDDs/SSDs. An Intel 1.2TB DC3710 runs around $620, so in RAID1 that's $1240 for 1.2TB, but I can get a WD Blue and an 860 Evo for $99 each for a ~$200 RAID1 solution. I know, based on our workloads, that these won't last as long as the Intels, but if I have to replace them in 2 or 3 years, that's still a LOT cheaper.

And I *do* the redundancy, which really does a much better job of mitigating risk than buying new does. If I buy a new switch at $10,000, it's virtually guaranteed that I'll need to do firmware upgrades, especially during the hot period right after release when there's a bug-fix build every month or two. So if my requirement is continuous uptime, I need two of them anyway. Redundancy mitigates the slightly increased risk of having older gear, which admittedly is more likely to experience a failure at some point -- but, just as with RAID1, a failure is not likely to hit both at the same time.

I also have a bunch of cheapskate customers. For example, we do the server refurb and recertification for NTP.ORG here, and because open source software projects are poorly funded, the project is largely reliant on donations, so when they got a stack of Dell R510's donated as FreeNAS hosts, you can be certain that I wasn't buying retail LSI 9211-8i's to put in them but rather finding the cheapest Dell cards that could be made to work.

The "DOA" failure rate I see on used IT gear of the types I'm willing to buy used is less than 1%, and really not much higher as the equipment ages.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I guess I'm just used to seeing businesses pay huge premiums to mitigate extremely small risks, and used hardware would seem to be a pretty big one.
The organization I work for has a problem: due to various policies and laws that must be complied with, there are very few opportunities to obtain gear that is not new. We often end up spending a lot more than we need to because we are forced to buy new when new is not needed. I completely agree with what @jgreco has said regarding used gear. I have bought as much used gear as I can for work, and all my gear at home is used, with the exception of my most recent purchase of hard drives. I have purchased used hard drives before too, but hard drives are like tires on a car: they only last so long no matter how carefully you drive, so buying used ones just shortens the useful life, and it had better be a good deal. Anyhow, I suggest used gear all the time on the forum because most of it has years of useful life remaining, and even if it fails and you need to buy another, you are still ahead cost-wise versus buying new to begin with.
As an example, the system board in my home NAS would have cost nearly $600 if I had purchased the new, equivalent model. I paid $150 for a used part. Even if I need to replace it four times, I still have not lost any money. An even better example might be the CPU: the new, equivalent model would have been near $1000, but I purchased a used, older-model processor for around $100. I have been doing IT support work since 1996, and in that time I have only seen about three processors actually fail, two of those from overclocking. I don't anticipate that the Xeon in my server will fail in the life of the system, but if it did, and I needed to spend another $100 to replace it, I would still have saved $800 versus the price of new.
It doesn't even begin to make sense not to buy used, even for a business, but many organizations have purchasing policies that prevent it. In most circumstances, buying new is just throwing money away, and I advocate against it. Sure, the companies that sell new hardware need to make money to pay for their engineering staff, but they can make that money on someone else, not me. I put in extra effort not to waste the money of the organization where I work, because they are spending tax dollars and I try to be a good steward of other people's money. Not everyone thinks that way, but there is just no reason to spend $400 for something that can be had for $40. For what, so I can have a warranty? For the cost difference, it makes more sense to buy two or three of the used item and have spares. A spare part fixes a down server faster than a warranty, every time.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For the cost difference, it makes more sense to buy two or three of the used item and have spares. A spare part fixes a down server faster than a warranty, every time.

I've been a proponent of redundancy for years. One of the reasons I departed the Sun/ATT/HP/etc ecosystem in favor of FreeBSD a quarter of a century ago is that PC hardware is priced at a fraction of server hardware (and I mean real servers, not Intel-based PC-architecture servers). It is cheaper to have two of a cheap-ish thing than one extremely expensive thing that will still occasionally have downtime due to patching or maintenance. Redundant Array of Inexpensive Servers. (Note: I subscribe to the original RAID acronym, don't bother to correct me with the "independent" crap foisted by drive manufacturers trying to steer people to RAED, E="expensive").

Part of it is that I am more willing than average businesses to do validation and testing here in-house. I don't mind setting something up, since vendors never seem to sell exactly what I want anyways, and then let it sit burning in for weeks or months to make sure it works, whereas most businesses open the vendor's cardboard and shove the gear right in a rack, and spin it up for production workloads in a day or maybe a week.

I *know*, I'm so weird.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
whereas most businesses open the vendor's cardboard and shove the gear right in a rack, and spin it up for production workloads in a day or maybe a week.
That is what they want here. They want it out of the box and working in a week or less. I kind of hate that. I got it written in the contract on the last server that we bought that the vendor was supposed to do burn-in testing before shipping it to us, but I have my doubts that they actually did it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That is what they want here. They want it out of the box and working in a week or less. I kind of hate that. I got it written in the contract on the last server that we bought that the vendor was supposed to do burn-in testing before shipping it to us, but I have my doubts that they actually did it.

Check the iLO/IPMI log? An empty log isn't proof that they DIDN'T do it, but entries there could make the case that they did.
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
Again, they are not a guaranteed recipe for success. The trite proof being that the steps you gave won't work on a non-UEFI host.

Well, that kind of went without saying, but I fear we're beating a dead horse here.

The steps "given incorrectly in tutorials around the net" are just as correct as the ones you gave; they happened to be the ones needed for those people to experience success with their particular combination of mainboard and card. If you want to argue that some of them might not be optimized and might include a superfluous step or two, you will invariably be correct...

I meant that there are some insanely convoluted things out there about flashing this card on UEFI mobos like mine. Like, 15-steps convoluted. But anyway...

For my own businesses, I routinely source both new and used gear depending on various factors. There's a lot of equipment that comes to market because it was leased by its original "owner" and the lease came to an end. There is not likely to be anything wrong with the gear. I picked up some SolarFlare SFN-6122F with optics for $28/each from one of the recyclers who parts'es up old lease gear recently. This is normally a $1000 card. In other cases, gear is used in a data center somewhere and is pulled during a refresh. For example, I recently picked up a pair of Dell PowerConnect 8024F for $400 each; these were MSRP ~~$10,000 switches in 2010 and it was considered cheap AT THAT PRICE IN 2010!

That's some brilliant savings right there! I had Ebay in mind when I asked my question (since most people seem to buy their LSI HBA cards there), but you obviously have more reliable sources. Do these kinds of places sell to the general public?

And I *do* the redundancy, which really does a much better job of mitigating risk than buying new does.

That's a very good point!

Thanks for your response.
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
Anyhow, I suggest used gear all the time on the forum because most of the used gear has years of useful life remaining and even if it fails, and you need to buy another, you are still at an advantage cost wise vs buying new to begin with.

That's quite persuasive. My worry is that sources like Ebay (and even Amazon, increasingly) have a serious problem with counterfeit tech gear, as well as broken gear. What sources do you recommend for this?

I put in extra effort to not waste the money of the organization where I work because they are spending tax dollars and I try to be a good steward of other peoples money.

Kudos to you, sir!
 