Adding a crappy little RAID controller (LSI, etc.) in front of your massive big RAID controller (ZFS) creates very bad layering. There are some potential benefits, but also significant downsides. If you can sufficiently address all the downsides, and cyberjock has discussed at least some of them, then you can do as you please, but with the caveat that "you've been warned."
I never thought my post would imply that I supported that configuration. It was more a matter of curiosity, seeing that it's a configuration chosen by an organisation that obviously put a lot of thought into it... You will have noticed that I addressed the post to you directly, since you are obviously very experienced in this matter, and not just on a theoretical basis...
To me, the main factor in *not* using a hardware RAID card, and using a simple HBA instead, is that more often than not you're stuck with a particular vendor, as they all have their proprietary ways of doing things:
I remember, a few years back, our main server (ZFS) was being backed up to a mirror machine using zpool export/import. The mirror didn't have enough onboard SATA ports, so I had used an 8-port Highpoint RocketRAID, because they were cheap enough and supported by FreeBSD without any messing around (unlike 3Ware or LSI, which always required rebuilding the kernel from source).
We had a hardware failure on multiple drives at once that made the zpool unrecoverable. I couldn't use the mirror hardware, as it didn't have the capability to handle the load. So I just took the drives out of the mirror machine, put them in the primary server, and hoped that would be the end of it...
Too bad: the drives had been set up as JBOD on the RocketRAID and weren't readable by the onboard SATA of the main server, and there was no spare slot to use the RocketRAID in...
So I had to purchase new drives and perform another zpool export/import across a gigabit link, which caused two days of downtime...
I won't ever go through that again...
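For what it's worth, this kind of pool-to-pool copy over the wire is typically done with zfs send/receive, which also avoids having to physically move disks between machines. A minimal sketch; the pool names, snapshot name, and host name are all placeholders, not anything from the setup described above:

```shell
# Take a recursive snapshot of the source pool
# (pool/snapshot/host names below are placeholders)
zfs snapshot -r tank@migrate

# Stream the whole pool hierarchy over SSH to the other machine:
#   -R on the send side replicates all descendant datasets,
#      snapshots, and properties
#   -F on the receive side forces a rollback of the target so it
#      matches the incoming stream
zfs send -R tank@migrate | ssh mirrorhost zfs receive -F backup/tank
```

Over a gigabit link the transfer is still bound by the wire speed, but at least the machine can stay up while it runs.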
As for my answer to cyberjock, I was only replying to the content of his message: that SMART couldn't be used once the drives were connected to a hardware card. That is simply not correct, and personally I've never had any issue monitoring disk health status, either via smartctl or via the proprietary RAID controller utility.
Prior to the port of ZFS to FreeBSD, my controller of choice was a 3Ware RAID controller: the 3ware utility on FreeBSD comes with a little daemon that provides a web interface, where you can set various alerts, including emails for when a particular SMART register reaches a defined value.
This has always worked just fine for my use.
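For anyone wanting the command-line route: smartmontools can query drives sitting behind most hardware RAID cards by passing a controller type with `-d`. The exact device node depends on the driver and OS, so treat the nodes below as examples rather than something to copy verbatim:

```shell
# Drive 0 behind a 3ware/AMCC controller on FreeBSD
# (9xxx-series cards show up as /dev/twa0, older 6/7/8xxx as /dev/twe0)
smartctl -a -d 3ware,0 /dev/twa0

# Drive 0 behind an LSI MegaRAID controller on Linux,
# addressed through the block device the controller exposes
smartctl -a -d megaraid,0 /dev/sda

# For ongoing monitoring, smartd can watch the same drive;
# a line like this in smartd.conf (address is a placeholder) sends
# mail when SMART health or attributes degrade:
#   /dev/twa0 -d 3ware,0 -a -m admin@example.com
```

So "no SMART behind hardware RAID" really only holds for cards whose drivers don't pass the commands through.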
And all the RAID cards I've ever used (I'm talking about real hardware RAID cards, not onboard Intel software RAID and the like) have provided one way or another to check the SMART status. But I only ever used hardware RAID controllers that I knew beforehand were fully supported by the OS they were going to be used with.
When I posted my question to you, I was focusing entirely on the performance side of things... only to get an answer about SMART, which once again triggered a long lecture about points I already entirely agree with, and which you had mentioned in your first post...
I don't want to enter into another long, fruitless argument about points that are ultimately agreed upon, just because it started from a misunderstanding, or because one party was unfamiliar with the technical terms the other was using.
For my latest system I have chosen the Supermicro X10SL7-F motherboard, which comes with its own onboard LSI controller, only because I have confirmed that it can be flashed with an alternative firmware that makes it a plain HBA. And that's how it will be used.
It's easier to set up and source than getting a second-hand IBM card off eBay that may or may not be available (plus, as a rule, I never buy any computer gear second-hand anyway).
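For reference, flashing an LSI SAS2-family controller like that one over to plain-HBA ("IT mode") firmware is done with LSI's sas2flash utility. A rough sketch only; the firmware and BIOS file names are placeholders for whatever the board vendor ships, and getting the wrong files can leave the card unusable, so check the vendor's instructions first:

```shell
# List controllers found by the utility, to confirm the adapter
# and note its current firmware (IR = RAID, IT = plain HBA)
sas2flash -listall

# Flash the IT-mode firmware and (optionally) the boot BIOS.
# File names here are placeholders for the vendor-supplied images:
#   -o  enables advanced/override mode
#   -f  firmware image to write
#   -b  boot BIOS image to write
sas2flash -o -f it_firmware.bin -b mptsas2.rom
```

Once flashed, the controller presents the disks raw to the OS, which is exactly what ZFS wants.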
Otherwise, the card I usually use is the LSI SAS 9211-4i, for when there aren't enough onboard ports and performance isn't too critical.