So... no RAID controller that we know of? LOL
Yeah, you know, the unfortunate bit about it all is that LSI makes nice hardware but crappy software.
We've got ... I want to say M1015's ... something cross-flashed to IR, running boot and direct-attached datastores for ESXi boxes, because redundancy is just a basic requirement around here. So ESXi throws a warning about the datastore being degraded. Look at it with MegaRaid Manager ... oh that's right, MRM will actually show the health as "green" on the main page; you have to log in on this awesome bit of damageware to see the actual status. Oh, look, bad disk. OK, migrate all the VMs off the box, pull the disk ... oh joy, it's a Seacrate RMA (all our important stuff is SAN/iSCSI, so I like to toss less-trustworthy disks in RAID1 as scratch storage). Put the new drive in. Wait for the migrations. Go twiddle around with MRM and ... oh look, it already started a rebuild on its own.
The cool bit? I *know* that if there had been a standby disk, it would have started rebuilding immediately. As it was, the thing doesn't wait around... the card knows a replaced disk means make it work. And it does. And it did, with no further effort.
The problem bits? Getting a useful notification via LSI's software support infrastructure. On ESXi, it turns out, that can't really be done (with LSI's MRM, that is). You have to *rely* on vSphere to notice and warn you, and getting ESXi set up correctly with the right drivers and all is a Royal Pain. Hell, just *finding* the correct drivers and files and all that is a nightmare. That day I was b****in' about it in off-topic, I lost something like a day trying to get an updated system working, document it, and get MRM working, because we need to be able to repeat the process for more ESXi boxes. And of course, even though MRM is worthless for monitoring, you still need it to chat with the RAID controller while the system is running. Ugh. But let me tell you, as it stands, if a drive were to fail, I would prefer it to fail under the LSI controller, because:
1) I'll be informed of the situation immediately (unlike FreeNAS right now)
2) Fixing the problem is literally a matter of: yank disk, cram new disk in, and maybe remember to check on it in a few hours to make sure it had a happy ending (also unlike FreeNAS right now).
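If you'd rather not depend on MRM at all, LSI's MegaCli tool can be scripted for the "am I informed immediately" part. This is just a sketch under assumptions: the binary path and the "Firmware state" output format vary by MegaCli version and platform, so verify against your own card's output first.

```shell
#!/bin/sh
# Hedged sketch: poll LSI physical-drive state via MegaCli output and
# flag anything that isn't "Online". Output format is an assumption --
# check it against your MegaCli version before trusting this.

check_pd_states() {
    # $1: raw output of something like "MegaCli -PDList -aALL"
    echo "$1" | awk -F': ' '
        /^Firmware state/ {
            if ($2 !~ /^Online/) { print "ALERT: drive state " $2; bad = 1 }
        }
        END { if (!bad) print "OK" }'
}

# From cron, something along these lines (paths are assumptions):
#   check_pd_states "$(/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL)" \
#     | grep '^ALERT' && mail -s "RAID alert" you@example.com < /dev/null
```

A rebuilding drive shows up as an alert too, which is handy for the "check on it in a few hours" step: once the rebuild finishes, the state flips back to Online and the script goes quiet.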
But of course we were talking about FreeNAS, and I don't really have any idea how that'd work out. I just want to make the point that FreeNAS has some things to work out, and with zfsd in FreeBSD 9, I'm hoping they do.
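Until something zfsd-shaped does that notification for you, the stopgap on the FreeNAS/FreeBSD side is a cron job around `zpool status -x`, which prints a single healthy-pools line when all is well. A minimal sketch, assuming stock zpool(8) output (the exact healthy string is the assumption here):

```shell
#!/bin/sh
# Hedged sketch: poor man's zfsd. Cron this and get told when
# "zpool status -x" reports anything other than healthy pools.
# The healthy-output string is an assumption from stock zpool(8).

check_zpool_health() {
    # $1: output of "zpool status -x"
    if [ "$1" = "all pools are healthy" ]; then
        echo "OK"
    else
        echo "ALERT: $1"
    fi
}

# From cron, something like:
#   msg=$(check_zpool_health "$(zpool status -x)")
#   case "$msg" in
#       ALERT*) echo "$msg" | mail -s "zpool degraded" you@example.com ;;
#   esac
```

It's nowhere near "yank disk, cram new disk in" territory, but it at least closes the "I'll be informed immediately" gap.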