So while I understand that running ZFS on anything but raw disks isn't ideal, in this particular configuration it's all I have. The drive backplane and the card are a matched pair, so I would pretty much have to scrap the entire thing, which I paid exactly $0 for, and I'm not really wanting to do that at this time.
Now I can understand SMART having issues (it worked before, so that could just be some goofy thing), but why would ZFS care, since I am presenting single disks to it, not RAID arrays? That is really what I am most curious about: from a technical standpoint, ZFS needs to utilize each disk individually, and if I'm presenting each disk individually, how does that differ from a pure JBOD configuration?
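For what it's worth, on the SMART side: smartmontools can usually talk through LSI/MegaRAID-based controllers (which this Intel module is, as far as I know) with its megaraid pass-through option. Something like this is what I'd try; the controller device name and the disk number here are guesses for my setup, not something I've verified:

    # Pull SMART data from the drive in pass-through slot 2, going through
    # the RAID controller (FreeBSD's mfi driver exposes it as /dev/mfi0)
    smartctl -a -d megaraid,2 /dev/mfi0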
I have disabled all caching on the controller, and each of the 12 disks is set up as an individual single-drive logical disk, so from the standpoint of ZFS it's 12 disks, as near as I can tell (see the sketch after the spec list for how I'd sanity-check that). Here are the specs of the hardware I have:
Dual Xeon(R) CPU E5-2603 0 @ 1.80GHz 4-core (No Hyperthreading)
128GB DDR3 RAM
Dual quad-port Intel I350 Gigabit network adapters
Intel RMS25CB080 RAID Module
(x2) 100GB Micron P400m MTFDDAK SATA 6Gbps SSDs [d0, d1]
(x12) 2TB Seagate Constellation ES ST2000NM0011 7.2K SATA 6Gbps HDDs [d2 ~ d13]
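As mentioned above, here's roughly how I've been sanity-checking that the controller really is handing the OS fourteen separate single-drive volumes. mfiutil ships with FreeBSD/FreeNAS for mfi-driven cards; whether it plays nicely with this exact Intel module is an assumption on my part:

    # List the logical volumes the controller exports; I expect fourteen
    # single-drive RAID 0 volumes (2 SSDs + 12 HDDs)
    mfiutil show volumes
    # And the physical drives sitting behind them
    mfiutil show drives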
This is a repurposed server that was "thrown out" and ended up on my shelf, and with that much horsepower I figured it would be awesome for FreeNAS. Now if having 12 single-drive RAID 0 volumes instead of 12 JBOD drives really is such a showstopper, that's interesting and deserves a much deeper dive and discussion, but it also means I'll have to figure out what HBA I can use with the backplane I have to drive the 12 disks, plus the two SSDs that I'm using for ZIL and L2ARC.
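For reference, this is roughly the pool layout I had in mind, assuming the volumes show up as mfid0 through mfid13 to match the bracketed labels in the spec list (just a sketch of the idea; FreeNAS would normally build this through the GUI, and the exact vdev split is up for debate):

    # 12 HDDs as two 6-disk RAIDZ2 vdevs, one SSD as SLOG (ZIL), one as L2ARC
    zpool create tank \
      raidz2 mfid2 mfid3 mfid4 mfid5 mfid6 mfid7 \
      raidz2 mfid8 mfid9 mfid10 mfid11 mfid12 mfid13 \
      log mfid0 \
      cache mfid1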
Otherwise, I'll have an ESXi box with an absolute crap ton of local storage, which would only be great if I had a couple of these machines, 10GbE networking, and a full vSphere license with vSAN, none of which I have.