Strangely, I replied yesterday morning, but it's not here.
So what's the odds of 2 disks failing in short order? It depends. If you setup FreeNAS, then never checked your disk status again for a year there's a very good chance one disk started failing months ago and you didn't know. This scenario has occurred to many people that build the server and then don't touch it again. Everything is fine until you lose enough redundancy, then it's game over and the server owner is left crying and upset because he lost his irreplaceable data and had no backup.
Here's what I'm getting from the command outputs...
zpool import being blank - there is no zpool data on any of the disks the system is mounting(or the zpool metadata is corrupted beyond recognition)...currently only 1 disk, ada0
gpart status - Your GPT partitions are good, but again, currently only 1 disk, ada0(da0 is your USB stick)
zpool status - No mounted pools (duh?)
camcontrol devlist - only 1 disk was detected by the system aside from your USB stick
I tend to think there's a more complex situation than what you may think/believe.
If the 2TB drive had been part of a zpool then the zpool import command would have shown something, even a broken and unmountable pool. It didn't. So I tend to think that the 2TB had no zpool, the zpool was deleted at some point in the past, or the zpool metadata is corrupted beyond the point of it being recognized as a zpool device.
Considering that only 1 disk is detected I tend to think that the other disk is failed since you said that the system has been in use for a while. Based on the output from the other commands it looks like your 1TB had all of your data and the 2TB disk was just using electricity but doing nothing useful for you.
You can try contacting one of those data recovery pros, and for a few thousand dollars they may be able to replace your disk's components and mail you back the drive so you can mount the zpool again. If they won't mail you back a "working disk" with the same platters you have almost no recovery chance whatsoever. Nobody does ZFS recovery except for very large sums of money (think $10k+) and nobody has documented actual data recovery paying for it.
To be quite honest, I think your data is done for. I don't even have ideas for commands to run to help recover your data because there are no zpools available to the system(if there had been then a zpool import would have shown the pools). Generally the problem with ZFS is finding enough disks to mount all of the vdevs or corruption causing the zpool to be unmountable. In your case you have no zpools identified.. at all.
Also, your custom tuning(in particular kmem size) should never have been done. You got a warning that it wasn't recommended, and you should have heeded that warning. I don't have solid evidence that it is the reason for your problems, but when you start getting warnings from the kernel, those are things that shouldn't be ignored unless you are a FreeBSD expert. It could be that you starved your system and it caused corruption leading to your current situation. I just don't know.
ZFS recommends 6GB minimum, which you definitely don't have more than 4GB since you are choosing to use the x86 version.
Overall, lots of reasons why the entire design was a bad choice... insufficient hardware, improper tuning, and you don't appear to have utilized any redundancy available with ZFS on FreeNAS.
Sorry, but I got no further ideas for you. :(
So what's the odds of 2 disks failing in short order? It depends. If you setup FreeNAS, then never checked your disk status again for a year there's a very good chance one disk started failing months ago and you didn't know. This scenario has occurred to many people that build the server and then don't touch it again. Everything is fine until you lose enough redundancy, then it's game over and the server owner is left crying and upset because he lost his irreplaceable data and had no backup.
Here's what I'm getting from the command outputs...
zpool import being blank - there is no zpool data on any of the disks the system is mounting(or the zpool metadata is corrupted beyond recognition)...currently only 1 disk, ada0
gpart status - Your GPT partitions are good, but again, currently only 1 disk, ada0(da0 is your USB stick)
zpool status - No mounted pools (duh?)
camcontrol devlist - only 1 disk was detected by the system aside from your USB stick
I tend to think there's a more complex situation than what you may think/believe.
If the 2TB drive had been part of a zpool then the zpool import command would have shown something, even a broken and unmountable pool. It didn't. So I tend to think that the 2TB had no zpool, the zpool was deleted at some point in the past, or the zpool metadata is corrupted beyond the point of it being recognized as a zpool device.
Considering that only 1 disk is detected I tend to think that the other disk is failed since you said that the system has been in use for a while. Based on the output from the other commands it looks like your 1TB had all of your data and the 2TB disk was just using electricity but doing nothing useful for you.
You can try contacting one of those data recovery pros, and for a few thousand dollars they may be able to replace your disk's components and mail you back the drive so you can mount the zpool again. If they won't mail you back a "working disk" with the same platters you have almost no recovery chance whatsoever. Nobody does ZFS recovery except for very large sums of money (think $10k+) and nobody has documented actual data recovery paying for it.
To be quite honest, I think your data is done for. I don't even have ideas for commands to run to help recover your data because there are no zpools available to the system(if there had been then a zpool import would have shown the pools). Generally the problem with ZFS is finding enough disks to mount all of the vdevs or corruption causing the zpool to be unmountable. In your case you have no zpools identified.. at all.
Also, your custom tuning(in particular kmem size) should never have been done. You got a warning that it wasn't recommended, and you should have heeded that warning. I don't have solid evidence that it is the reason for your problems, but when you start getting warnings from the kernel, those are things that shouldn't be ignored unless you are a FreeBSD expert. It could be that you starved your system and it caused corruption leading to your current situation. I just don't know.
ZFS recommends 6GB minimum, which you definitely don't have more than 4GB since you are choosing to use the x86 version.
Overall, lots of reasons why the entire design was a bad choice... insufficient hardware, improper tuning, and you don't appear to have utilized any redundancy available with ZFS on FreeNAS.
Sorry, but I got no further ideas for you. :(