Also, it doesn't hurt to reiterate this, straight from the btrfs wiki:
In other words, btrfs doesn't actually work if a drive fails in a mirrored setup. And RAID5/6 doesn't work at all and is still subject to the write hole.
But of course, rampant instability and a lack of proper design decisions are the hallmarks of a good product! Nobody wants a stable, reliable filesystem that protects their data.
It'd be nice, but it's not very compatible with the design of ZFS. All things considered, it's an acceptable trade-off.
random bug fixes for enterprise that will never be useful to most people using FreeNAS
Sure, nobody but enterprise will ever use any of the following features, either recently added or under development (off the top of my head, there's plenty more):
- Compressed ARC/L2ARC
- Compressed send/recv
- Native encryption (including send/recv of encrypted data, suddenly turning any ZFS backup solution into an easy way to get encrypted cloud backups)
- Persistent L2ARC (this one is very enterprise-centric)
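To make the encryption point concrete: with native encryption plus raw send, a snapshot can be replicated without the receiving side ever having the keys. A minimal sketch (the pool, dataset, and host names like tank/secure, backuppool, and backuphost are hypothetical):

```shell
# Create an encrypted dataset (prompts for a passphrase).
zfs create -o encryption=on -o keyformat=passphrase tank/secure

# Snapshot it and send the *raw* (still-encrypted) stream to a
# remote pool; the receiver stores ciphertext and never needs the key.
zfs snapshot tank/secure@backup1
zfs send --raw tank/secure@backup1 | ssh backuphost zfs receive backuppool/secure
```

Since the stream stays encrypted end to end, the remote end can be an untrusted cloud box.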
Going by that logic ZFS on FreeNAS wouldn't be considered stable either. You're always going to have bugs and issues.
ZFS actually works; btrfs doesn't, as the btrfs wiki neatly explains. Here's a non-comprehensive list of things that are standard on ZFS and cause no trouble there, but break btrfs:
- Drive failures (these render btrfs volumes unusable, even if redundancy still exists)
- Parity RAID (RAID5/6-like setup)
- Quotas
- Compression (if there's damage on a compressed disk, btrfs may crash)
- Disk replacements
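For contrast, here's roughly what a drive failure looks like on ZFS: the pool keeps serving data in a degraded state, and a replacement disk resilvers in place. A sketch with hypothetical pool and device names (tank, da3, da5):

```shell
# The pool keeps serving data in a DEGRADED state after a disk dies.
zpool status tank

# Swap in a new disk and let ZFS resilver the redundant data onto it.
zpool replace tank da3 da5

# Verify every block's checksum once the resilver completes.
zpool scrub tank
```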
Leaving aside the important detail that Google finds roughly three times as many results for "btrfs cannot import volume" as for "zfs cannot import volume", most cases of unimportable ZFS pools are caused by negligence. Nothing will save your data if the only copy of it dies along with a failing disk.
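And even when a pool genuinely won't import, ZFS offers recovery options before the data is written off. A sketch, assuming a hypothetical pool named tank last used on another system:

```shell
# List pools visible on attached disks but not yet imported.
zpool import

# Force-import a pool that wasn't cleanly exported on its old host.
zpool import -f tank

# Last resort: discard the most recent transactions and roll the
# pool back to its last consistent state (the -F "rewind" import).
zpool import -F tank
```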