Hi,
I've been using FreeNAS at my company for about three years now.
I have one server with FreeNAS-11.1-U6.3 installed,
with mirrored SSDs for the OS and spinning disks for the data.
Last week we had a crisis: the DC got hot, and we had to bring down all servers until the cooling issue was resolved.
Let me just add that I'm hosting MySQL servers, BeeGFS storage, and non-branded NFS storage in the same rack.
All servers and services came up fine once the issue was resolved, except our FreeNAS, which was reporting a failed disk and filesystem corruption (Input/Output errors).
A zfs scrub didn't fix the filesystem issues, all disks are marked as DEGRADED, and I have permanent errors that will require me to wipe the whole thing and rebuild it.
I'm telling you this because, in my opinion, ZFS is still immature. I'd like to believe that in 2019 filesystem errors shouldn't take down the whole dataset.
Code:
  pool: nfs
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 392K in 0 days 00:07:52 with 21 errors on Tue Dec 24 17:37:24 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        nfs                                             DEGRADED     0     0 25.3K
          raidz2-0                                      DEGRADED     0     0  101K
            gptid/6ca54600-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0     2  too many errors
            gptid/6da1bb59-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0    20  too many errors
            gptid/6eabf943-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0    18  too many errors
            gptid/6fb66a76-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0    36  too many errors
            gptid/70c8b367-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0     9  too many errors
            gptid/71e694dd-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0    27  too many errors
            gptid/7304cac0-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0    27  too many errors
            gptid/2318063a-f491-11e9-83b9-801844f2984a  DEGRADED     0     0     0  too many errors
            gptid/4bd7f20f-e67d-11e9-83b9-801844f2984a  DEGRADED     0     0    37  too many errors
            gptid/761d4c4b-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0     3  too many errors
            gptid/831d0608-2d19-11e9-83b9-801844f2984a  DEGRADED     0     0    14  too many errors
            gptid/7854977f-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0    10  too many errors

errors: Permanent errors have been detected in the following files:

        nfs/pma:<0x0>
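For anyone skimming the paste above: the per-disk checksum error counts can be summed from a saved copy of the status output. A minimal sketch, assuming the output is saved to a file; the file path and the two abridged sample lines here are mine, not from the pool:

```shell
# Sketch: sum the CKSUM column for the member disks from a saved copy of
# `zpool status`. The sample lines below are abridged from the output in
# this post; the /tmp path is just an example.
cat > /tmp/nfs_status.txt <<'EOF'
    gptid/6ca54600-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0     2  too many errors
    gptid/6da1bb59-3bf8-11e8-8ae1-801844f2984a  DEGRADED     0     0    20  too many errors
EOF
# Sum the fifth whitespace-separated field (CKSUM) across the gptid lines.
awk '/gptid\// { sum += $5 } END { print sum }' /tmp/nfs_status.txt   # prints 22
```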