allegiance
I have a FreeNAS-11.2-U7 server that I've upgraded, maintained, and expanded for many years now (about 10-11 years, I think). It is used for backups; its main pool is about 155 TiB. Recently - since upgrading to FreeNAS 11.2, I think - I've noticed that when I replace a drive via the GUI, the server spontaneously reboots a few minutes into the resilver. This has happened twice before this most recent time, so I'm only now recognizing it as a pattern (thankfully I don't have to replace drives often). The first two times, the resilver finished after the reboot. This time, however, it seems to have stopped trying to resilver at all. Here is my output for zpool status:
Code:
  pool: Shared
 state: DEGRADED
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub in progress since Fri Nov 12 17:22:03 2021
        209T scanned at 430M/s, 204T issued at 421M/s, 219T total
        2.47M repaired, 93.45% done, 0 days 09:54:24 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        Shared                                            DEGRADED     0     0     0
          raidz2-0                                        ONLINE       0     0     0
            gptid/61876d86-6ab4-11e9-abc8-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/481d3d50-5f82-11e9-abc8-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/6da2a48b-5ba0-11e9-abc8-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/fb6e3c21-57d9-11e9-abc8-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/f45803e2-14a3-11ec-90e3-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/71180d04-7724-11e9-abc8-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/5faf0701-b217-11e9-a490-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/792de98a-71ca-11e9-abc8-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/4b3ee8df-b54b-11e9-a490-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/6367b9f7-ae3f-11e9-a490-18a9055a8b30    ONLINE       0     0     0  block size: 512B configured, 4096B native
          raidz2-1                                        DEGRADED     0     0     0
            gptid/8f0c2427-caac-11ea-ad27-18a9055a8b30    ONLINE       0     0     0
            gptid/4e962852-c77f-11ea-ad27-18a9055a8b30    ONLINE       0     0     0
            gptid/9e7ee4e2-c530-11ea-ad27-18a9055a8b30    ONLINE       0     0     0
            gptid/007a3726-c2c2-11ea-ad27-18a9055a8b30    ONLINE       0     0     0
            replacing-4                                   UNAVAIL      0     0     0
              1655575827539873983                         UNAVAIL      0     0     0  was /dev/gptid/f34f8cd2-bf96-11ea-ad27-18a9055a8b30
              3991861371267392492                         UNAVAIL      0     0     0  was /dev/gptid/0c460337-22fc-11ec-9e27-18a9055a8b30
            gptid/b4d0391e-bd45-11ea-ad27-18a9055a8b30    ONLINE       0     0     0
          raidz2-2                                        ONLINE       0     0     0
            gptid/59ac93fd-0cd4-11ec-b252-18a9055a8b30    ONLINE       0     0     0
            gptid/7a064ba8-3b0a-11eb-bba8-18a9055a8b30    ONLINE       0     0     0
            gptid/b32b8abd-3665-11eb-b43b-18a9055a8b30    ONLINE       0     0     0
            gptid/6a89f669-34cd-11eb-a8bf-18a9055a8b30    ONLINE       0     0     0
            gptid/f01932e6-4177-11eb-bd9c-18a9055a8b30    ONLINE       0     0     0
            gptid/ef1e9517-3fd6-11eb-bb7f-18a9055a8b30    ONLINE       0     0     0
          raidz2-3                                        ONLINE       0     0     0
            gptid/8baa27ad-db7f-11eb-b116-18a9055a8b30    ONLINE       0     0     0
            gptid/bf452066-dda8-11eb-b116-18a9055a8b30    ONLINE       0     0     0
            gptid/5a59b7a1-df5d-11eb-ba8a-18a9055a8b30    ONLINE       0     0     0
            gptid/2c849e7b-e0ea-11eb-ba8a-18a9055a8b30    ONLINE       0     0     0
            gptid/34ef5940-e33f-11eb-ba8a-18a9055a8b30    ONLINE       0     0     0
            gptid/06228903-e4cc-11eb-8116-18a9055a8b30    ONLINE       0     0     0
          raidz2-4                                        ONLINE       0     0     0
            gptid/2b1dbdb8-7287-11e8-baf8-18a9055a8b30    ONLINE       0     0     0
            gptid/5556d8e4-f351-11e8-ac1b-18a9055a8b30    ONLINE       0     0     0
            gptid/2d80e50b-7287-11e8-baf8-18a9055a8b30    ONLINE       0     0     0
            gptid/2eb2e6d4-7287-11e8-baf8-18a9055a8b30    ONLINE       0     0     0
            gptid/2fe3ad53-7287-11e8-baf8-18a9055a8b30    ONLINE       0     0     0
            gptid/313ff569-7287-11e8-baf8-18a9055a8b30    ONLINE       0     0     0
          raidz2-5                                        ONLINE       0     0     0
            gptid/c038ebf2-ffc7-11e8-b344-18a9055a8b30    ONLINE       0     0     0
            gptid/c17eb943-ffc7-11e8-b344-18a9055a8b30    ONLINE       0     0     0
            gptid/c2bfb792-ffc7-11e8-b344-18a9055a8b30    ONLINE       0     0     0
            gptid/c408b336-ffc7-11e8-b344-18a9055a8b30    ONLINE       0     0     0
            gptid/c54d4d84-ffc7-11e8-b344-18a9055a8b30    ONLINE       0     0     0
            gptid/c69be2ec-ffc7-11e8-b344-18a9055a8b30    ONLINE       0     0     0

errors: No known data errors
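In case it matters, this is roughly how I intend to check for panic/crash evidence after the next one of these reboots. I'm assuming the stock FreeBSD log location and the default FreeNAS crash-dump directory; please correct me if there is a better place to look:
Code:
# Look for a kernel panic message around the time of the reboot
grep -i panic /var/log/messages

# FreeNAS normally saves kernel crash dumps here (my assumption)
ls -l /data/crash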
I replaced the failing drive about 6 weeks ago, but the pool has not shown a resilvering status since it crashed/rebooted. I've tried a clean reboot or two; still nothing. I've also tried a manual scrub (almost done as I write this), but I doubt that will fix it, since a scheduled scrub has already run since the replacement/crash. Since this pool is itself a backup, I do not have another backup of this data apart from the live copy. Given the size of the zpool I would really prefer not to lose it, but it is not irreplaceable data.
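In case it helps with suggestions, this is what I'm guessing I would have to do from a shell to get the replacement going again: detach the stale replacement member by its GUID so replacing-4 collapses back to a plain missing disk, then re-issue the replace against the new drive. The GUIDs below are the ones from my zpool status above, but the gptid for the new disk is just a placeholder, and I haven't actually run any of this yet - please tell me if this is the wrong approach:
Code:
# Confirm the replacing-4 vdev is still stuck and note the member GUIDs
zpool status -v Shared

# Detach the replacement that never resilvered (GUID from my output above)
zpool detach Shared 3991861371267392492

# Re-issue the replacement of the original failed disk (by GUID) with the
# new disk's gptid - placeholder below, and I assume the new disk would
# need to be partitioned first, the way the GUI normally does it
zpool replace Shared 1655575827539873983 gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx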
BTW, the "status: One or more devices are configured to use a non-native block size." issue has been ongoing since the early days of this machine, and I have sought help resolving it in the past to no avail. But that is not my issue today; I don't think it is related.
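For completeness, this is how I understand one can compare what ZFS recorded for the sector size against what the drives report. The device name is just an example, and the cache-file path is where I believe FreeNAS keeps it:
Code:
# ashift per top-level vdev as recorded in the pool label
zdb -U /data/zfs/zpool.cache -C Shared | grep ashift

# Sector/stripe size the drive itself reports (example device name)
diskinfo -v /dev/da0 | grep -E 'sectorsize|stripesize'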
Any help or insight would be greatly appreciated. I am by no means a FreeNAS/TrueNAS expert, but I have kept this machine running all these years with minimal issues until now, so I don't know what to do. Thank you.