DEGRADED!

dengruxi

Dabbler
Joined
Mar 12, 2020
Messages
32
I built FreeNAS on ESXi and attached the hard disks to FreeNAS directly through ESXi's RDM (raw device mapping) mode. After one of the disks failed and was replaced with a new one, every disk shows as DEGRADED, including the newly purchased one. I'm sure all three disks are intact! How can I clear the DEGRADED status? Thank you!


root@nas[~]# zpool status -v
pool: boot-pool
state: ONLINE
scan: scrub repaired 0B in 00:00:07 with 0 errors on Sat Apr 23 03:45:07 2022
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          ada0p2    ONLINE       0     0     0

errors: No known data errors

pool: raid5
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 11:59:01 with 82 errors on Tue Apr 26 01:11:48 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        raid5                                           DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            gptid/82bbcf7f-c09b-11ec-abfa-000c29a65d01  DEGRADED     0     0     1  too many errors
            gptid/76b96c6d-684c-11ea-bc57-d05099d28fdf  DEGRADED     0     0     1  too many errors
            gptid/6b0d28cd-6413-11ea-9ee9-a0369f581100  DEGRADED     0     0     1  too many errors

errors: Permanent errors have been detected in the following files:

/var/db/system/rrd-1d75fdc729874c0cb54b5da2552ea8c9/localhost/df-mnt-raid5-ss/df_complex-used.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc/cache_ratio-L2.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc/cache_ratio-arc.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_arc-misses.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_data-demand_data_hits.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_data-demand_data_misses.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_data-prefetch_data_hits.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_metadata-demand_metadata_hits.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_metadata-demand_metadata_misses.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_metadata-prefetch_metadata_hits.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_metadata-prefetch_metadata_misses.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_mu-mfu_hits.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/zfs_arc_v2/arcstat_ratio_mu-mru_ghost_hits.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-0/cpu-idle.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-0/cpu-interrupt.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-0/cpu-nice.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-0/cpu-system.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-0/cpu-user.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-1/cpu-interrupt.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-1/cpu-nice.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-1/cpu-system.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-1/cpu-user.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-2/cpu-idle.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-2/cpu-interrupt.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-2/cpu-nice.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-2/cpu-system.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-3/cpu-idle.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-3/cpu-interrupt.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-3/cpu-nice.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-3/cpu-system.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-3/cpu-user.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-4/cpu-idle.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-4/cpu-nice.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-4/cpu-system.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-4/cpu-user.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-5/cpu-interrupt.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/cpu-5/cpu-system.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/interface-vnet0.1/if_octets.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/interface-vnet0.1/if_packets.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/interface-vnet0.1/if_errors.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/interface-bridge0/if_octets.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/interface-bridge0/if_errors.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/interface-bridge0/if_packets.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/df-mnt-raid5-cs/df_complex-free.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/processes/ps_state-sleeping.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/processes/ps_state-stopped.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/df-mnt-raid5-ss/df_complex-used.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/processes/ps_state-wait.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/processes/ps_state-zombies.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/rrdcached/gauge-tree_depth.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/rrdcached/gauge-tree_nodes.rrd
raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00:/localhost/rrdcached/operations-receive-flush.rrd
raid5/.system/rrd-1d75fdc7
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I built FreeNAS on ESXi and attached the hard disks to FreeNAS directly through ESXi's RDM (raw device mapping) mode.

That's super-dangerous and specifically advised against. See, for example, the virtualization guides on this forum.

Please rebuild your system following the guidance in those guides.
Once you've built a proper server, you can migrate your data onto it. You can expect future problems if you do not.
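
For the migration itself, recursive ZFS replication is the usual route. A minimal sketch, assuming the new server's pool is named tank and it is reachable over SSH as newnas (both names are illustrative, not from this thread):

# take a recursive snapshot of every dataset in the pool
zfs snapshot -r raid5@migrate
# replicate the whole tree, properties and snapshots included, to the new box
zfs send -R raid5@migrate | ssh newnas zfs recv -F tank

The -R flag makes the copy a faithful replica of the dataset hierarchy rather than a file-level copy, so dataset properties and snapshots come across intact.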

You can try clearing the error state, but that is only a temporary fix before the next misadventure. As it is, your system volume has corrupt files in it, and, quite frankly, you should be counting your lucky stars that the damage seems limited to that. We've had lots of people lose their entire pool from using RDM. Do Not Do That.
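
For reference, this is roughly what clearing would look like. A minimal sketch, using the pool and snapshot names from the status output above; note that the permanent errors listed inside the @auto-2021-10-29_19-00 snapshot cannot be repaired in place, so that snapshot has to go before they will clear:

# destroy the snapshot that holds most of the damaged .system RRD files
zfs destroy raid5/.system/rrd-1d75fdc729874c0cb54b5da2552ea8c9@auto-2021-10-29_19-00
# reset the per-device error counters and the DEGRADED markers
zpool clear raid5
# re-read the whole pool; if corruption remains, the errors come straight back
zpool scrub raid5
# check progress and the resulting state
zpool status -v raid5

A second scrub is sometimes needed before the permanent-error list empties out, and the live file under /var/db/system would still need to be deleted or restored separately.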
 

dengruxi

Dabbler
Joined
Mar 12, 2020
Messages
32
Thank you. I realize the seriousness of the problem! How do I clear the error state? I will abandon the ESXi build as soon as possible.

 