Good Morning,
Strange problem with the web UI. Here is what happened: tonight I got a mail:
Code:
The volume xx (ZFS) state is DEGRADED: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue functioning in a degraded state.
Looking at the web UI just shows "There are errors" under the red button.
I clicked on "Pool" and the UI broke.
When I now open the web UI there is only:
Code:
An error occurred. Sorry, the page you are looking for is currently unavailable. Please try again later. If you are the system administrator of this resource then you should check the error log for details. Faithfully yours, nginx.
Some /var/log/messages output:
Code:
Jan 26 06:27:01 xx swap_pager: I/O error - pagein failed; blkno 12583073,size 12288, error 6
Jan 26 06:27:01 xx vm_fault: pager read error, pid 2731 (python2.7)
Jan 26 06:27:01 xx swap_pager: I/O error - pagein failed; blkno 12582957,size 4096, error 6
Jan 26 06:27:01 xx vm_fault: pager read error, pid 2731 (python2.7)
Jan 26 06:27:01 xx kernel: Failed to write core file for process python2.7 (error 14)
Jan 26 06:27:01 xx kernel: Failed to write core file for process python2.7 (error 14)
Jan 26 06:27:01 xx kernel: pid 2731 (python2.7), uid 0: exited on signal 11
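If I read that log right, the web UI's python2.7 process died (signal 11) after a swap pagein failed, so I assume its swap was on the failed disk. A sketch of how I would check which swap devices are in use and map the gptid labels to device names (assuming the usual FreeBSD tools):

```shell
swapinfo -h      # list the active swap devices and their usage
glabel status    # map gptid/... labels to daX device names
```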
Services (afp, nfs, ssh) are working as normal.
zpool status
Code:
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jan  8 15:37:07 2017
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada0p2      ONLINE       0     0     0
            ada1p2      ONLINE       0     0     0

errors: No known data errors

  pool: xx
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0 in 11h22m with 0 errors on Sun Jan 22 01:58:14 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        xx                                              DEGRADED     0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/9fc4b38a-db17-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/a0d212bf-db17-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/a2bc5248-db17-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/a512edd4-db17-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/a759705c-db17-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/a89f7799-db17-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/a95b787b-db17-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/cbb4b350-dc17-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
          raidz2-1                                      DEGRADED     0     0     0
            gptid/424b2a29-dcd9-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/43405eaf-dcd9-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/444b369b-dcd9-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/0ea8a1e2-dfba-11e6-8b3d-001e67bb79ae  FAULTED      0     4     0  too many errors
            gptid/476ecf7f-dcd9-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/48360f40-dcd9-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/49c75e91-dcd9-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0
            gptid/4bb5b5d7-dcd9-11e6-8b3d-001e67bb79ae  ONLINE       0     0     0

errors: No known data errors
I already searched about this; a reboot should fix the UI. For debugging purposes I haven't rebooted yet and would like to replace the faulted drive first.
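My rough plan for the replacement looks like this (just a sketch; the actual daX device name is an assumption until I map the gptid, and I may do the replace step via the GUI instead):

```shell
glabel status | grep 0ea8a1e2    # find which daX the faulted gptid belongs to
zpool offline xx gptid/0ea8a1e2-dfba-11e6-8b3d-001e67bb79ae
# physically swap the disk, then:
zpool replace xx gptid/0ea8a1e2-dfba-11e6-8b3d-001e67bb79ae daX
zpool status xx                  # watch the resilver progress
```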
Anything I should add?
HW is a Supermicro board with an E3-1231 v3, 16 GB DDR3 ECC RAM, an LSI 16i HBA, a mirrored SSD boot device, and two vdevs of 8x 2 TB each in RAIDZ2.