Hello,
Since I started my FreeNAS project I have had these problems. I use this system as storage for my VMs.
System Specifications:
Mainboard Asrock E3C224-4L
CPU Intel Xeon E3-1231v3
RAM 4x Kingston ValueRAM 8GB DDR3 ECC
Network Chelsio T520-SO-CR
HBA LSI SAS 9305-16i + 4x CBL-SFF8643-06M 0.6m Cab
Case Gooxi RMC3116-670-HSE (12Gbit SAS Backplane)
PSU Seasonic SS-500 (500 Watt)
Drives 10x 2TB SATA (3Gb 5200rpm) (I know that's not the best. The performance is OK, but I will replace the drives in the future)
2x Samsung 930 PRO (ZIL and L2ARC)
2x Kingston SSD (system drives, attached to mainboard SATA Controller)
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:05 with 0 errors on Sun Oct 28 03:45:05 2018
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada0p2      ONLINE       0     0     0
            ada1p2      ONLINE       0     0     0

errors: No known data errors

  pool: zPool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0 in 0 days 00:11:46 with 0 errors on Fri Oct 12 23:41:16 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        zPool                                           DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/468bd5db-c7cb-11e8-bfe1-000743495cb0  ONLINE       0     0     0
            gptid/474a3ba5-c7cb-11e8-bfe1-000743495cb0  FAULTED      3     0     0  too many errors
            gptid/4936a5ad-c7cb-11e8-bfe1-000743495cb0  ONLINE       0     0     0
            gptid/4b18c4eb-c7cb-11e8-bfe1-000743495cb0  ONLINE       0     0     0
            gptid/4cb36de4-c7cb-11e8-bfe1-000743495cb0  ONLINE       0     0     0
          raidz2-1                                      DEGRADED     0     0     0
            gptid/296c77da-ce04-11e8-b12d-000743495cb0  ONLINE       0     0     0
            gptid/3741d807-ce04-11e8-b12d-000743495cb0  ONLINE       0     0     0
            gptid/5793acf8-ce04-11e8-b12d-000743495cb0  ONLINE       0     0     0
            gptid/624fc955-ce04-11e8-b12d-000743495cb0  ONLINE       0     0     0
            gptid/6d155253-ce04-11e8-b12d-000743495cb0  FAULTED      3     0     0  too many errors
        logs
          mirror-2                                      UNAVAIL      0     0     0
            da12p1                                      FAULTED      9     0     0  too many errors
            da11p1                                      FAULTED      6     0     0  too many errors
        cache
          da11p2                                        FAULTED      3     0     0  too many errors
          da12p2                                        FAULTED      3     0     0  too many errors

errors: No known data errors
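Since the "action" line above suggests it, this is roughly what I run after a reboot to get the pool back to ONLINE until it faults again (pool name "zPool" as in the output; the guard is only there so the sketch is a no-op on hosts without ZFS):

```shell
#!/bin/sh
# Sketch of the recovery steps from the zpool status "action" line above.
# Assumes the pool name "zPool" from that output; must be run as root.
if command -v zpool >/dev/null 2>&1; then
    zpool clear zPool        # reset the READ/WRITE/CKSUM error counters
    zpool scrub zPool        # re-verify all data after clearing
    zpool status -v zPool    # watch whether the same devices fault again
else
    echo "zpool not available; skipping"
fi
```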
The SMART info for all drives is OK: no reallocated sectors, and all self-tests completed without errors.
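For reference, the SMART checks were done along these lines (device names da0..da9 are examples; on FreeNAS the HBA disks show up as /dev/daN, and the guard skips the loop where smartmontools is not installed):

```shell
#!/bin/sh
# Example of checking reallocated sectors and self-test results per disk.
# Device names are assumptions; adjust to the output of `camcontrol devlist`.
if command -v smartctl >/dev/null 2>&1; then
    for n in 0 1 2 3 4 5 6 7 8 9; do
        smartctl -A "/dev/da$n" | grep -i 'Reallocated_Sector_Ct'
        smartctl -l selftest "/dev/da$n" | head -5
    done
else
    echo "smartctl not installed; skipping"
fi
```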
As long as the system runs under low load there is no problem.
But when I start larger copy jobs or migrate VMs from other storage, errors randomly occur on the disks.
After restarting the system all error counts are zeroed and the system runs normally again.
I hope you can help me identify the problem; as long as it is not solved, I can't use the system productively.