larencio88
Dabbler
Joined: May 5, 2016
Messages: 13
Interesting, I ran the command and here is the output. Thank you. ;)

The phenomenon is torn writes with write amplification. Simply put, if I write one 512-byte sector to an AF (Advanced Format, 4K-sector) disk, the drive has to rewrite the whole 4K physical sector, so 8 logical sectors are at risk of loss if that write fails. The same applies if a 4K write is unaligned, except it straddles two physical sectors, putting 16 logical sectors at risk. Another scenario is a drive that automatically claims completion of such writes to avoid exposing the actual latency, in the expectation that more writes will follow to the same track. There are three drive makes in this pool.
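To illustrate the arithmetic above, here is a minimal sketch (not from the thread; the function name and constants are my own) that counts how many 512-byte logical sectors sit inside the physical sectors a write touches, i.e. how many are exposed if that write is torn on a 4K-native disk:

```python
LOGICAL = 512    # logical sector size reported by the drive
PHYSICAL = 4096  # Advanced Format physical sector size

def sectors_at_risk(offset_bytes: int, length_bytes: int) -> int:
    """Logical sectors inside every physical sector the write touches."""
    first_phys = offset_bytes // PHYSICAL
    last_phys = (offset_bytes + length_bytes - 1) // PHYSICAL
    phys_touched = last_phys - first_phys + 1
    return phys_touched * (PHYSICAL // LOGICAL)

# One 512-byte write forces a read-modify-write of one 4K physical
# sector, so all 8 logical sectors in it are exposed.
print(sectors_at_risk(0, 512))     # 8

# A 4K write misaligned by one logical sector straddles two physical
# sectors, exposing 16 logical sectors.
print(sectors_at_risk(512, 4096))  # 16
```

An aligned 4K write, by contrast, touches exactly one physical sector and risks only its own 8 logical sectors, which is why ashift=12 alignment matters on these disks.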
The ashift should be viewable with: zdb -e tank | grep ashift
I'm not sure that is a real ZFS I/O error. In any case, even if there were an I/O error, the theory is that there should be no corruption to the pool. But if there were both an I/O error and corruption, then there is probably a problem in the code, even if the window of exposure is merely 'during a shutdown'.
Code:
[root@freenas ~]# zdb -e tank | grep ashift
            ashift: 12
            ashift: 12
            ashift: 12
            ashift: 12
            ashift: 12
            ashift: 12
            ashift: 12
            ashift: 12
loading space map for vdev 3 of 4, metaslab 183 of 349 ...
 36.2M completed ( 0MB/s) estimated time remaining: 11239hr 16min 27sec
The first two vdevs are on the onboard LSI 2308 controller; the third vdev is 2/3 on there and 1/3 on the Intel motherboard controller. So far it seems to be showing only the eight disks on the LSI 2308 controller.