That's not necessarily true. ZFS does attempt to correct errors when a read results in a corrupt block.
But that scenario relies on the HDD being at fault. I was really referring to the case where the disks are working fine and the RAM is the only thing malfunctioning. OP posted his SMART status, and there is no real reason at this time to think his HDDs are getting any errors.
If ZFS detects that a block it read doesn't match its checksum, but the disk didn't return a hard read error, then ZFS will only repair the block in memory so it can serve the user a "repaired" copy of that block. It doesn't actually rewrite the block on-disk in a case like this.
A read will only cause ZFS to overwrite a block on the disk if the read is accompanied by a URE (unrecoverable read error) from the drive itself.
Note that when I say overwrite, I mean ZFS writes a new block and updates the block pointer to point at it (ZFS is copy-on-write, so it never modifies a block in place).
A scrub can update blocks on-disk, but only if ZFS finds and verifies an un-corrupted "copy" of the block to replace the one it thinks is corrupt.
It works like this: the scrub reads a block into RAM, calculates its checksum, and compares it with the checksum ZFS stored back when the block was originally written. Say the RAM is bad, and it corrupted either the block or the stored checksum as the data was copied into RAM. Either way, when ZFS compares the two, they won't match, so it flags a cksum error (even though the block on the disk is actually fine) and logs it in the zpool status.
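That comparison step can be sketched as a toy simulation. To be clear, this is not ZFS code: the checksum function (SHA-256 here, standing in for fletcher4/sha256) and the single-bit-flip model of bad RAM are illustrative assumptions.

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # Stand-in for the checksum ZFS stores when a block is written
    return hashlib.sha256(data).digest()

def read_into_ram(data: bytes, ram_is_bad: bool) -> bytes:
    # Simulate bad RAM flipping a bit as the block lands in memory
    if ram_is_bad:
        buf = bytearray(data)
        buf[0] ^= 0x01
        return bytes(buf)
    return data

# The block on disk is fine; its checksum was stored at write time.
on_disk_block = b"hello, zfs"
stored_checksum = checksum(on_disk_block)

# The scrub reads the block through bad RAM and re-verifies it.
in_ram_block = read_into_ram(on_disk_block, ram_is_bad=True)
cksum_error = checksum(in_ram_block) != stored_checksum
print(cksum_error)  # True: logged as a cksum error, yet the on-disk copy is fine
```

The point of the sketch is that the mismatch is detected in RAM, against an in-RAM copy, so a cksum error can be logged even when nothing on the platters is wrong.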
Now it will attempt to repair the data. First it reads in the parity or mirror data for the block and reconstructs a redundant "copy" of the block from it. It also reads in the checksum stored for this redundant copy when the mirror or parity blocks were first created. Then it compares the checksum it just computed for the redundant block against the one stored at write time.
Two things can happen here. Either the redundant copy was read into good RAM this time and ZFS finds that it matches its original checksum, verifying that the copy is good; in that case, ZFS replaces the original "bad" data block on disk with this verified-good copy.
Or, the bad RAM also corrupts the redundant copy or its checksum as they are read in. When ZFS compares the two this time, they again won't match. In this case, ZFS still hasn't overwritten anything on-disk, because it hasn't found a verified good copy to use. ZFS then checks the next redundant copy (a higher n-way mirror, or RAIDZ2/Z3).
If all of these additional copies are also corrupted as they are written to RAM, ZFS ultimately aborts the repair operation for that block and overwrites nothing on-disk. It then reports that the pool contains corrupt files it cannot repair. But in reality, the original block is still on the disk and is in fact not corrupt, even though ZFS thinks it is.
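Putting the whole sequence together, the scrub's repair attempt behaves roughly like the loop below. This is a hedged sketch, not ZFS internals: the function names, the flat list of "copies", and the probabilistic bit-flip model of bad RAM are all my own illustrative assumptions.

```python
import hashlib
import random

def checksum(data: bytes) -> bytes:
    # Stand-in for the checksum ZFS stored at write time
    return hashlib.sha256(data).digest()

def read_into_ram(data: bytes, corrupt_probability: float) -> bytes:
    # Bad RAM may flip a bit as the data is copied into memory
    if random.random() < corrupt_probability:
        buf = bytearray(data)
        buf[0] ^= 0x01
        return bytes(buf)
    return data

def scrub_block(copies, stored_checksum, corrupt_probability):
    """Return ('ok' | 'repaired' | 'unrecoverable', verified block or None).

    `copies` is the primary block followed by its redundant
    reconstructions (mirror halves / RAIDZ parity rebuilds).
    """
    primary = read_into_ram(copies[0], corrupt_probability)
    if checksum(primary) == stored_checksum:
        return "ok", primary
    # cksum error logged; try each redundant copy until one verifies
    for redundant in copies[1:]:
        candidate = read_into_ram(redundant, corrupt_probability)
        if checksum(candidate) == stored_checksum:
            # Verified-good copy found: only now rewrite the block on disk
            return "repaired", candidate
    # No copy verified: abort the repair, touch nothing on disk,
    # and report the block as permanently corrupt
    return "unrecoverable", None

block = b"data that is actually fine on disk"
stored = checksum(block)

# Hopelessly bad RAM: every read is corrupted, so the repair aborts
random.seed(0)
status_bad, _ = scrub_block([block, block], stored, corrupt_probability=1.0)
print(status_bad)  # unrecoverable

# Good RAM: the primary copy verifies on the first read
status_good, _ = scrub_block([block, block], stored, corrupt_probability=0.0)
print(status_good)  # ok
```

The key property the sketch captures is that the on-disk data is only ever overwritten on the "repaired" path, after a copy has been verified against the stored checksum; when every copy fails verification (here, because RAM mangles each one in transit), nothing on disk changes even though the pool reports permanent corruption.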