Well, no, the problem is that we don't know.
ZFS assumes the host computing platform is trustworthy. ZFS caches a lot of data in RAM.
Now what CAN happen is that bits can rot while in memory, and then if ZFS takes that block, makes an update, and pushes it out to the pool, the result will even look like a valid block, because a new checksum will have been calculated over the corrupted data.
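A minimal sketch of that failure mode, using SHA-256 as a stand-in for ZFS's on-disk checksums (everything here is illustrative, not ZFS internals):

```python
import hashlib

def checksum(block: bytes) -> str:
    # Stand-in for ZFS's block checksum (fletcher4 / SHA-256 on disk).
    return hashlib.sha256(block).hexdigest()

# A block read from disk, verified against its stored checksum: all good.
block = bytearray(b"important file data")
stored = checksum(bytes(block))
assert checksum(bytes(block)) == stored

# Bit rot in RAM: one bit flips while the block sits in the cache.
block[0] ^= 0x01

# ZFS now updates the block and writes it out. The new checksum is
# computed over the already-corrupted in-memory copy, so the on-disk
# block "verifies" perfectly on every future read despite being wrong.
new_stored = checksum(bytes(block))
assert checksum(bytes(block)) == new_stored
assert bytes(block) != b"important file data"
```

The point: the checksum can only vouch for what was in RAM at write time. If RAM lied, the checksum faithfully certifies the lie.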
If that's a file data block, then, yes, the file data is corrupted, but that's pretty much the end of it.
The real nightmare is if it is pool metadata - which is strongly likely to remain cached in RAM. So I'm going to give you a trivial hypothetical scenario here, and I need you to understand that this is admittedly contrived. Don't argue the point, listen to the logic behind it.

So you have a pool metadata block that holds the free block list. It shows that block 123 is allocated. As it happens, that block holds inode 4's data (that's the root directory inode). A RAM error flips that entry from "allocated" to "free". A subsequent write flushes that out to disk. Nothing seems to be wrong. You continue to safely fill the disk with data, because that block is one lonely block and ZFS likes to allocate contiguous ranges of blocks. You fill the pool, 60, 70, 80% full... then one day the pool has sufficiently few free blocks left that ZFS "allocates" block 123 for file data. Data gets written.
Suddenly every frickin' file in your ZFS pool is "gone", because inode 4 got stomped, and inode 4 was the linchpin for the whole damn filesystem.
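The whole scenario fits in a toy simulation. The allocation policy and all the names here are contrived for illustration - this is the logic of the failure, not how ZFS actually manages space:

```python
# Toy pool: 'free' stands in for the free-block list, 'disk' for block contents.
disk = {}
free = set(range(1000))

def allocate(data: str) -> int:
    blk = min(free)  # contrived policy: hand out the lowest-numbered free block
    free.remove(blk)
    disk[blk] = data
    return blk

# Block 123 was allocated long ago and holds inode 4, the root directory.
free.remove(123)
disk[123] = "inode 4: root directory"

# A RAM bit flip marks block 123 "free" in the cached metadata; a later
# write flushes that out to disk. Nothing appears wrong yet.
free.add(123)

# The pool keeps filling... until one day block 123 is handed out for file data.
for i in range(200):
    allocate(f"file data {i}")

assert disk[123] != "inode 4: root directory"  # inode 4 has been stomped
```

One flipped bit, months of normal operation, and then the root directory is gone.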
Conventional filesystems have utilities to help detect badness. FFS calls it "fsck" and NTFS calls it "chkdsk". These can't always detect or fix badness, but they're a necessary evil because hard disks do develop bad blocks and this is their recovery strategy.
ZFS's recovery strategy is to use checksums to detect when data has rotted, and then to pull a good copy from redundancy. ZFS should never NEED a fsck utility - pool blocks are not supposed to go bad. And ZFS is supposed to be able to reliably detect and correct hard drive blocks that have gone bad.
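That detect-and-repair loop can be sketched for a two-way mirror. Again, this is a simplified model, not the actual ZFS read path:

```python
import hashlib

def cksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A two-way mirror: the same block stored on both sides, plus its checksum.
good = b"pool metadata"
stored_cksum = cksum(good)
mirror = {"disk0": bytearray(good), "disk1": bytearray(good)}

# disk0 develops a bad block.
mirror["disk0"][0] ^= 0xFF

def read_block() -> bytes:
    # Try each side. A checksum mismatch means rot: fall back to the other
    # copy, and repair any side that failed from the known-good data.
    for side in ("disk0", "disk1"):
        data = bytes(mirror[side])
        if cksum(data) == stored_cksum:
            for other in mirror:
                if cksum(bytes(mirror[other])) != stored_cksum:
                    mirror[other] = bytearray(data)  # self-heal
            return data
    raise IOError("all copies failed checksum")

assert read_block() == good
assert bytes(mirror["disk0"]) == good  # the rotted copy was repaired
```

This is exactly why the in-RAM checksum has to be trustworthy: the repair path believes whichever copy matches the checksum.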
By running ZFS on a non-ECC system, you eliminate the "reliably detect" capability and introduce new opportunities for undetected corruption.
So ... the problem. There's a good chance no permanent damage was done to your pool. Unfortunately, there's also a reasonably large chance that some damage WAS done to your pool, and it is also possible that undetectable damage was done to the pool. Damage to the pool might come back someday to haunt you, as I outlined above. Or it might not.
The safe thing to do, at this point, if you care about the data, is to use some non-ZFS-replication method to copy all the data off the pool, destroy the pool, recreate the pool, and then reload your data. This gives you a known good state for the metadata. Any damage done to the file data has already been done, and it is up to you to find that on your own.
There have been various arguments about how likely the various factors in all of this actually are. It is like debating how many angels can dance on the head of a pin. I'm just telling you what I know the possibilities to be.