That graph is horrible. Any chance they are working on a fix for this issue? That could be a potential show stopper for clients.
No, there isn't a "fix" for this, other than to go SSD. Once you start making hard drives seek, you are very much married to the mechanical speed of the drives, and the more you seek, the worse it gets.
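Just to put rough numbers on "mechanical speed," here is a back-of-the-envelope sketch. The figures are generic round numbers I'm assuming for a typical 7200 RPM drive, not measurements from the graph above:

    # Why random I/O is "married to" mechanical speed (illustrative numbers only).
    avg_seek_ms = 8.5                          # assumed average seek time
    rotational_ms = 60_000 / 7200 / 2          # half a revolution on average, ~4.2 ms
    service_ms = avg_seek_ms + rotational_ms   # time burned per random I/O

    iops = 1000 / service_ms                   # roughly 79 random I/Os per second
    record_kib = 128                           # e.g. one 128 KiB record per I/O
    print(f"~{iops:.0f} IOPS, ~{iops * record_kib / 1024:.0f} MB/s random")
    # versus well over 100 MB/s sequential on the same spindle; the gap is the seeks.

Every additional seek eats another dozen milliseconds the platter could have spent streaming data, which is why there is no software fix once the workload becomes seek-bound.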
ZFS is actually very good about this, because it works hard to allocate blocks contiguously: even if you're writing to random blocks in random files, the transaction group will tend to get written out as one contiguous run (i.e., no seeks) as long as contiguous free space exists on the disk. The problem is that as contiguous free space gets used up and carved into smaller pieces, performance inevitably falls off. Once a transaction group can no longer be written contiguously, the writes turn back into seeks and you see the falloff.
But if it makes you feel any better, consider that when you write random blocks on a non-ZFS filesystem like UFS or NTFS, you *START OUT* at that low speed, because every block is rewritten "in place" and you're seeking all over the disk from the very first write.
And it never gets better. So this is really an example of how ZFS can make certain workloads shine, performance-wise, compared to conventional filesystems.
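Here's a toy model of why that comparison plays out the way it does. It's purely my own illustration: the first-fit allocation and the made-up extent sizes are gross simplifications, not how the real ZFS metaslab allocator works, but it shows how the seek count per transaction group grows as free space fragments:

    import random

    # Count how many separate free extents (roughly, how many seeks) a
    # transaction group of txg_blocks blocks needs under naive first-fit.
    def seeks_for_txg(txg_blocks, free_extents):
        seeks, remaining = 0, txg_blocks
        for length in free_extents:
            if remaining <= 0:
                break
            seeks += 1
            remaining -= length
        return seeks

    random.seed(1)
    txg = 2048                                  # blocks in one transaction group

    fresh = [10**6, 10**6]                      # fresh pool: a couple of huge free runs
    aged = [random.randint(8, 64)               # same free space, chopped into small holes
            for _ in range(60_000)]

    print("fresh pool :", seeks_for_txg(txg, fresh), "seek(s) per txg")
    print("aged pool  :", seeks_for_txg(txg, aged), "seek(s) per txg")
    print("in-place fs:", txg, "seeks (each random block rewritten where it sits)")

The fresh pool writes the whole group in one pass, the aged pool needs dozens of hops to place the same data, and the write-in-place filesystem pays one seek per block from day one.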
The standard ZFS mitigation techniques for fragmentation involve throwing hardware at the problem. To reduce the problem of slow reads, we throw lots of ARC and L2ARC at it; of course, that only helps data that's read frequently enough to stay resident in the ARC or L2ARC. To reduce the problem of slow writes, we throw lots of extra disk space at it, so that ZFS can keep larger contiguous free regions available. That sometimes helps read performance as well, but it depends on the access patterns of the data being written.
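A quick sketch of why the cache only helps the hot data. The latency figures here are assumptions I'm picking for illustration, not measurements:

    # Average read latency as a function of ARC hit rate (illustrative numbers).
    disk_seek_ms = 12.0   # assumed random read that misses the cache and hits the spindle
    arc_hit_ms = 0.05     # assumed read served from RAM

    for hit_rate in (0.50, 0.90, 0.99):
        avg_ms = hit_rate * arc_hit_ms + (1 - hit_rate) * disk_seek_ms
        print(f"{hit_rate:.0%} hit rate: avg read ~{avg_ms:.2f} ms")
    # Even at 90% hits, the average is still dominated by the 10% that seek;
    # data that isn't read often enough to stay cached sees the full seek cost.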
But fundamentally, at some point, disk seeks become the limiting performance factor.