Well, the flip side to this whole thing is that VMware has been obstinate about the issue and refuses to allow sync write behavior to be configured or controlled from the virtualization host. This leaves administrators confused and desperate to "fix the slow storage." I am not unsympathetic to the issue.
Further, a whole generation has been taught that the way to "fix" this is to
disable NFS sync, without any explanation of why, or more specifically, why that is bad. And let's face it, the issue is a bit abstract, and the likely failure modes involve an interaction (vmdk disk blocks "written" but never actually committed) that might be unimportant to a VM or might be a total train wreck. VMware at least understands that they do not know the importance of the data, and therefore they treat all of it as important.
Which kind of sucks unless you design for it. So let's consider the non-FreeNAS angle.
On a local disk-based datastore, an M1015 in IR mode with two Momentus XT's in RAID1, write performance ends up being about 94MB/sec sequential, read performance around 100MB/sec.
On a local SSD-based datastore, the same M1015 in IR mode with various SATA3 SSD's in RAID1, write performance ends up being about 262MB/sec, read around 232MB/sec.
However, to achieve those speeds, I have to be writing 1MB chunks. Writing smaller 8KB chunks results in a fraction of the speed, because of the latency in sending the request from the VM, through the vmfs layer, through the controller, and then actually getting the acknowledgement back from the datastore that it has been written: about 17.7MB/sec for the disks. Do the math and that's only around 2,200 8KB writes per second, or nearly half a millisecond of round trip per write, and that's with local storage.
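If you want to see this effect for yourself, a quick-and-dirty way is dd from inside a Linux guest, where GNU dd's oflag=dsync forces a flush per block and exposes the per-write round trip. The file path and sizes here are just examples, not anything magic:

    # Large blocks: 1GB in 1,024 round trips, throughput-bound
    dd if=/dev/zero of=/mnt/datastore/testfile bs=1M count=1024 oflag=dsync

    # Small blocks: same 1GB, 128x the round trips, latency-bound
    dd if=/dev/zero of=/mnt/datastore/testfile bs=8k count=131072 oflag=dsync

Same data, same disks; the only thing that changed is how many times you have to wait for an acknowledgement.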
Now, I can add a BBU write cache based RAID controller to the host, and while I don't have one handy to play with, I can tell you that the numbers for writes in general tend to be higher, and there's a much less noticeable decrease in speed based on chunk size, because the RAID controller is short-circuiting the sync writes: it acknowledges each write as soon as the data hits its battery-backed cache, rather than waiting on the disks. And this is still the heart of the matter. It is a problem even with locally attached storage; it is just not as significant an issue due to the low latency of local storage.
But local host storage is at least somewhat inconvenient. The ability to shuffle VM's around between hosts makes shared network storage attractive. And technologies such as FC can be expensive and challenging for a small IT department to deploy, and often kind of energy-hungry. NFS is common and readily available in a huge selection of footprints.
So here it becomes important to properly resource your NAS environment. Really, if you're going to spend money on network storage for your VM's, anywhere from many hundreds to several thousands of dollars most likely, refusing to install a ZIL is kind of like sticking 4GB of RAM in an i386 desktop box from 2005 with a Realtek 10/100 Ethernet card and then wondering
WHY IS THIS SO SUCKY??!?
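For what it's worth, adding one is not a big production. A dedicated log device (what everyone loosely calls "a ZIL," formally a SLOG) attaches to an existing pool with a single command; the pool and device names below are placeholders, and you want a power-loss-protected SSD, mirrored if you care about in-flight data:

    # Attach a dedicated log device to an existing pool (names are examples)
    zpool add tank log da6

    # Better: mirror the log so a dead SSD can't cost you in-flight writes
    zpool add tank log mirror da6 da7

    # Verify the "logs" section shows up
    zpool status tank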
The latency inherent in going over the network is always going to make NFS (or iSCSI) less attractive than a well-designed local datastore. So even if you run your NFS server on an md memory disk over 10GbE, you may not see earthshaking performance.
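If you're curious enough to run that experiment, FreeBSD makes it easy to stand up a throwaway RAM-backed share; the size and paths here are just examples:

    # Create and mount a swap-backed memory disk
    mdconfig -a -t swap -s 8g -u 1
    newfs -U /dev/md1
    mkdir -p /mnt/ramtest && mount /dev/md1 /mnt/ramtest

    # Export it over NFS and point a datastore at it
    echo '/mnt/ramtest -maproot=root' >> /etc/exports
    service mountd reload

Even with an infinitely fast backend like that, every small sync write still pays the wire's round-trip tax, which is the point.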
In the end? You either understand that there are overall limits to the technology or you don't. If you choose to use sync=disabled rather than getting a ZIL, you need to realize the risks both to your VM's and to your ZFS pool. If you want to be comfortable that your pool is safe but don't mind some risk to your VM's, use iSCSI with sync=standard. The ultimate solutions, at least under FreeNAS, involve actually implementing the technology to be able to quickly commit data. In the ZFS paradigm, that's a ZIL.
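And for reference, since the sync behavior is set per dataset or zvol, this is the knob everyone keeps reaching for. The dataset name is an example, and I'm showing it so you know exactly what you're choosing, not recommending sync=disabled:

    zfs get sync tank/vmstore            # check the current setting
    zfs set sync=standard tank/vmstore   # default: honor the client's sync requests
    zfs set sync=always tank/vmstore     # paranoid: treat every write as sync
    zfs set sync=disabled tank/vmstore   # fast and dangerous: ack sync writes
                                         # before they're on stable storage

That last line is the one-liner a whole generation learned without the warning label. With a proper ZIL, you get to leave it at sync=standard and have the speed too.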