Just a little bit. This actually gets back to the problem we were tackling in the original thread. See, a cluster-aware filesystem has to avoid doing certain things; in particular, it cannot assume that things like the metadata in its local cache are up to date, because another node may have changed them on disk.
VMware's VMFS is kind of a specialized bastard hybrid, because it mainly needs to do one thing well: provide exclusive access to a file resource to a single hypervisor. So when you spin up VM 3 on Host B, VMFS locks that VM's files so that other hypervisors cannot also spin up that VM or touch its data. Host B then knows, without a doubt, that every block of the vmdk for VM 3 is its to play with, and it doesn't need any cluster-awareness for those blocks. Meanwhile VM 4 over on Host A is accessing nearby blocks on the same VMFS volume, because those were allocated to VM 4, and VM 9 is happily running on Host C.
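The ownership model above can be sketched in a few lines of Python. This is a toy model only, nothing like VMFS's real on-disk lock records; the `SharedLockTable` class and its method names are made up for illustration. The point is just that each file has at most one owner, so a second host is refused:

```python
# Toy model (NOT VMFS's actual on-disk format): a shared table in which
# each file has at most one owning host. Once a host holds a file's
# lock, it can treat that file's blocks as private.

class SharedLockTable:
    """Stands in for per-file lock records on the shared LUN."""

    def __init__(self):
        self.owners = {}  # file path -> owning host name

    def acquire(self, path, host):
        # Succeeds only if nobody else currently owns the file.
        current = self.owners.get(path)
        if current is None or current == host:
            self.owners[path] = host
            return True
        return False

    def release(self, path, host):
        if self.owners.get(path) == host:
            del self.owners[path]

locks = SharedLockTable()
assert locks.acquire("VM3/VM3.vmdk", "HostB")      # Host B powers on VM 3
assert not locks.acquire("VM3/VM3.vmdk", "HostA")  # Host A is refused
assert locks.acquire("VM4/VM4.vmdk", "HostA")      # a different file is fine
```

Once `acquire` has succeeded, no further coordination is needed for the file's data blocks, which is exactly why per-host caching of vmdk contents is safe.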
There isn't any block-level cluster access management going on for the blocks inside those vmdks, which means your ESXi host can do things like local SSD caching ("Flash Read Cache") of the VM's data, because no other ESXi host could have updated those blocks behind its back. That would normally be a problem on a generic cluster-aware filesystem, because you can't trust a local cache of data if you don't know what another node has touched on disk.
What VMFS mainly has to worry about is managing its own metadata (for the VMFS filesystem itself). That DOES need to be fully cluster-aware, so that multiple hosts manipulating it don't stomp on each other. Say you create two new VMs, one on each host, at exactly the same time: both hosts start reading the VMFS root directory at the same moment, looking for a place to add "MyNewVM-1" and "MyNewVM-2". Without some form of cooperative locking, both of them would find the same first free slot in the directory, create an entry, and write it back to storage .... in the same spot. And both would try to allocate the subdirectory and free space for their VMs, and probably do THAT in the same space as well. That'd be a catastrophe.

So instead it's done cooperatively: "MyNewVM-1" gets put in root directory slot 8 and "MyNewVM-2" gets put in root directory slot 9, the free blocks are allocated cooperatively, "/MyNewVM-1" lands at block 10,000 and "/MyNewVM-2" at block 10,020, "MyNewVM-1.vmdk" gets blocks 10,030 through 10,429, and "MyNewVM-2.vmdk" gets 10,430 through 10,829.
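The directory race above can be replayed in a few lines of Python. Again, this is a toy model with nothing of VMFS's real layout in it; the directory is just a list of slots, and a plain in-process lock stands in for the cluster-wide metadata lock:

```python
import threading

# Part 1: deterministic replay of the race, with no locking.
# Both "hosts" read the directory before either writes its entry back.
directory = [None] * 16          # root-directory slots on shared storage
slot_a = directory.index(None)   # Host A reads: slot 0 looks free
slot_b = directory.index(None)   # Host B reads: slot 0 ALSO looks free
directory[slot_a] = "MyNewVM-1"  # Host A writes its entry
directory[slot_b] = "MyNewVM-2"  # Host B clobbers it: same slot!
assert directory[0] == "MyNewVM-2" and "MyNewVM-1" not in directory

# Part 2: the same two creates, serialized by a cooperative lock.
directory = [None] * 16
dir_lock = threading.Lock()      # stands in for the on-disk metadata lock

def create_entry(name):
    with dir_lock:               # the read-modify-write is now atomic
        slot = directory.index(None)
        directory[slot] = name
        return slot

assert create_entry("MyNewVM-1") == 0
assert create_entry("MyNewVM-2") == 1  # second create sees the first entry
```

Part 1 is the catastrophe scenario: both reads happen before either write, so the second entry overwrites the first. Part 2 makes the whole read-find-write sequence atomic, so the second create necessarily sees the first one's entry and takes the next slot.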
The problem is that VMFS has to signal all the locking for those metadata updates through the shared SAN storage itself, which is fairly slow. However, once that's done and the VMs are spun up, there are no further locking updates to worry about: each VM just writes whatever it likes to its vmdk files, as though they were actual disks.
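For what it's worth, newer VMFS versions avoid the slowest form of this (reserving the whole LUN via SCSI reservations) by using VAAI ATS, an atomic compare-and-write the array itself performs. Here's a toy sketch of the idea; the function name and the lock-record format are invented for illustration, and on real hardware the atomicity comes from the array, not from Python:

```python
# Toy sketch of the idea behind ATS (atomic test-and-set, i.e. SCSI
# compare-and-write): the array applies the write only if the lock
# record still holds the value the host expects. Two hosts racing for
# the same lock can't both win, because one host's write changes the
# record out from under the other's expected value.

def compare_and_write(sector, expected, new):
    """Pretend array-side primitive; atomic on real hardware."""
    if sector["data"] == expected:
        sector["data"] = new
        return True
    return False

lock_record = {"data": "free"}
assert compare_and_write(lock_record, "free", "owner=HostB")      # B wins
assert not compare_and_write(lock_record, "free", "owner=HostA")  # A must retry
```

The loser simply re-reads the record, sees who holds the lock, and retries later, with no whole-LUN reservation blocking everyone else's I/O in the meantime.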
If you're still confused, please squeak.