Or would you say ESXi and FreeNAS are not a good combination at all, and that a different kind of storage should be used for that?
Depending on the cost and time you are willing to invest, this may be true. Many people are unprepared for the amount of time and/or money involved in building a server that can comfortably host dozens of ESXi VMs. I've done contract work for many people, and some of them ran 30-50 VMs (mostly service functions like AD, email, etc.) with no performance problems on a server that cost around $5k. The number of users wasn't a good metric in that case because the services weren't really sized per user; the day-to-day workload within the company was the biggest factor. The person with 30-50 VMs couldn't find a limit except that he could saturate all 4x1Gb links simultaneously, so he was very happy to have paid me for a few hours of my time.
The thing to keep in mind is that you need to build the system for your desired workload. Set some boundaries for what you are expecting and how far you are willing to go. For one user, 8GB sticks of RAM might be a good start; for another, 32GB sticks are the starting point. Look at the many factors involved and form an idea of the absolute minimum you are likely to need, what you expect to need, and what you might need under heavy load. Then, and only then, go hardware shopping. Generally, once you want to run more than 1 or 2 VMs, you face major performance penalties if you don't design the system around running VMs. And the second you decide to design the system around running VMs, the cost increases rapidly before tapering off.
There are 2 kinds of people that regularly show up here with needs/wants that involve VMs.
1. Like I said in the previous paragraph, the cost increases rapidly when you want to do more than 1 or 2 VMs. People get downright pissed when they build a $1500 server and then can't run 2 VMs at good speed. Many spend weeks or months with an underpowered, unacceptable hardware setup and try to tweak it into working. Almost nobody ends up happy, and it's extremely frustrating for someone who doesn't know what they are doing and just wants the end product to work. More than 1-2 VMs is a financial line that, once crossed, means higher costs that most don't want to pay, so they try to tweak around it instead. 99% give up in agony and frustration because there's no silver bullet that covers such a wide range of situations. These people often accept their poor performance, bail on ZFS as a solution, or take options that are very dangerous for their data, like setting sync=disabled.
2. Others already know they want to run dozen(s) of VMs and are building a server with that purpose in mind. At this point, most people find it faster and easier to get help from someone like me who does contract work, or they walk away from ZFS after reading the dozens of threads that give no answers to these problems.
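To make that last dangerous option concrete, here is what it looks like on the command line. The pool/dataset name is hypothetical, and this is exactly the kind of setting I warn people away from:

```shell
# Hypothetical dataset backing an ESXi datastore over NFS.
# Disabling sync makes ZFS acknowledge writes before they are on stable
# storage, so a crash or power loss can lose or corrupt in-flight VM data.
zfs set sync=disabled tank/vmstore

# Verify the setting (sync=standard is the safe default):
zfs get sync tank/vmstore
```

It looks fast in benchmarks because the ZIL is bypassed entirely, which is precisely why it's so tempting and so risky for VM storage.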
Normally, when someone wants to do VMs, I tell them not to consider less than 64GB of RAM, to expect to need a ZIL and L2ARC appropriate for the server's specs (note that you can have too much of both, so right-sizing matters; otherwise you spend money on resources that are unusable), and to expect mirrored pairs for whatever total disk space they want. There's just no building it for "cheap". I've done builds that cost as little as $3000 and as much as $6-8k for the hardware. Those customers almost always have some hardware to reuse, and that helps the cost. A simple 4U 24-drive server case is an instant $1k, so reusing even a few parts can be a very big cost savings.
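For the "mirrored pairs plus ZIL and L2ARC" recommendation, a pool layout sketch might look like the following. All device names are placeholders, and the SLOG and L2ARC devices still need to be sized to your actual server:

```shell
# Striped mirrors give the random I/O performance VMs need. A dedicated
# log (SLOG) device holds the ZIL for sync writes, and a cache device
# provides L2ARC. da0-da5, nvd0, and nvd1 are placeholder device names.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  log nvd0 \
  cache nvd1

# Confirm the vdev layout:
zpool status tank
```

Note how the usable space is only half the raw disk space; that's the cost of the mirrors, and it's part of why VM-grade builds get expensive quickly.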
Now you see why I do consultation services... too many people would rather pay someone some cash for a parts list that fits their needs than play these games of buying too much or too little hardware. In almost every case where an owner tried to do it himself before coming to me, he had spent more than $1000 on hardware that was unusable for his server and was literally shelved for future use. They don't like hearing that, but the solution isn't like it is in the Windows world... you can't just keep throwing more hardware at it. You have to throw the *right* hardware at it.