You're looking at purchase price, not TCO. 146GB drives are FAR from free.
First, a lot of those used drives on eBay are worn out... look for a NetApp label on many of them, and expect 30K+ hours on the clock. I tried doing exactly this (drives sourced from a friend, not off eBay) and spent a lot of time resilvering. Keep in mind that performance will suck mightily while a drive is dying, and still suck quite a bit during the resilver. With nearly 100 drives running, your array will spend more time degraded than it will happy.
Second, power consumption. Referring to the product manual of a fairly typical Seagate Cheetah 15K drive:
http://www.seagate.com/staticfiles/support/disc/manuals/enterprise/cheetah/15K.5/SAS/100384784c.pdf (page 33) we find an idle power of 10.33 watts and a full-load power of 12 watts. Let's figure 11 watts as an average, or 37.53 BTU/hr. With 100 drives, you're talking 3,753 BTU/hr generated by the drives themselves, plus, assuming a 90% efficient power supply, roughly another 420 BTU/hr lost in the PSU. In round numbers, about 4,200 BTU/hr, or a bit over 1/3 ton of AC. You're also drawing on the order of 1.2 kW at the wall, or about 10.5 MWh/year... at a fairly average $0.09/kWh, that's about $950 in electricity just to power the thing... easily double that (some say triple) once you account for the AC load. And that doesn't account for the servers themselves (CPUs, chassis, fans, etc.).
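If you want to play with the numbers yourself, here's the back-of-the-envelope math as a short Python sketch. The inputs (11 W average per drive, 90% PSU efficiency, $0.09/kWh) are the same assumptions as above; the 3.412 BTU/hr-per-watt figure is just the standard unit conversion.

```python
# Power and cooling math for a 100-drive 15K array.
# Assumptions: 11 W average per drive (10.33 W idle / 12 W load),
# 90% efficient PSU, $0.09/kWh, 3.412 BTU/hr per watt dissipated.

WATTS_PER_DRIVE = 11.0
N_DRIVES = 100
PSU_EFFICIENCY = 0.90
BTU_PER_WATT = 3.412
PRICE_PER_KWH = 0.09       # USD, roughly the US average
HOURS_PER_YEAR = 8760

drive_watts = WATTS_PER_DRIVE * N_DRIVES        # 1,100 W at the drives
wall_watts = drive_watts / PSU_EFFICIENCY       # ~1,222 W from the wall
heat_btu_hr = wall_watts * BTU_PER_WATT         # all of it ends up as heat

annual_kwh = wall_watts / 1000 * HOURS_PER_YEAR
annual_cost = annual_kwh * PRICE_PER_KWH        # electricity only, no AC

print(f"Heat load: {heat_btu_hr:,.0f} BTU/hr")
print(f"Energy:    {annual_kwh:,.0f} kWh/year")
print(f"Cost:      ${annual_cost:,.0f}/year before cooling")
```

Swap in your own local electricity rate and it's easy to see why the "free" drives aren't free.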
You're also going to consume, what, 16U for all this?
So, are power and aircon free? Is rack space free? What is the cost when the array rebuilds over and over again and encounters an error? What will the impact be of poor performance while a drive is dying or the array is being rebuilt?
You're talking about RAID-Z3. Assuming nine 11-drive vdevs (99 drives), you wind up with a whopping 7.5TB or so of usable space, with the total random IOPS of about 9 drives (9*150=1,350), since each RAID-Z vdev performs roughly like a single drive for random I/O. RAID-Z isn't recommended for VM filestores - I'm not sure how you're running your VPS stuff, but this may be a problem for you.
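Here's that layout math spelled out, assuming the usual rules of thumb: each RAID-Z vdev delivers roughly one drive's worth of random IOPS, and data capacity is (drives minus parity) per vdev. Note this gives raw data capacity; the ~7.5TB usable figure above is lower still once ZFS overhead and free-space margin are taken out.

```python
# Capacity/IOPS sketch for 9 x 11-drive RAID-Z3 vdevs of 146GB drives.
# Assumptions: ~150 random IOPS per 15K drive, one vdev ~ one drive
# for random I/O, raw data capacity = (drives - parity) per vdev.

DRIVE_GB = 146
VDEVS = 9
DRIVES_PER_VDEV = 11
PARITY = 3                 # RAID-Z3
IOPS_PER_DRIVE = 150       # typical 15K SAS rule of thumb

raw_data_gb = VDEVS * (DRIVES_PER_VDEV - PARITY) * DRIVE_GB
pool_iops = VDEVS * IOPS_PER_DRIVE   # random IOPS scale with vdev count

print(f"Raw data capacity: {raw_data_gb / 1000:.1f} TB (before ZFS overhead)")
print(f"Random IOPS:       ~{pool_iops}")
```

Ninety-nine spindles for the random I/O of nine. That's the RAID-Z trade-off in a nutshell.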
Look at a system similar to the one in my signature. You can install 36 drives into hot-swap bays, plus 4 internally. So, you install two small SSDs as a mirrored boot pool, plus two large enterprise-grade SSDs as SLOG (if you do a VM store) and L2ARC. Figure $1K for the box, $500 for the SSDs. Buy 36 HGST 3TB NAS drives at $125/ea. Set it up as an 18-vdev pool of 2-drive mirrors. You spend $6K, you get the whole thing in one 4U box, roughly the same IOPS (twice the vdevs, each on a slower drive), and 38TB of usable storage (more than 5X what you would get with your 15K drive arrangement). You could also buy fewer drives (start with 12 drives, let's say, at a cost of $3K) and add mirror pairs as your needs grow.
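The cost and capacity numbers for that build, worked through. The prices are the estimates above; the 70% usable-of-raw factor is my rough haircut for ZFS overhead plus keeping the pool under ~80% full, which lands near the 38TB figure.

```python
# Cost/capacity sketch for the 36-bay mirror build.
# Assumptions: $1K chassis, $500 in SSDs, $125 per 3TB drive,
# 2-way mirrors, ~70% of raw mirror capacity actually usable
# (ZFS overhead plus staying below ~80% pool fill).

CHASSIS = 1000
SSDS = 500
DRIVE_PRICE = 125
DRIVE_TB = 3
N_DRIVES = 36
MIRROR_WIDTH = 2

total_cost = CHASSIS + SSDS + N_DRIVES * DRIVE_PRICE
vdevs = N_DRIVES // MIRROR_WIDTH
raw_tb = vdevs * DRIVE_TB            # one drive's worth per mirror pair
usable_tb = raw_tb * 0.70            # rough overhead/fill-level haircut

print(f"Cost:   ${total_cost:,}")
print(f"vdevs:  {vdevs}")
print(f"Usable: ~{usable_tb:.0f} TB")
```

Drop N_DRIVES to 12 to model the starter configuration and grow it from there.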
In short, in case I've not made myself clear, I think running 100 146GB drives is absolutely insane :)
HA configurations are one of the key differentiators of TrueNAS, the paid version of FreeNAS from iXsystems. It's unlikely they will add that to FreeNAS 10.