The big problem with CoW filesystems like ZFS is fragmentation. When you lay down a vmdk on a blank ZFS pool, the blocks can be laid down sequentially in the pool, meaning no seeks and fast writes. However, when you go to update a block, ZFS allocates a different block and writes your file data there instead. So now when you're reading what you might expect to be a contiguous set of blocks, you get read/seek/read/seek-back/read. ZFS needs lots of free space in the pool to have any chance of making semi-rational allocations.
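You can see the mechanism in a toy model. This is not real ZFS internals (no metaslabs, no transaction groups); it's just a sketch of why copy-on-write turns an in-place update into a new allocation at the tail of the pool:

```python
# Toy copy-on-write model: block offsets and sizes are illustrative only.

def write_file(pool, n_blocks):
    """Append n_blocks sequentially; return their pool offsets."""
    start = len(pool)
    pool.extend(f"block{i}" for i in range(n_blocks))
    return list(range(start, start + n_blocks))

def cow_update(pool, block_map, logical_idx, new_data):
    """CoW: the rewritten block lands at a NEW offset, never in place."""
    pool.append(new_data)
    block_map[logical_idx] = len(pool) - 1  # old offset becomes free space

pool = []
block_map = write_file(pool, 5)          # offsets [0, 1, 2, 3, 4] -- contiguous
cow_update(pool, block_map, 2, "block2-v2")
print(block_map)                         # [0, 1, 5, 3, 4]
```

One logical overwrite and a sequential read of the file is now read/seek/read/seek-back/read, exactly the pattern above.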
So let me cut out all the crap in the middle. You want to store 1.5TB. You need an absolute minimum of 3TB exclusively for the VM storage. I've written about pathological ZFS fragmentation cases where you might actually need more than 15TB to semi-sanely store 1.5TB, but that's usually not the case in the real world unless you have a crazy VM like a very busy mail or database server.
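The arithmetic behind those numbers is just the rule of thumb stated above, with the 2x minimum and the ~10x pathological multiplier taken from this answer, not from any official ZFS sizing formula:

```python
def min_pool_size_tb(data_tb, multiplier=2.0):
    """Rule of thumb: pool capacity = data * multiplier.
    multiplier=2 keeps the pool no more than half full (the minimum);
    ~10x covers pathological fragmentation (busy mail/DB VMs)."""
    return data_tb * multiplier

print(min_pool_size_tb(1.5))        # 3.0 TB absolute minimum
print(min_pool_size_tb(1.5, 10))    # 15.0 TB for the pathological case
```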
So if you're set on the idea of sharing the machine's purpose, the best thing I can suggest would be two mirrors of 2TB drives, striped, for your virtual disk storage (4TB usable space); then take the remaining disks and make a RAIDZ1 out of them, or maybe a separate mirror pool, and maintain a warm spare drive. I'm sure it isn't as much space as you were hoping for.
The other alternative is to accept extremely reduced performance and just put them all in a RAIDZ2. But this is also bad: beyond the RAIDZ2 penalty itself, user data typically expands to fill all available space (the UNIX sysadmin's ancient lament), so you'll end up compromising the free-space requirement, and your pool will slowly get slower and slower as fragmentation increases AND free space dwindles.
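For comparing the two layouts by raw usable space (the six-drive RAIDZ2 count is an assumption for illustration, and both figures ignore metadata and padding overhead):

```python
def striped_mirrors_tb(pairs, drive_tb):
    """Two-way mirrors striped together: each mirror pair
    contributes one drive's worth of capacity."""
    return pairs * drive_tb

def raidz2_tb(n_drives, drive_tb):
    """Approximate RAIDZ2 usable space: two drives go to parity."""
    return (n_drives - 2) * drive_tb

print(striped_mirrors_tb(2, 2.0))  # 4.0 TB usable, decent random I/O
print(raidz2_tb(6, 2.0))           # 8.0 TB usable, poor random I/O for VMs
```

The RAIDZ2 number looks better on paper, which is exactly the trap: the extra space invites you to fill the pool and blow the free-space requirement.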
Don't shoot the messenger, he knows it sucks.