Can you check what the ashift values are for each pool?
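For anyone wondering how to actually read that: a quick sketch, assuming a pool named "tank" (substitute your own pool name; availability of these commands varies by platform and ZFS version):

```shell
# On Illumos and most older systems, zdb can dump the cached pool
# config, which includes the ashift of each vdev:
zdb -C tank | grep ashift

# On modern OpenZFS (2.x), ashift is also exposed as a pool property:
zpool get ashift tank
```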
Anyway, it appears that "zfs list" reaches its estimate of pool free space by assuming the pool will be filled with 128K blocks, which isn't very useful. It's possible Nexenta computes it differently.
Yep, I came here to say this. ZFS always assumes 128K record sizes for its free-space calculation.
Nexenta is, of course, the Illumos flavor of OpenZFS, and it doesn't actually let you specify an ashift parameter in a zpool create command. (Maybe they've finally added it?)
Illumos-based OSes also don't automatically set the ashift to 12 when an HDD has 512b emulation capability; the drive has to be listed in the /kernel/drv/sd.conf file, which overrides the reported sector size and forces the disk to use ashift=12.
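For reference, an sd.conf override entry looks roughly like this. The vendor/product string here is a made-up example and must match your drive's inquiry string, padded exactly the way the sd driver expects:

```
# Illustrative /kernel/drv/sd.conf fragment (drive string is hypothetical):
sd-config-list =
    "ATA     HYPOTHETICAL4K", "physical-block-size:4096";
```

With that in place, the sd driver reports 4K physical sectors for the matching drive, so zpool create picks ashift=12 on its own.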
FreeBSD, Linux, and OS X, on the other hand, check whether the drive reports 4K sectors and automatically set the ashift to 12, even if the drive has 512b emulation capability.
There is also a ZFS kernel parameter named "spa_slop_shift" that controls how much of the pool's free space is reserved and unusable, and this also cuts into the AVAIL property. I'm not completely sure of this parameter's current status on FreeBSD versus Illumos, though; it could certainly differ.
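As a rough sketch of what that parameter does: the reserved "slop" space is approximately pool_size >> spa_slop_shift, and the default shift of 5 reserves 1/32 (~3.1%) of the pool. The exact clamping (there's a minimum slop size) varies by ZFS version, so treat this as an approximation:

```shell
# Approximate slop-space calculation with the default spa_slop_shift=5.
# The 10 TiB pool size is just a hypothetical example.
pool_bytes=$((10 * 1024 * 1024 * 1024 * 1024))
slop=$((pool_bytes >> 5))     # 1/32 of the pool
echo "$slop"                  # 343597383680 bytes, i.e. 320 GiB reserved
```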
One thing you can do, as others have mentioned, is use 1 MiB record sizes on an ashift=12 pool. Assuming your files aren't tiny, this will give you the same actual usable capacity as an ashift=9 pool (even though AVAIL is still reported lower, since that stat is still calculated assuming 128KiB records, not 1 MiB records).
The reason that ashift=12 with 1 MiB records gives the same space efficiency as ashift=9 with 128KiB records (for large enough files) is that 4K sectors are 8x larger than 512b sectors, and 1 MiB records are also 8x larger than 128KiB records, so each record spans the same number of sectors either way; the efficiency of distributing the file across the disks and the parity overhead come out equal.
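The arithmetic behind that 8x claim is easy to check:

```shell
# Both ratios are 8x, so a record spans the same number of sectors
# in either configuration:
sector_ratio=$((4096 / 512))          # ashift=12 sector vs ashift=9 sector
record_ratio=$((1048576 / 131072))    # 1 MiB record vs 128 KiB record
echo "$sector_ratio $record_ratio"    # prints: 8 8

# Sectors per record is identical in both cases:
echo $((1048576 / 4096)) $((131072 / 512))   # prints: 256 256
```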