Find the bottleneck? TrueNAS 12 and ESXi 6.7

aaronski1
Cadet · Joined Feb 7, 2019 · Messages: 6
Hi there! New server, new problems. The TrueNAS server is a Dell R730xd with 320GB RAM, 12x 8TB SSDs, and a quad-port 10Gb NIC (only one port hooked up at the moment).
The ESXi server is the same hardware with no drives, booting off an SD card.
The TrueNAS install is stock TrueNAS 12.0 RELEASE. I set up a 6-drive RAIDZ vdev, then added a second 6-drive RAIDZ vdev to extend the pool, so it's essentially a RAID 50. Compression is off; dedup is also off.
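For context, the CLI equivalent of that layout would be roughly the following sketch (pool name taken from the mount point below; disk names are examples, and the TrueNAS GUI does roughly the same thing under the hood):

```shell
# Two striped 6-disk RAIDZ vdevs ("RAID 50"-style layout).
zpool create r50 raidz da0 da1 da2 da3 da4 da5
zpool add r50 raidz da6 da7 da8 da9 da10 da11

# Compression and dedup off, as described above.
zfs set compression=off dedup=off r50
```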
Here are the speeds I'm getting:
Write test: dd if=/dev/zero of=/mnt/r50/somefile bs=4m count=10000 gives 1.6GB/s, which is fine.
Read test: dd if=/mnt/r50/somefile of=/dev/null bs=4m count=10000 gives 3.6GB/s, which is great.
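For what it's worth, here's a cleaned-up version of those dd tests as a script. The file path and the small default COUNT are placeholders so the script runs anywhere; point TESTFILE at the pool and raise COUNT back to 10000 for the real benchmark:

```shell
# Corrected form of the dd tests. TESTFILE defaults to /tmp here so the
# script runs anywhere; for the real test use the pool mount, e.g.
# /mnt/r50/somefile, and COUNT=10000 (40 GiB).
TESTFILE="${TESTFILE:-/tmp/r50-dd-test}"
COUNT="${COUNT:-100}"

# Write test: stream zeros to the file. bs is spelled numerically
# (4 MiB) so it works on both BSD and GNU dd. Fair only with
# compression off, since all-zero data is trivially compressible.
dd if=/dev/zero of="$TESTFILE" bs=4194304 count="$COUNT"

# Read test: note that with 320GB of RAM, a 40 GiB file is largely
# served from ARC, so a file larger than RAM measures the disks better.
dd if="$TESTFILE" of=/dev/null bs=4194304

rm -f "$TESTFILE"
```

One caveat on the original numbers: a 40 GiB read on a 320GB-RAM box is mostly an ARC (cache) read, so the 3.6GB/s figure may flatter the disks.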

Then I created an NFS share and an iSCSI share, and mounted a virtual drive in ESXi over each to test. When I run a disk test within the VM, I get the following:
NFS:
4k random IO read - 1.06GB/s, which is max throughput for the pipe and fine.
4k random IO write - 140MB/s, barely a tenth of the dd write speed. This is crap. But "it's NFS and ESXi," I hear you say! I agree, so scratch that; we'll do iSCSI:

iSCSI:
4k random IO read is now down to only 850MB/s, but
4k random IO write is pegged at 1.05GB/s, which is great.
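To see where the read-side bottleneck sits, it may help to watch the TrueNAS side while the VM benchmark is running; a couple of standard commands (pool name r50 assumed from the mount point above):

```shell
# Per-vdev/per-disk bandwidth and IOPS, refreshed every second:
zpool iostat -v r50 1

# Per-physical-disk busy% on FreeBSD-based TrueNAS; if the disks sit
# mostly idle during the iSCSI read test, the bottleneck is upstream
# (network, iSCSI target, or ESXi initiator), not the pool.
gstat -p
```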

My question is: we know the pool is capable of more than 10Gbit read and write, likely much more. What can I tweak to get the extra 200MB/s out of the iSCSI read speed? I know the network and servers can do it, because NFS reaches that speed, so it seems it has to be an issue with iSCSI settings on either ESXi or TrueNAS. Please let me know what other info you'd like from the servers; happy to provide details. Once I'm back in the office in a few months I'll set up MPIO and see if I can get full disk speed using multiple 10Gb links.
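When you do get to MPIO, a rough sketch of the ESXi side, assuming the default NMP multipathing stack (the device ID is a placeholder; list yours with `esxcli storage nmp device list`):

```shell
# Switch the iSCSI LUN's path selection policy to round-robin so I/O
# is spread across the multiple 10Gb paths:
esxcli storage nmp device set --device naa.XXXX --psp VMW_PSP_RR

# Optionally lower the round-robin switching threshold from the default
# 1000 IOPS to 1, so paths alternate per-I/O:
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.XXXX --type iops --iops 1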