ESXi + iSCSI + MPIO 10G


hidagar

Cadet
Joined
Sep 27, 2017
Messages
7
Hi all,

I'm trying to set up ESXi 6.5 with iSCSI + MPIO against FreeNAS. This is the configuration I set up:

ESXi: 1 vmnic with 172.20.104.43 - Round Robin
1 vmnic with 172.20.105.43 - Round Robin

Switch split into 2 untagged VLANs, one per network, to keep them separate

FreeNAS setup: ix0 - 172.20.104.20
ix1 - 172.20.105.20

Everything is configured with MTU 9000.
The problem is that when I enable both paths, ESXi becomes really slow and doesn't discover anything; it's like a loop. Any ideas?


BR
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Start by setting a normal MTU of 1500 on the vmkernel NICs, vSwitches, physical switch, and the FreeNAS interfaces. Then, if it's still not working, check your extent-target mappings in FreeNAS.
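
For reference, dropping back to 1500 can be done from the ESXi CLI roughly like this (the vSwitch and vmk names here are assumptions, substitute your own):

Code:
# set the iSCSI vSwitch and its vmkernel ports back to MTU 1500
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=1500
esxcli network ip interface set --interface-name=vmk1 --mtu=1500
esxcli network ip interface set --interface-name=vmk2 --mtu=1500

# on the FreeNAS side (normally done in the GUI under Network -> Interfaces
# so it persists across reboots)
ifconfig ix0 mtu 1500
ifconfig ix1 mtu 1500

If you ever want to go back to jumbo frames, vmkping with don't-fragment set is a quick end-to-end check; it fails if anything in the path is still at 1500:

Code:
# 8972 bytes of payload + IP/ICMP headers = a full 9000-byte frame
vmkping -I vmk1 -d -s 8972 172.20.104.20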

Also, just a heads-up: you will not see much more than about 940MB/s even with RR, though you can set your RR IO operation limit to less than 1000 and get better IOPS depending on the number of hosts, VMs, and workloads.
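
The IO operation limit is set per device; a minimal sketch (the naa identifier below is a placeholder, grab the real one from the device list):

Code:
# list devices to find the iSCSI LUN's naa identifier and confirm it's on VMW_PSP_RR
esxcli storage nmp device list

# lower the path-switch threshold from the default 1000 IOs to 1
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=naa.xxxxxxxxxxxxxxxx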
 
Joined
Mar 22, 2016
Messages
217
MTU 9000. I've never had any luck with ESXi and FreeNAS with the MTU set to anything over 1500.
 

hidagar

Cadet
Joined
Sep 27, 2017
Messages
7
Thanks for your help,

I changed everything to 1500 and now it's working. I don't know why, because I checked both Ethernet cards and they work with jumbo frames...

With the default IOPS = 1000 I get these results:
Write: 350Mbps per NIC, 700Mbps total
Read: 255Mbps per NIC, 510Mbps total

With IOPS = 1 I get these results:
Write: 350Mbps per NIC, 700Mbps total
Read: 295Mbps per NIC, 590Mbps total

I'm using SSD drives and tested with the dd command. Should I get more speed? Should I use 1 or the default value? The results are mostly the same.

BR
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Thanks for your help,

I changed everything to 1500 and now it's working. I don't know why, because I checked both Ethernet cards and they work with jumbo frames...

With the default IOPS = 1000 I get these results:
Write: 350Mbps per NIC, 700Mbps total
Read: 255Mbps per NIC, 510Mbps total

With IOPS = 1 I get these results:
Write: 350Mbps per NIC, 700Mbps total
Read: 295Mbps per NIC, 590Mbps total

I'm using SSD drives and tested with the dd command. Should I get more speed? Should I use 1 or the default value? The results are mostly the same.

BR
We would need to know exactly how your FreeNAS is configured. What model drives, how many, controller, backplane if any, memory, CPU model, motherboard, ZFS array config, ZFS options, and probably a bunch of things I'm forgetting.

Until we have that information, my answer to your question "Should I get more speed?" is Cats.
 

hidagar

Cadet
Joined
Sep 27, 2017
Messages
7
We would need to know exactly how your FreeNAS is configured. What model drives, how many, controller, backplane if any, memory, CPU model, motherboard, ZFS array config, ZFS options, and probably a bunch of things I'm forgetting.

Until we have that information, my answer to your question "Should I get more speed?" is Cats.
The server is a Dell PowerEdge R510 (12 bays), 2x Xeon E5620 2.40GHz, 64GB RAM, with a PERC H200 flashed to LSI 9211-8i IT mode. The SSD drives are Kingston HyperX Savage 480GB SATA3.

Thanks
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
The server is a Dell PowerEdge R510 (12 bays)... The SSD drives are Kingston HyperX Savage 480GB SATA3.
Does this mean you have 12 SSDs? How is your pool configured? Do you have compression enabled? Do you have a SLOG on your pool? Is your pool set to sync=always?
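
All of that is quick to check from the FreeNAS shell; a sketch (substitute your own pool name):

Code:
# vdev layout, including any log (SLOG) device
zpool status poolname

# compression and sync settings on the pool and its zvols
zfs get -r compression,sync poolname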
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Let's start with your pool and pool settings.
zpool list -v
zfs list -o
 

hidagar

Cadet
Joined
Sep 27, 2017
Messages
7
Let's start with your pool and pool settings.
zpool list -v
zfs list -o
Code:
root@freenas:~ # zpool list -v
NAME									 SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
VMware_HDD							  18.1T   109M  18.1T		 -	 0%	 0%  1.00x  ONLINE  /mnt
  raidz1								18.1T   109M  18.1T		 -	 0%	 0%
	gptid/bfc64e9c-2cfa-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
	gptid/c0db5ce7-2cfa-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
	gptid/c1e6179d-2cfa-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
	gptid/c2f0a7b6-2cfa-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
	gptid/c3fa9e49-2cfa-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
spare									   -	  -	  -		 -	  -	  -
  gptid/c61a15fb-2cfa-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
VMware_SO							   1.73T   718M  1.73T		 -	 0%	 0%  1.00x  ONLINE  /mnt
  raidz1								1.73T   718M  1.73T		 -	 0%	 0%
	gptid/fa25c797-2c51-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
	gptid/fa9b72a9-2c51-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
	gptid/fb0e4dfb-2c51-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
	gptid/fb87e93a-2c51-11e8-a5f2-d4ae527be857	  -	  -	  -		 -	  -	  -
freenas-boot							 111G  2.36G   109G		 -	  -	 2%  1.00x  ONLINE  -
  mirror								 111G  2.36G   109G		 -	  -	 2%
	da0p2								   -	  -	  -		 -	  -	  -
	da1p2								   -	  -	  -		 -	  -	  -


root@freenas:~ # zfs list
NAME														 USED  AVAIL  REFER  MOUNTPOINT
VMware_HDD												  12.2T  1.84T   141K  /mnt/VMware_HDD
VMware_HDD/VMware_DATA									  8.13T  9.96T  56.3M  -
VMware_HDD/VMware_HDD_SO									4.06T  5.90T  28.8M  -
VMware_SO												   1.02T   210G   128K  /mnt/VMware_SO
VMware_SO/.system											512M   210G   508M  legacy
VMware_SO/.system/configs-76c11d7f8a944b3d8e42fe35420dbaa3  2.99M   210G  2.99M  legacy
VMware_SO/.system/cores									  610K   210G   610K  legacy
VMware_SO/.system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3	   128K   210G   128K  legacy
VMware_SO/.system/samba4									 145K   210G   145K  legacy
VMware_SO/.system/syslog-76c11d7f8a944b3d8e42fe35420dbaa3	128K   210G   128K  legacy
VMware_SO/VMware_SO										 1.02T  1.22T  8.39M  -
freenas-boot												2.36G   105G	64K  none
freenas-boot/ROOT										   2.34G   105G	29K  none
freenas-boot/ROOT/11.1-RELEASE							  3.92M   105G   829M  /
freenas-boot/ROOT/11.1-U3								   2.34G   105G   836M  /
freenas-boot/ROOT/Initial-Install							  1K   105G   727M  legacy
freenas-boot/ROOT/default									159K   105G   727M  legacy
freenas-boot/grub										   6.85M   105G  6.85M  legacy



If I use zfs list -o I get this error:

Code:
zfs list -o
invalid option 'o'
usage:
        list [-Hp] [-r|-d max] [-o property[,...]] [-s property]...
            [-S property]... [-t type[,...]] [filesystem|volume|snapshot] ...
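
From the usage line I guess -o wants a property list after it, something like:

Code:
zfs list -r -o name,used,avail,compression,sync VMware_SO VMware_HDD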


THX
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
At first glance, if you intend to get the most performance out of the pool for a VM datastore, you would usually use mirrored pairs, not Z1. You can do some local testing (dd or iozone), but I imagine a mirrored layout is going to yield the higher result.
 

hidagar

Cadet
Joined
Sep 27, 2017
Messages
7
I want performance but also stability and reliability. Do you think Z1 is the best I can use? Do you recommend another type of configuration?

I have 4 SSDs + 6 HDDs

Thank you
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
I want performance but also stability and reliability. Do you think Z1 is the best I can use? Do you recommend another type of configuration?

I have 4 SSDs + 6 HDDs

Thank you
Cyberjock has a post where he'll try to convince you that Z1 should not be used anymore. I think it was on the basis that it takes too long to resilver with new (huge) drives.

Most people doing high-performance VMware datastore setups swear by mirrors. Since it's basically RAID10, you get the reliability of a mirror for every vdev, but you have to give up half of your usable space; everything is a trade-off. Z1/Z2 will get you more capacity, but (usually) lower performance, since you have fewer vdevs to absorb IO. If you keep researching on the forum (which is smart), there are bunches of posts; jgreco has some good ones on high-performance configs.
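
To make that concrete, striped mirrors across your four SSDs would look roughly like this (device names made up; on FreeNAS you would build it through the Volume Manager rather than at the command line):

Code:
# two mirrored pairs striped together (RAID10-style): two vdevs to spread IO across
zpool create VMware_SO mirror da2 da3 mirror da4 da5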

In the end, you don't have to take anyone's word for it. After you read some good testing posts that cover turning off compression and making sure you don't get cached results, you can do your own testing against the different pool configurations you want to use and see which one works best for you.
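
For example, a rough local dd pass might look like this (dataset name made up; size the file well past your 64GB of RAM so reads don't just come out of ARC):

Code:
# scratch dataset with compression off, so streams of zeros aren't compressed away
zfs create VMware_SO/ddtest
zfs set compression=off VMware_SO/ddtest

# ~80GiB sequential write, then read the same file back
dd if=/dev/zero of=/mnt/VMware_SO/ddtest/testfile bs=1M count=81920
dd if=/mnt/VMware_SO/ddtest/testfile of=/dev/null bs=1M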

My two cents, mirrored pairs for a VMware datastore; all day, every day!
 