iSCSI Multipath help

Status
Not open for further replies.

optikul81

Cadet
Joined
Oct 12, 2015
Messages
3
First, here is the setup I am using:

Dell PE R410
FreeNAS 9.10.2-U1
2x Xeon E5520 quad-core CPUs
12GB ECC RAM
4x 2TB Seagate Constellation ES drives in RAID-Z1
1x 16GB SanDisk Cruzer as boot drive
Dell PERC 6/iR (flashed to LSI IT firmware)
Dual Broadcom 1Gb NICs
Primary Interface - 10.1.100.217/24
Alias 10.1.101.129/25
Secondary Interface - 10.1.101.1/25

Connected to Dell PowerConnect 5448 Gigabit Managed switch. LACP is not being used.

Connected to Dell PE R610
XenServer 7.0
Interface 1 - 10.1.101.2/25
Interface 2 - 10.1.101.130/25

I have both IPs listed under Portal 1
Under Xen, it shows both paths as connected (2 of 2 paths active)
* LUN Selected to allow Multipath

During testing, I am only getting a maximum of 110MB/s using CrystalDiskMark 5.1.2 x64.
FreeNAS shows only one path being used: bce0, the primary interface, carries all the traffic while bce1 sits idle.
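
For reference, this is roughly how interface usage can be watched from the FreeNAS shell (bce0/bce1 are my NIC names; adjust as needed):

Code:
# live per-interface traffic, refreshed every second
systat -ifstat 1
# or watch a single NIC's packet/byte counters
netstat -w 1 -I bce0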

Any ideas what I could be doing wrong in the configuration? This is just for a home lab, so speed is not an absolute must, but since I can't add a 10Gb card to the R410, this is my only option for increasing bandwidth.

Thanks!
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
I assume you are using ONLY the /25 networks for iSCSI (the 10.1.100.217/24 is for FreeNAS management?). You're only seeing FreeNAS utilize one interface because you are using the same IP subnet for both. You should be using a different subnet for each interface if you want MPIO to work; FreeNAS will only send traffic out of one interface if both are on the same subnet. Also, MPIO will never give a single stream an increase in bandwidth, only more aggregate throughput (meaning if you had multiple clients, they could theoretically each get 110MB/s at the same time). The 110MB/s you're seeing is all you're going to get out of a single 1Gb link.
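
You can see this on the FreeNAS side by looking at the routing table; roughly (interface names and routes will vary with your setup):

Code:
# with both NICs in the same subnet there is a single connected route,
# so replies to both initiator IPs leave through the same NIC
netstat -rn -f inet
# with one subnet per NIC you should instead see one connected route per interface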
 

optikul81

Cadet
Joined
Oct 12, 2015
Messages
3
I assume you are using ONLY the /25 networks for iSCSI (the 10.1.100.217/24 is for FreeNAS management?). You're only seeing FreeNAS utilize one interface because you are using the same IP subnet for both. You should be using a different subnet for each interface if you want MPIO to work.

bigphil, thank you for your response. You are correct, I am only using the 10.1.100.0/24 for management.

The other subnets are strictly for iSCSI with no default route assigned.

Management - 10.1.100.0/24
iSCSI Network 1 - 10.1.101.0/25
iSCSI Network 2 - 10.1.101.128/25

Each interface is on its own network.

I will test with two separate hosts; my earlier test used two VMs on the same server rather than separate hosts.

Unfortunately, I'm stuck with the built-in NICs until I build out a new NAS box.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
You still need to use different subnets for iSCSI (e.g. 10.1.101.x and 10.2.101.x); otherwise you'll still see traffic go out only one interface on FreeNAS.
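
Something along these lines is what I mean; just an example layout (addresses are placeholders, use whatever ranges fit your network):

Code:
FreeNAS bce0  10.1.101.1/24  <->  XenServer NIC1  10.1.101.2/24
FreeNAS bce1  10.2.101.1/24  <->  XenServer NIC2  10.2.101.2/24

Then list both FreeNAS IPs under the same iSCSI portal so the initiator logs in over both paths.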
 

optikul81

Cadet
Joined
Oct 12, 2015
Messages
3
bigphil,

I have to admit, I was skeptical that moving the iSCSI networks from two /25s to two separate /24 subnets would make a difference, but after testing it appears to be working. I had two separate sessions going, each using a different NIC, and each transferring up to 110MB/s!
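
For anyone else checking this, the paths can also be confirmed from the XenServer side with something like:

Code:
# list the open iSCSI sessions (there should be one per FreeNAS portal IP)
iscsiadm -m session
# show the multipath map for the LUN and which paths are active
multipath -ll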

I find it odd that FreeNAS can't differentiate between a /25 subnet and a /24 subnet....
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
bigphil,

I have to admit, I was skeptical that moving the iSCSI networks from two /25s to two separate /24 subnets would make a difference, but after testing it appears to be working. I had two separate sessions going, each using a different NIC, and each transferring up to 110MB/s!

I find it odd that FreeNAS can't differentiate between a /25 subnet and a /24 subnet....
FreeNAS would have no problem with a /25, but the issue I saw with your setup was that although you had two interfaces, they were on the same subnet. By default, FreeNAS will only route back out of one interface with that kind of setup, hence configuring a different subnet on each interface so traffic goes out both.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
I think the question should be addressed to XenServer. VMware has a setting for the load-balancing algorithm that controls how multiple links are utilized. In any case, take into account that it may not always be possible to split the load between multiple links: with a single read stream, an attempt to split the load may cause requests to be reordered, which confuses the ZFS read prefetcher and may reduce performance below what you would expect. I would say it is mostly safe with multiple concurrent VMs, where each can be routed via a different path, but I have no idea whether XenServer can do that.
 