Multiple NICs + iSCSI

Joined
Oct 2, 2014
Messages
925
I have reworked my plans to use a 10Gb connection, as there is a lack of drivers/support in FreeNAS for my card, but that's another subject :P

Anyway, I plan to use an Intel 4-port 1Gb NIC in the FreeNAS server, directly connected to my server via another Intel 4-port NIC.

Is it possible to configure either a 2-NIC team or a 4-NIC (all NICs) team to connect to my server via iSCSI?
I did not find much information on this. I am currently waiting on the two quad-port NICs to come in; otherwise I would do some trial and error.

Any other suggestions are also welcome, please and thank you.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
There is pretty solid support for 10Gb - Chelsio and Intel seem to be the 2 big brands.

As for teaming, yes, there is support, but you should investigate whether it will suit your environment. Unless you are supporting multiple connections/clients, you aren't going to see a huge speed increase.
 
Joined
Oct 2, 2014
Messages
925
The 10Gb NICs I have are by QLogic; I already tried them and it's a no-go, so oh well :P. As for the teaming, it's to support one server. I will essentially be moving 6TB out of 8TB from the current drives onto this iSCSI share, and then the drives the data was moved from will be relocated inside the FreeNAS server and paired with other drives to create more storage.

I was mainly trying to get more throughput out of it. I will be using two groups of 7x 4TB drives, plus a single group of 7x 2TB HDDs, and finally, for my VMs, 7x 240GB SSDs, all in RAIDZ3. My concern was saturating a single 1Gb connection and that connection being the bottleneck.
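For a rough sense of scale, here is a back-of-the-envelope sketch of the transfer time for that ~6TB move (the 75% usable-throughput figure is just an assumption, and it ignores whether the pools themselves can keep up):

# Rough transfer-time estimate for moving ~6TB over different links.
# The 0.75 efficiency factor is an assumption, not a measured number.
data_bytes = 6e12  # ~6 TB

for label, gbps in [("1 GbE", 1), ("4 x 1 GbE (MPIO)", 4), ("10 GbE", 10)]:
    usable_bytes_per_sec = gbps * 1e9 / 8 * 0.75
    hours = data_bytes / usable_bytes_per_sec / 3600
    print(f"{label}: ~{hours:.1f} hours")

That works out to roughly 18 hours on a single gigabit link versus about 4.5 hours if four links could actually be aggregated, and under 2 hours at 10Gb.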
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
You aren't going to see any performance improvement if you are just connecting one server for the transfer.
 

abcslayer

Dabbler
Joined
Dec 9, 2014
Messages
42
Hi, to improve performance with multiple NICs (teaming/LAGG, etc.), you should arrange your system so there are multiple connections to the iSCSI target; this is called multipath (MPIO).
The simplest trick I can think of is using multiple VLANs, so there are multiple distinct paths to connect to your target (LAGG is not necessary in this case).
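For example, a minimal two-path layout might look like the sketch below. The addresses are made up; the only point is that each initiator NIC and its matching target portal share a subnet that the other path does not.

import ipaddress

# Hypothetical two-path iSCSI layout: one (initiator NIC, FreeNAS portal)
# pair per subnet. Addresses are for illustration only.
paths = [
    ("10.0.10.2/24", "10.0.10.1/24"),
    ("10.0.20.2/24", "10.0.20.1/24"),
]

for initiator, portal in paths:
    i = ipaddress.ip_interface(initiator)
    p = ipaddress.ip_interface(portal)
    assert i.network == p.network, "each path must stay within its own subnet"
    print(f"path on {i.network}: initiator {i.ip} -> portal {p.ip}")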
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
It really depends on the use case. Assuming the "server" is a virtualization host, and assuming that the host software supports MPIO, then yes, in aggregate you'll get an effective 4Gbps to the iSCSI connection. (I do this with my VMware ESXi hosts, for example.)

If the VMs are connecting to FreeNAS for anything other than their .vmdk (or equivalent), then it depends on how they connect to FreeNAS. If the VM supports MPIO and it's connecting to an iSCSI target, then yes, you can get 4Gbps. If it connects some other way, probably not.

(And abcslayer is right about how to configure it - each NIC goes on a separate subnet.)
 

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
With iSCSI you are better off running two NICs independently for two separate paths. Bonding them together still confines you to the speed of a single NIC, 1Gb for each connection. If you have just this one connection, it buys you nothing. If you are hosting iSCSI targets where multiple clients connect from different addresses, then their connections will get spread across the bundle, each one still limited to the NIC chosen for its conversation.

With OSes like ESXi and other MPIO-aware OSes/drivers, you can configure two NICs across two paths and load-balance traffic that way. Again, one conversation or stream is limited to the NIC it resides on, but a second stream or conversation can be started on the second NIC.

With LACP, the algorithm that dictates which NIC the first conversation uses as its path through the bundle (or LAGG aggregation group) is exactly the same one used for every conversation. So a particular PC with a given IP and MAC address will ALWAYS take the same path via the same NIC; another PC with a unique IP and MAC address gets hashed with the same algorithm but gets a different result, because each address is unique. That PC will, however, always use its own path, and the same path for all of its conversations or streams.
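To make the hashing behavior concrete, here is a rough, made-up sketch of a layer-2 transmit hash (real switches and OSes use their own variants, often mixing in IP addresses and port numbers):

def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    # Simplified layer-2 style hash: XOR the last octet of each MAC,
    # then take the result modulo the number of links in the LAGG.
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % num_links

# One client always hashes to the same link, no matter how much it sends:
print(pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:ff:ff:10", 4))
print(pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:ff:ff:10", 4))  # identical result
# A second client may (or may not) land on a different link:
print(pick_link("aa:bb:cc:00:00:02", "aa:bb:cc:ff:ff:10", 4))

Since the inputs never change for a given pair of hosts, neither does the output, which is why a single client can never spread its traffic across the bundle.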

Depending on the iSCSI environment I am building for a particular customer, as someone said above, you usually use two subnets for the two paths; this is best practice to ensure traffic separation and disparate paths. You can, however, in a local environment, say using ESXi, get the same benefits using two IPs on the same subnet, so long as they are kept separated. If using two NICs, each NIC would be in its OWN vSwitch, and each would have its own VMkernel IP address within its own vSwitch. In this manner they can reside on the same network and still load-balance and see both paths, with a simpler setup should you not have a switch with VLANs or what have you.
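As a sketch of that same-subnet ESXi layout (the names and addresses are hypothetical; the actual configuration is done in the vSphere client, typically together with iSCSI port binding on the software adapter):

# Hypothetical same-subnet layout: one uplink and one VMkernel port per
# vSwitch, so each iSCSI path stays on its own physical NIC.
layout = {
    "vSwitch1": {"uplink": "vmnic2", "vmkernel": "vmk1", "ip": "10.0.30.11/24"},
    "vSwitch2": {"uplink": "vmnic3", "vmkernel": "vmk2", "ip": "10.0.30.12/24"},
}

for vswitch, cfg in layout.items():
    print(f"{vswitch}: uplink {cfg['uplink']} -> {cfg['vmkernel']} ({cfg['ip']})")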
 
Joined
Oct 2, 2014
Messages
925
Thanks for the replies, everyone. Dave, thank you very much for your well-written post. I have actually since picked up a Chelsio 10Gb card to use in the FreeNAS server, and I'm using one of my other 10Gb cards in my Windows server. I looked into how much aggravation multiple NICs would give me, and your post pretty much confirmed it. I went on with my life with the 10Gb cards installed, repurposed the two 4-port gigabit NICs I had purchased, and everything is swell thus far; I have a lot more testing to do with the 10Gb cards... but hey, it's a process.
 

Dave Genton

Contributor
Joined
Feb 27, 2014
Messages
133
"...everything is swell thus far; I have a lot more testing to do with the 10Gb cards... but hey, it's a process."
IT IS! And enjoy the journey. Nearly every day I am moving customers over to 10Gb or 40Gb network links, and it's fun to see the amazement in their eyes when I turn over "the keys" to their new UCS server farm, or new data center Nexus core, or what have you. 10Gb alone is worth it all the way; the reduced latency alone is noticeable beyond words. Time to go get my 10Gb NICs back in place between FreeNAS and VMware :)

dave
 
Joined
Oct 2, 2014
Messages
925
The data center I work in just started using 40Gb for the backbone infrastructure in our newest data center expansion... there have been two since the facility was built. The newest expansion is all 10Gb servers. Granted, I install the cabling, servers, and Nexus switches (2K, 5K, 7K) and don't get to see any real speeds or throughput, but I'm sure it's astonishing.

I'm only connected via 10Gb from the FreeNAS server to the Windows server... so no other use for it for now :P
 