FreeNAS to vCenter 6.7 iSCSI VLAN/Tag with Nexus

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Hello All,

My current setup: I have a Dell R710 running FreeNAS 11.2 with 4 x 1Gb copper ports and 2 x 10Gb ports, connected to a core switch that does routing and storage interconnect. I have introduced a Cisco Nexus 48-port 10Gb switch as layer 2. I want to make sure I have the best-practice configuration for connecting FreeNAS to the Nexus switch.

FreeNAS has 3 pairs of dev drives, RAID 1 mirror for all of them. Each dev pair represents its own storage to vCenter running iSCSI.

1. The Nexus is connected to the core switch with a 20Gb LACP LAG (port-channel group 1), which is working and passing all the necessary VLANs.

2. On the FreeNAS side, I created lagg0 between the Dell R710 and the Nexus switch using the two 10Gb ports, and that is also working. I assigned a layer 3 IP address to lagg0 (the two 10Gb ports only), and I am using this layer 3 IP to manage FreeNAS. I also still have the existing 1Gb management port connected for web UI management.

3. On the Nexus side, I configured the two 10Gb ports as a trunk with native VLAN 100 and LACP port-channel group 2, and I can reach the layer 3 IP address on lagg0 on FreeNAS. (Rough config sketches for steps 1-3 below.)
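
For reference, here is roughly what all of that looks like in config form. The interface numbers, VLAN list, device names, and IPs below are placeholders, not my exact values, and on FreeNAS I built the lagg through the web UI rather than the shell.

Nexus uplink to the core (step 1):

    ! NX-OS sketch: 2 x 10Gb LACP port-channel trunking the needed VLANs
    interface port-channel1
      switchport mode trunk
      switchport trunk allowed vlan 13,14,16,100
    interface Ethernet1/47-48
      switchport mode trunk
      channel-group 1 mode active

FreeNAS lagg (step 2), as the UI would build it under the hood:

    # FreeBSD sketch: LACP lagg over the two 10Gb ports, with a management IP
    ifconfig lagg0 create
    ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1
    ifconfig lagg0 inet 192.168.100.5/24

Nexus ports facing FreeNAS (step 3):

    ! NX-OS sketch: trunk port-channel with native VLAN 100
    interface port-channel2
      switchport mode trunk
      switchport trunk native vlan 100
    interface Ethernet1/1-2
      switchport mode trunk
      channel-group 2 mode active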

Question:

1. For each of the 3 dev pairs for storage, do I configure 3 separate portals for vCenter?

2. If the answer to question #1 is yes, do I create 3 VLANs, assign them to the parent interface lagg0, and also assign each one the layer 3 IP address used for its portal? (See the sketch below for what I mean.)

3. Following on from question #2, these newly created VLANs would need to be tagged on the Nexus and on FreeNAS, as well as on the uplink LAG between the Nexus and the core switch. Does tagging actually work in FreeNAS, and how do you create the equivalent of a native VLAN in FreeNAS?
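
To illustrate what I mean in question #2, I am picturing something like this on FreeNAS (VLAN IDs and IPs are placeholders):

    # FreeBSD sketch: tagged VLAN child interface on top of the lagg
    ifconfig vlan13 create
    ifconfig vlan13 vlan 13 vlandev lagg0
    ifconfig vlan13 inet 192.168.13.5/24 up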

Hopefully I have shared enough accurate information here for you to suggest the best way to introduce the Nexus as a layer 2 switch with FreeNAS.

Your thoughts?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
FreeNAS has 3 pairs of dev drives, RAID 1 mirror for all of them. Each dev pair represents its own storage to vCenter running iSCSI.
This is very bad for performance, if I am understanding your terminology correctly. You might want to review these resources to get a grip on the terms, because you are either using the wrong terms or your pool layout is terrible:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://www.ixsystems.com/community...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Overview of ZFS Pools in FreeNAS from the iXsystems blog:
https://www.ixsystems.com/blog/zfs-pools-in-freenas/

Terminology and Abbreviations Primer
https://www.ixsystems.com/community/threads/terminology-and-abbreviations-primer.28174/
Your thoughts?
All the disks should be in a single pool; it sounds like you are creating three separate pools. With all the disks in a single pool made of mirrored vdevs, the pool can be split into logical sections using zvols. Be aware that with iSCSI, total pool usage should be held under 50% of capacity, due to the copy-on-write nature of ZFS.
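
If it helps, here is a rough sketch of that layout from the shell; the disk names and zvol size are examples, and in FreeNAS you would build this through the web UI rather than typing commands:

    # One pool made of three mirrored vdevs, then one zvol per iSCSI extent
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
    zfs create -V 500G tank/vmware-lun0

Every vdev added to a pool adds IOPS, which is why one pool of three mirrors outperforms three separate single-mirror pools.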
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Hi Chris,

Let's table the vdevs and ZFS pools for now; I will tackle that later. I am more concerned with the network portion, the VLANs, portals, etc., as per my initial post.

Thank you,
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I'm traveling, so I'll be brief. You do not want any kind of LAG or vPC on your iSCSI. You should have two independent iSCSI networks, one per VLAN. Let ESXi manage failover/load balancing.
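
In ESXi terms, that means one vmkernel port per iSCSI VLAN and a discovery target on each subnet, something like this (the adapter name and IPs are placeholders):

    # Example: point the software iSCSI adapter at a FreeNAS portal on each subnet
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba64 --address 192.168.13.5:3260
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba64 --address 192.168.14.5:3260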
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Kdragon,

Here are my physical connections per ESXi host; the hosts are in a cluster, and vCenter is managing all of them. Let me know if this represents what you were talking about.

Host 1 and 2

vmnic0 = Management, in a standard switch connected to the core switch, for management only.

vmnic2 = 1Gb vMotion, connected to the Cisco Nexus, vMotion VLAN 16, layer 2 only.

vmnic4 = 10Gb user data port, in a distributed switch in vCenter, connected to the Cisco Nexus (trunk port with a native VLAN and the specific allowed VLANs).

vmnic5 = 10Gb iSCSI port, in a distributed switch in vCenter, connected to the Cisco Nexus for iSCSI VLAN 13 (host 2 will be VLAN 14).

FreeNAS 10Gb ports 1/2 = connected to the Cisco Nexus as lagg0.

Change to:

FreeNAS 1Gb copper port = FreeNAS web UI management.

FreeNAS 10Gb port 1, VLAN 13, with a layer 3 IP = connected to the Cisco Nexus as an access port in VLAN 13.

FreeNAS 10Gb port 2, VLAN 14, with a layer 3 IP = connected to the Cisco Nexus as an access port in VLAN 14 (see the sketch below).
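
Roughly, that per-port layout would look like this on each side (device names, port numbers, and IPs are placeholders):

    # FreeNAS side: one 10Gb port per iSCSI subnet, no lagg
    ifconfig ix0 inet 192.168.13.5/24 up
    ifconfig ix1 inet 192.168.14.5/24 up

    ! Nexus side: matching access ports
    interface Ethernet1/1
      switchport mode access
      switchport access vlan 13
    interface Ethernet1/2
      switchport mode access
      switchport access vlan 14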
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
On the FreeNAS side, that looks better. On the hosts, each host should be in both VLANs for redundancy. Unless you need the VM "user" network to be 10Gb, use both 10Gb links only for storage on both hosts. If you need more 10Gb ports, buy more cards. Remember, if the storage goes down, everything goes with it.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Thanks for confirming the FreeNAS side. Both hosts are identical because of vMotion, and the same ports are in the same VLANs. User endpoint devices are 1Gb anyway, so it doesn't matter, with the exception of my Veeam server, which is a physical server with a 10Gb port, but that gets plugged directly into the Nexus.

The Dell R610 that serves as the host can only hold one PCI Express 2-port 10Gb card anyway, so there is no room for expansion.

Question: in what scenario would you configure FreeNAS for LACP/LAG?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Thanks for confirming the FreeNAS side. Both hosts are identical because of vMotion, and the same ports are in the same VLANs. User endpoint devices are 1Gb anyway, so it doesn't matter, with the exception of my Veeam server, which is a physical server with a 10Gb port, but that gets plugged directly into the Nexus.

The Dell R610 that serves as the host can only hold one PCI Express 2-port 10Gb card anyway, so there is no room for expansion.

Question: in what scenario would you configure FreeNAS for LACP/LAG?
LACP/LAG is a great option for load balancing many connections. It is better suited to SMB than to iSCSI or NFS.

I can find and link better information when I get home.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
It's OK, you don't have to send the info about LACP/LAG. It's too bad LACP/LAG is not recommended for iSCSI, as I was hoping for a fatter pipe... lol. And no, I don't have any 40Gb QSFP+ ports... haha
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
You can still use round robin load balancing; it's built into iSCSI multipathing, and it works better because it's handled by the protocol itself. LACP/LAG load balancing is based on source IP/MAC hashes, so it's only effective with a large number of clients. Not to mention that topology changes (failover) are better handled by the iSCSI protocol.
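
For example, on an ESXi host, flipping a LUN to round robin looks something like this (the naa device ID is a placeholder for your actual LUN):

    # Example: set the path selection policy for one device to round robin
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR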
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Got it. I wanted to ask you about something I noticed. When I make a change to a VLAN, such as changing its IP address or deleting it, and then restart FreeNAS, it really messes up the VMs, especially when I don't do a graceful shutdown of all the VMs first. There are a lot of VMs to power off, so what would you recommend when it comes to restarting FreeNAS and avoiding corruption?
 