iSCSI Storage to vCenter and SMB LACP

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Hello,

A couple of things.

iSCSI storage: Pool 1 with 8 x 4 TB drives configured as four 2-way mirror vdevs, plus separate SLOG and cache drives in this pool.
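For reference, the rough shape of that pool from the command line (built through the FreeNAS UI in practice; da0-da7, nvd0, and nvd1 are placeholder device names):

    # Four 2-way mirror vdevs, plus dedicated SLOG and L2ARC devices
    zpool create pool1 \
        mirror da0 da1 \
        mirror da2 da3 \
        mirror da4 da5 \
        mirror da6 da7 \
        log nvd0 \
        cache nvd1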

I am planning to connect my FreeNAS to vCenter using iSCSI. I know that it is not recommended to do LACP/LAGG for iSCSI on FreeNAS or vCenter.

What would be the best practice for using 2 x 10Gb SFP+ ports for iSCSI on FreeNAS connected to my datacenter switch?

Here is my hardware setup for the network adapters:

FreeNAS 11.2

6 x 10Gb SFP+ ports, 4 x 1Gb copper ports.


vCenter Cluster

3 x ESXi hosts, each with 4 x 10Gb SFP+ ports and 4 x 1Gb copper ports.

2 x 10Gb SFP+ ports from each host will be LACP-connected to the datacenter switch for VM traffic on VLANs.

2 x 10Gb SFP+ ports will be used for iSCSI to FreeNAS.

The 1Gb copper ports will be used for vMotion and ESXi management.

SMB on FreeNAS: Pool 2, RAIDZ2 (the ZFS equivalent of RAID 6) using 4 x 4 TB drives for file storage.
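Similarly, the file-share pool from the command line would look roughly like this (again with placeholder device names):

    # Single RAIDZ2 vdev for the SMB share
    zpool create pool2 raidz2 da8 da9 da10 da11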

Is it recommended to use 2 x 10Gb SFP+ ports for SMB file sharing on FreeNAS using LACP? If not, what would be the best practice for using the 10Gb ports for file sharing?

Thanks,
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Use two of the 10GbE ports in LACP for the file share. Two more ports, independent, in separate VLANs/non-overlapping subnets for iSCSI - then configure a vmkernel port on each ESXi host in each of those subnets.
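On each ESXi host, a minimal sketch of that from the command line, with port group names and addressing as placeholders:

    # vmkernel port in the first iSCSI subnet
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
    esxcli network ip interface ipv4 set -i vmk1 -I 172.16.13.11 -N 255.255.255.0 -t static

    # vmkernel port in the second iSCSI subnet
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B
    esxcli network ip interface ipv4 set -i vmk2 -I 172.16.14.11 -N 255.255.255.0 -t static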

Your final two 10GbE ports on the FreeNAS could be teamed for future use in a replication or backup solution, or left idle for now.
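For the LACP pairs on the FreeNAS side, the underlying FreeBSD configuration looks roughly like this (normally done through the Network > Link Aggregations UI; ix0/ix1 and the address are placeholders):

    # Create an LACP lagg from two 10GbE interfaces
    ifconfig lagg0 create
    ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1
    ifconfig lagg0 inet 192.168.1.10 netmask 255.255.255.0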

What are your intended cache and SLOG devices? For multiple 10GbE links you should be looking at very high-end NVMe like the Optane DC P4800X series, or even trying to get NVRAM support.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Thanks for the clarity. That was what I was expecting to hear back. But get this: the Intel DC P3700, when plugged into the Dell R730xd, made the fans run at 15,000 RPM. So I am using 2 x 146 GB SAS non-RAID drives in the back of the Dell for cache.

For SLOG: Intel Optane SSD 900P (280 GB AIC, PCIe 3.0 x4, 3D XPoint).

For cache: I wanted to use the Intel DC P3700 800 GB NVMe PCIe 3.0 NAND SSD (SSDPEDMD800G4), but am now using 2 x 146 GB SAS non-RAID drives instead.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
The Optane doesn't make the fans go to 15,000 RPM, only around 8,500, which is normal. The 146 GB drives in the rear bay came with the server and don't affect the fan speed.

Question: since I will have a separate VLAN and subnet for each iSCSI 10Gb NIC on FreeNAS (172.16.13.0/24 and 172.16.14.0/24), can I create just one portal on FreeNAS, say 172.16.13.1, and use that same portal in vCenter for both iSCSI ports? Or should I create a second portal, 172.16.14.1, on FreeNAS and apply it as a target on the second iSCSI NIC in vCenter?
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Actually, I found the answer to the number of portals required: pretty much one portal IP mapped to each iSCSI NIC in vCenter.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Actually, I found the answer to the number of portals required: pretty much one portal IP mapped to each iSCSI NIC in vCenter.
Correct; you need a single portal listening on multiple IPs (e.g. 172.16.13.1 and 172.16.14.1) - your VMware systems should then be able to find and access both paths during discovery if given a single IP address.
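The ESXi side of the discovery is then a single SendTargets entry per host; a minimal sketch, assuming the software iSCSI adapter is vmhba64 (a placeholder):

    # Dynamic discovery against one portal IP; SendTargets should
    # return the listeners on both subnets
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=172.16.13.1:3260

    # Rescan to pick up the LUNs and both paths
    esxcli storage core adapter rescan --adapter=vmhba64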
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
In VMware you have to set the LUN's path policy to Round Robin, and you have to use the command line to set IOPS=1, as the default is IOPS=1000.
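Something along these lines, with the naa device identifier as a placeholder:

    # Set the path selection policy for the LUN to Round Robin
    esxcli storage nmp device set --device naa.XXXX --psp VMW_PSP_RR

    # Switch paths after every I/O instead of every 1000
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXX --type iops --iops 1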

It is easier to use two VLANs and just use NFS 4.x. iSCSI sucks: if you have a misconfigured LUN and reboot your VMware host, expect a long boot-up, 15-30 minutes... With NFS 4.x there are no Round Robin settings or IOPS=1; it just works when you supply two IP addresses.
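Mounting an NFS 4.1 datastore with both addresses is one command per host (share path and datastore name are made up here):

    # NFS 4.1 datastore with two server IPs for multipathing
    esxcli storage nfs41 add --hosts 172.16.13.1,172.16.14.1 --share /mnt/pool1/vmstore --volume-name nfs-ds1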

When you use LACP you might get both NFS connections on the same physical port, so you would have to do something odd on the FreeNAS side; on VMware, just prefer one port for VLAN A and the other port for VLAN B.

vMotion wants to use your 10Gb connections. :smile:
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
RJ,

You have me thinking. I am going to do some research on NFS vs iSCSI and look at performance (IOPS/latency) testing with different operating systems.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Here's a thread that, while a bit old, offers some info about testing the two:

 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Interesting, that link you sent. Fortunately for me, I am in the pre-staging process, so I can switch to NFS if need be. I am not familiar with NFS on vCenter and FreeNAS.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
In short: do you recommend 2 x 10Gb ports in LACP on FreeNAS using the NFS protocol, and on the vCenter side, LACP in the vDS for NFS?

For file sharing, 2 x 10Gb ports in LACP with SMB?

On the ESXi host side, 2 x 10Gb ports for NFS mapped to the vCenter vDS and the other 2 x 10Gb ports for VM traffic/VLANs. I don't have any more 10Gb ports for vMotion, so I will use the 1Gb copper ports in LACP in the vDS.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm not 100% sure on the status of the NFS 4.1/pNFS support between VMware and FreeNAS, but I do know that the iSCSI target is quite mature at this point and has full support for all of the VAAI primitives. If you're more familiar with iSCSI, I would be tempted to suggest you stay there.

If you do go to NFS though, the way you have it designed will work out fine for single NFS links.

If you have vSphere Enterprise (or Platinum), I would be tempted to use less LACP and more failover, unless you feel that any one vmkernel interface will require >10GbE on a regular basis. That way you would be able to have 10GbE for your VM/VLAN aggregates, 10GbE for your vMotion, and 10GbE for your NFS/storage traffic.
 

marlonc

Explorer
Joined
Jan 4, 2018
Messages
75
Yeah, I am more familiar with iSCSI, so I will stick with that.

I have vSphere Enterprise with vCenter. I will use LACP but will use the 1Gb copper for vMotion. I know you said to use 10Gb for vMotion, but I only have 4 x 10Gb ports on the Dell R610s.

2 x 10Gb ports on separate iSCSI networks for multipath iSCSI, and 2 x 10Gb ports in LACP in the vDS for VM traffic/VLANs.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
If you have vSphere Enterprise, you also have Network I/O Control - you could set up 2 x 10Gb in non-LACP failover, with your VM/VLANs primary on one and vMotion primary on the other, and NIOC enforcing a 50% share maximum between the two, so that in a failover scenario no one traffic type can overwhelm a single interface. iSCSI could be left as 2 x 10Gb MPIO. We're getting outside the realm of FreeNAS tuning, though.
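As a rough illustration of the active/standby idea, shown with standard vSwitch syntax and placeholder port group and uplink names (on a vDS the same teaming order is set per distributed port group, and the shares under the NIOC settings):

    # VM traffic primary on vmnic2, failing over to vmnic3
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="VM Network" --active-uplinks=vmnic2 --standby-uplinks=vmnic3

    # vMotion primary on vmnic3, failing over to vmnic2
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="vMotion" --active-uplinks=vmnic3 --standby-uplinks=vmnic2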
 