4 NICs, 2 switches

Status: Not open for further replies.

snas
Cadet · Joined Dec 9, 2014 · Messages: 3
First off, let me describe the situation:

We've just received our new server:
- 32 GB RAM
- 2x 100 GB enterprise SSDs (which we'll put into RAID 1 for the write cache)
- 8x 2 TB SAS disks (RAID 10)
- 2x 512 GB SSDs (RAID 1, for the read cache)
- 4x Gbit NICs
- running the latest version of FreeNAS

What we currently have:
- 3x gigabit switches (storage1, storage2, internet: 3x Cisco 2960S switches)
- 4x ESXi hosts, with 3 NICs per server

We started brainstorming about how best to connect the new storage unit for maximum performance.

The first idea was to configure FreeNAS with two LACP port groups of 2 NICs each. Then somebody suggested a different approach: set up 4 separate IP ranges, configure 2 VLANs per switch, connect two hosts to VLANs 1+3 and the other two to VLANs 2+4:

freenas nic1: 192.168.1.1, connected to switch A, vlan 1
freenas nic2: 192.168.2.1, connected to switch A, vlan 2
freenas nic3: 192.168.3.1, connected to switch B, vlan 3
freenas nic4: 192.168.4.1, connected to switch B, vlan 4

srv1, storagenic1, 192.168.1.11, connected to switch A, vlan 1
srv1, storagenic2, 192.168.3.11, connected to switch B, vlan 3

srv2, storagenic1, 192.168.1.12, connected to switch A, vlan 1
srv2, storagenic2, 192.168.3.12, connected to switch B, vlan 3

srv3, storagenic1, 192.168.2.13, connected to switch A, vlan 2
srv3, storagenic2, 192.168.4.13, connected to switch B, vlan 4

srv4, storagenic1, 192.168.2.14, connected to switch A, vlan 2
srv4, storagenic2, 192.168.4.14, connected to switch B, vlan 4
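To sanity-check the plan above, here is a small sketch (host and NIC names are just illustrative labels, not from any real config) that models the addressing table and verifies that every host ends up with exactly two independent paths to FreeNAS, one via each physical switch, and that subnet and VLAN assignments agree on both ends:

```python
import ipaddress

# Model of the proposed /24 addressing plan (labels are illustrative).
freenas = {
    "nic1": ("192.168.1.1", "A", 1),
    "nic2": ("192.168.2.1", "A", 2),
    "nic3": ("192.168.3.1", "B", 3),
    "nic4": ("192.168.4.1", "B", 4),
}
hosts = {
    "srv1": [("192.168.1.11", "A", 1), ("192.168.3.11", "B", 3)],
    "srv2": [("192.168.1.12", "A", 1), ("192.168.3.12", "B", 3)],
    "srv3": [("192.168.2.13", "A", 2), ("192.168.4.13", "B", 4)],
    "srv4": [("192.168.2.14", "A", 2), ("192.168.4.14", "B", 4)],
}

def subnet(ip):
    # All ranges in the plan are assumed to be /24s.
    return ipaddress.ip_interface(ip + "/24").network

def paths(host_nics):
    # A host reaches a FreeNAS NIC only if both sit in the same subnet;
    # when they do, switch and VLAN must also match on both ends.
    found = []
    for ip, switch, vlan in host_nics:
        for nas_ip, nas_switch, nas_vlan in freenas.values():
            if subnet(ip) == subnet(nas_ip):
                assert (switch, vlan) == (nas_switch, nas_vlan)
                found.append(switch)
    return sorted(found)

for name, nics in hosts.items():
    assert paths(nics) == ["A", "B"], name
print("every host has one path via switch A and one via switch B")
```

Each FreeNAS NIC is shared by exactly two hosts in this layout, so per-host throughput depends on how evenly the VM load splits across the two VLAN pairs.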


What do you think about such a setup, or would you choose a different approach?
 

bestboy
Contributor · Joined Jun 8, 2014 · Messages: 198
If you can use iSCSI for the ESXi hosts, then you could do a setup like this:

On FreeNAS
  • 1 interface for management (Web GUI, SSH) and updates [172.30.1.1]
  • 3 interfaces for an iSCSI portal [172.30.11.1, 172.30.22.1, 172.30.33.1]
Connect each iSCSI interface to one of the 3 switches (*).

Connect each of the 3 host NICs to one switch and use MPIO in the iSCSI initiator to span over the 3 subnets.

(*) Ideally you'd have 3 dedicated storage switches. If only 2 switches can be dedicated to the storage network, that's probably fine too. With VLANs you can share the internet switch (isolating normal from storage traffic) or one of the storage switches (isolating the 2 storage subnets).
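The key to this layout is that each host NIC binds to the one portal address in its own subnet, giving three independent iSCSI sessions that MPIO can then load-balance across. A sketch of that matching (portal addresses follow the example above; the host-side IPs are assumptions):

```python
import ipaddress

# Portal addresses from the suggested layout; host NIC IPs are assumed.
portals = ["172.30.11.1", "172.30.22.1", "172.30.33.1"]
host_nics = ["172.30.11.15", "172.30.22.15", "172.30.33.15"]

def portal_for(nic_ip, portals, prefix=24):
    # Return the portal that shares this NIC's subnet (assumed /24).
    net = ipaddress.ip_interface(f"{nic_ip}/{prefix}").network
    for p in portals:
        if ipaddress.ip_address(p) in net:
            return p
    return None

# One binding per NIC -> three independent sessions for MPIO to span.
bindings = {nic: portal_for(nic, portals) for nic in host_nics}
print(bindings)
```

With three sessions established, a round-robin MPIO policy on the initiator can push traffic over all three links at once, which LACP's per-flow hashing cannot guarantee for a single host/target pair.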
 

cyberjock
Inactive Account · Joined Mar 25, 2012 · Messages: 19,526
Umm... yeah, you need to slow down. 1 TB of L2ARC with 32 GB of RAM is going to be a failure for you. You shouldn't exceed 5x your ARC size, which for 32 GB of RAM means something in the ballpark of 100-120 GB, max.

Also, mirroring L2ARC isn't possible from the WebGUI, unless there's some way I'm not aware of to do that. Not to mention it's not really recommended anyway, since a failure of the L2ARC just means reads fall back to the pool instead of the L2ARC. :P

I think you have a lot more reading to do before you jump into this. If I had a time machine, I'm betting your next thread would be "my performance sucks... why?", as you're falling into the same trap that so many others have fallen into before you.

MPIO is pretty much *always* better than LACP, unless there are network hardware reasons you can't (or shouldn't) do it that way.
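The 5x rule of thumb above can be checked with quick arithmetic (the 20-24 GB ARC range is an assumption: roughly what's left of 32 GB after the OS and other consumers take their share):

```python
# Rule-of-thumb L2ARC sizing check.
RAM_GB = 32
arc_low, arc_high = 20, 24   # assumed usable ARC range on a 32 GB box
L2ARC_RATIO = 5              # don't exceed ~5x the ARC size

max_l2arc = (arc_low * L2ARC_RATIO, arc_high * L2ARC_RATIO)
print(max_l2arc)  # (100, 120) -> the "100-120 GB, max" ballpark
```

The reason for the cap: every L2ARC block needs a header held in the ARC itself, so an oversized L2ARC eats the RAM that the (much faster) ARC would otherwise use.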
 