Hello,
In our office we have a FreeNAS box serving the network over a LAGG of 4x 1Gbit interfaces, and it also has another interface, not in the LAGG, running on DHCP as a backup. Everything worked fine right away, with no problems, up until last week when we fired up a third VM on the NAS.
Somehow over the weekend the Windows machines decided to restart overnight, and FreeNAS decided to recreate the bridge that feeds all the VMs, swapping out the single DHCP interface and putting the LAGG in instead. This morning I had to manually remove the LAGG from the bridge and add the DHCP interface back to get everything working. The thing is that the bridge does work even with the LAGG interface as a member, but it leads to packet loss and dropped pings. I really can't use the LAGG for bridging, which is why I'm using a dedicated interface for the VMs.
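For reference, the manual fix was roughly the following (a sketch; I'm assuming the bridge0, lagg0, and igb0 names from the ifconfig output below, run as root):

```shell
# Sketch of the manual fix (interface names assumed from the ifconfig
# output below; run as root on the FreeNAS box):
ifconfig bridge0 deletemember lagg0   # remove the LAGG from the VM bridge
ifconfig bridge0 addmember igb0       # re-add the dedicated DHCP interface
```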
The problem was that once everything was running again, one Windows machine decided it needed another reboot... and the bridge was recreated as before. The setup I would like to have is this one:
Code:
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=4019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWTSO>
	ether 00:26:55:e1:f2:91
	inet 10.1.0.11 netmask 0xffffff00 broadcast 10.1.0.255
	nd6 options=9<PERFORMNUD,IFDISABLED>
	media: Ethernet autoselect
	status: active
	groups: lagg
	laggproto loadbalance lagghash l2,l3,l4
	laggport: em0 flags=4<ACTIVE>
	laggport: em4 flags=4<ACTIVE>
	laggport: em5 flags=4<ACTIVE>
	laggport: em6 flags=4<ACTIVE>
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	ether 02:50:c0:6b:2d:00
	nd6 options=1<PERFORMNUD>
	groups: bridge
	id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
	maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
	root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
	member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
	        ifmaxaddr 0 port 13 priority 128 path cost 2000000
	member: igb0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
	        ifmaxaddr 0 port 5 priority 128 path cost 55
	member: tap1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
	        ifmaxaddr 0 port 15 priority 128 path cost 2000000
I've managed to find a blog post online that seems to address this, but I really don't want to reboot the NAS right now while everyone is still complaining about connectivity. I'll reboot at the lunch break and test whether that solution works.
So, to the people with far more experience with these things than me: is there a way to flag either the lagg0 or the igb0 interface as the one to be used for the bridge, and exclude the other? I don't want to run deletemember and addmember every time a VM decides to reboot.