
Please help - 10gb-10gb-1gb setup without 10gb switch

rogerh

FreeNAS Guru
Joined
Apr 18, 2014
Messages
1,069
Thanks
118
#21
An observation while you await a sensible reply. This seems to be a complicated interaction involving NetBIOS names, possibly DNS, Samba on FreeNAS, and CIFS/SMB on Windows: I'd just be grateful that it works at all. I see no harm in mapping network drives using \\freenas. If it fails one day you can go back to IP addresses. Seriously, if you want something to be reliable, or to work in a script, I would use the Windows UNC syntax (\\server\share) with the IP address, rather than a mapped drive. Mapped drives often seem to fail to connect for no obvious reason.
 

rogerh

FreeNAS Guru
Joined
Apr 18, 2014
Messages
1,069
Thanks
118
#23
Personally, I'd stick to the IP version if you want to be sure of getting the 10Gb connection. Or put that IP in the Windows hosts file.
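For anyone going the hosts-file route, the entry would look something like the fragment below. The address is hypothetical; use whatever address the 10Gb interface actually has:

```
# C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
10.0.0.10    freenas
```

After saving, `ping freenas` from a command prompt should answer from the 10Gb address rather than whatever NetBIOS/DNS would otherwise resolve.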
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,656
Thanks
2,946
#24
FreeBSD includes everything that you need to set up bridging and I don't see any of it missing in FreeNAS. You can try this:

From the command prompt, type

# ifconfig bridge0 create
# ifconfig bridge0 addm em0 addm em1

This essentially turns FreeBSD into a two port switch for em0 and em1. Configure em0 normally via the GUI and bring up em1 manually with

# ifconfig em1 up

The bridge could be set up to start on boot with some scripting. In a simple FreeNAS install without jails etc it seems likely that this should just work out of the box, and has the added advantage of not having multiple subnets and other complications. Obviously substitute your interface names for em0 and em1.

We use bridges extensively around here and they are pretty darn robust, and can support things like spanning tree as well. We have no reason to do this on a NAS but the experiment is incredibly simple. If some variation of the above works for you, we can probably work out a way to make it "stick."
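Pulled together, the sequence might look like the sketch below. Interface names are examples only, and it includes one extra step that turns out to matter later in this thread: the bridge interface itself must also be brought up before it forwards traffic.

```shell
# Create the bridge and add both members (hypothetical names em0/em1).
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm em1
# The member configured via the GUI is already up; bring up the other one.
ifconfig em1 up
# Bring up the bridge itself so it starts forwarding.
ifconfig bridge0 up
```

This is FreeBSD syntax run as root; treat it as a sketch to adapt, not a drop-in script.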
 

sfcredfox

FreeNAS Experienced
Joined
Aug 26, 2014
Messages
316
Thanks
43
#25
FreeBSD includes everything that you need to set up bridging and I don't see any of it missing in FreeNAS. You can try this:

From the command prompt, type

# ifconfig bridge0 create
# ifconfig bridge0 addm em0 addm em1

This essentially turns FreeBSD into a two port switch for em0 and em1. Configure em0 normally via the GUI and bring up em1 manually with

# ifconfig em1 up

The bridge could be set up to start on boot with some scripting. In a simple FreeNAS install without jails etc it seems likely that this should just work out of the box, and has the added advantage of not having multiple subnets and other complications. Obviously substitute your interface names for em0 and em1.

We use bridges extensively around here and they are pretty darn robust, and can support things like spanning tree as well. We have no reason to do this on a NAS but the experiment is incredibly simple. If some variation of the above works for you, we can probably work out a way to make it "stick."
I wouldn't do this with my NAS, but thanks for sharing, that's a very cool capability.
 
Joined
Mar 25, 2012
Messages
19,154
Thanks
1,850
#26
I'm 1/2 tempted to try this someday just to see how well it would work. :p
 

sfcredfox

FreeNAS Experienced
Joined
Aug 26, 2014
Messages
316
Thanks
43
#27
I'm 1/2 tempted to try this someday just to see how well it would work. :p
I'd be interested in the results of that, maybe try it on a test or virtual.

I don't think I would ever want to, for a couple of reasons: #1, it goes beyond what the product/OS was intended for, and #2, wouldn't doing so throw a lot more workload on the system?

On a small network like the OP's, probably moot, but on a large, chatty network, wouldn't that be a ton of extra traffic the system now has to process and work into the mix? I guess if your system is overkill on CPU, it's less of a big deal? I just wonder if the extra workload would amount to nothing and your system wouldn't care, or if you're now trading off available resources against your desired storage traffic.

We use bridges extensively around here and they are pretty darn robust, and can support things like spanning tree as well.
Since this is still related to the OP, can you share any insight, since you're doing it on other hosts? Is there a significant/noticeable impact? Is there any reason to worry? Or does it not matter?
 
Joined
Mar 25, 2012
Messages
19,154
Thanks
1,850
#28
It would definitely increase the workload. But in some cases it might work well. I'd definitely make sure there is no bridge already. In my case I already have a bridge0, so I'd have to create a bridge1.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,656
Thanks
2,946
#29
It would definitely increase the workload. But in some cases it might work well. I'd definitely make sure there is no bridge already. In my case I already have a bridge0, so I'd have to create a bridge1.
If the existing bridge0 already covers one of the desired interfaces, it'd probably be best to add the new interface to the existing bridge ("ifconfig bridge0 addm em1"). The complication would seem to be understanding what was going on with the existing bridge in terms of how its plumbing works, because if the system ever tries to destroy the bridge (maybe if a jail is stopped?) that'd disconnect the bridged host.

I'd be interested in the results of that, maybe try it on a test or virtual.

I don't think I would ever want to, for a couple of reasons: #1, it goes beyond what the product/OS was intended for, and #2, wouldn't doing so throw a lot more workload on the system?

On a small network like the OP's, probably moot, but on a large, chatty network, wouldn't that be a ton of extra traffic the system now has to process and work into the mix? I guess if your system is overkill on CPU, it's less of a big deal? I just wonder if the extra workload would amount to nothing and your system wouldn't care, or if you're now trading off available resources against your desired storage traffic.

Since this is still related to the OP, can you share any insight, since you're doing it on other hosts? Is there a significant/noticeable impact? Is there any reason to worry? Or does it not matter?
It is, of course, pilfering some amount of capacity from the system. In terms of packets per second, a gigabit Ethernet connection is capable of nearly 1.5 million packets per second, and you do not want to be bridging that sort of traffic load. However, even heavy production networks aren't likely to actually need to do that. The goal is usually to be able to forward a reasonable amount of mixed traffic, and usually a client that is maxing out a GigE is shoveling mostly large packets, which turns out to be very manageable.
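The "nearly 1.5 million packets per second" figure can be checked from first principles: a minimum-size gigabit Ethernet frame occupies 84 bytes of wire time (64-byte frame plus 8-byte preamble plus 12-byte inter-frame gap), i.e. 672 bit-times.

```shell
# 1 Gbps divided by 672 bit-times per minimum-size frame
echo $((1000000000 / (84 * 8)))   # -> 1488095 frames per second
```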

Since a FreeNAS system should have lots of CPU ("overprovisioned") anyways, it ought to be fine. FreeBSD, when properly tuned, can hit million-packet-per-second rates, and while FreeNAS may not be optimized towards that end, it'd probably be harmless to check it out.

The real problem we face today is that we went from 10Mbps in the early '90s to 100Mbps in 1995 to 1Gbps in 1999 ... each about a three- or four-year interval, with inexpensive commodity hardware following within half a decade. By way of comparison, 10gigE was introduced in 2002 - again the same three- or four-year interval - but failed to see much adoption before the late noughties. Even then, though, if we look at the 10gigE cycle as starting in 2009, we're not really seeing cheap commodity hardware five years later.

That leaves people looking for less-expensive ways to meet needs. The simple facts are that a quality 10gigE switching environment is still very pricey and even the cheapest option (is that still the Netgear?) is nearly a thousand dollars. A FreeBSD box isn't going to be able to touch the throughput of a good switching architecture anytime soon, because 10gig architectures can usually be measured in terabits and gigapackets per second, but as a quick fix to create a faux-10gigE network at modest cost, it's certainly attractive to try.

The thing I'd watch out for is if you had multiple 10gigE's bridged and you started to try doing station-to-station high volume traffic especially at a high PPS. That would be the thing that'd worry me most. Otherwise, I'd expect it to be very likely to "just work."
 

bestboy

FreeNAS Experienced
Joined
Jun 8, 2014
Messages
193
Thanks
31
#30
[...] a gigabit ethernet connection is capable of nearly 1.5 million packets per second, and you do not want to be bridging that sort of traffic load.
Oh, I think we just found a real application for jumbo frames again. They might really make a difference "soft-switching" 10GbE workloads. :)
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,656
Thanks
2,946
#31
Oh, I think we just found a real application for jumbo frames again. They might really make a difference "soft-switching" 10GbE workloads. :)
Yes, they might, but the reality of jumbo has been poor. I've waffled back and forth a bit on it over the years, but basically I've been trying to make jumbo work for 15 years and it is generally a disappointment. It's a failure at layer 3, where gear might "support" jumbo but practical issues like fragmentation and crappy support make it a royal pain. At layer 2, it is more practical, so if you have a single private storage switch, it might work very well, but as you get more complex, less so. The real problem with jumbo, though, tends to be in the ethernet adapter support available, because in many cases the way jumbo is treated by drivers is different than standard sized frames, leading to a lot of strange issues.
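For reference, enabling jumbo frames on FreeBSD is a one-liner per interface; the sketch below uses a hypothetical interface name and peer address, and every device on the same layer-2 segment must agree on the MTU or large frames get silently dropped.

```shell
# Hypothetical example: raise em0's MTU to a common jumbo size.
ifconfig em0 mtu 9000
# Verify end-to-end with fragmentation disallowed (-D on FreeBSD ping):
# 9000 MTU minus 20 (IP header) minus 8 (ICMP header) = 8972 payload bytes.
ping -D -s 8972 10.0.0.2
```

If the ping fails at 8972 bytes but works at the default size, something in the path doesn't actually pass jumbo frames.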

2014 is the year I've been actively turning off jumbo.
 
Joined
Dec 1, 2014
Messages
59
Thanks
3
#32
FreeBSD includes everything that you need to set up bridging and I don't see any of it missing in FreeNAS. You can try this:

From the command prompt, type

# ifconfig bridge0 create
# ifconfig bridge0 addm em0 addm em1

This essentially turns FreeBSD into a two port switch for em0 and em1. Configure em0 normally via the GUI and bring up em1 manually with

# ifconfig em1 up

The bridge could be set up to start on boot with some scripting. In a simple FreeNAS install without jails etc it seems likely that this should just work out of the box, and has the added advantage of not having multiple subnets and other complications. Obviously substitute your interface names for em0 and em1.

We use bridges extensively around here and they are pretty darn robust, and can support things like spanning tree as well. We have no reason to do this on a NAS but the experiment is incredibly simple. If some variation of the above works for you, we can probably work out a way to make it "stick."
Thanks jgreco. That is exactly what I was looking for.
But I just tried to set it up and can't get it to work...

em0 - 1Gb adapter connected to the switch and the rest of the network.
em0 uses DHCP.
ix0 - 10Gb adapter directly connected to my workstation.

I did:
# ifconfig bridge0 create
# ifconfig bridge0 addm em0 addm ix0
# ifconfig ix0 up

The workstation (Windows 7) started searching for the network but can't connect.
I've tried using DHCP and setting an IP manually, but nothing worked...
[Attachment: Capture.JPG]
 
Joined
Dec 1, 2014
Messages
59
Thanks
3
#34
try an "ifconfig bridge0 up" if it isn't already up...
"ifconfig bridge0 up" helped! Thank you.
Everything works.
Is it possible to start this up on boot?
 

sfcredfox

FreeNAS Experienced
Joined
Aug 26, 2014
Messages
316
Thanks
43
#35
While you're waiting for jgreco to get you the answer to getting it permanent, let me ask this:

You said you had this working before in a much simpler fashion, where your storage system wasn't acting as a network switching device for your network. This way may also work, but your system is not really a standard FreeNAS setup now. I'm not sure if the system will keep that through patches and upgrades?

So, what made you want to change to this? I.e., it was working before, why change it?
Not criticizing, just curious.
 
Joined
Dec 1, 2014
Messages
59
Thanks
3
#36
While you're waiting for jgreco to get you the answer to getting it permanent, let me ask this:

You said you had this working before in a much simpler fashion, where your storage system wasn't acting as a network switching device for your network. This way may also work, but your system is not really a standard FreeNAS setup now. I'm not sure if the system will keep that through patches and upgrades?

So, what made you want to change to this? I.e., it was working before, why change it?
Not criticizing, just curious.
I think bridging is such a basic thing in any OS that setting it up on FreeNAS doesn't really make it "non-standard"; it is FreeBSD, after all. And having just one Cat 7 cable between the workstation and the "IT closet" makes everything neat :)
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,656
Thanks
2,946
#37
You said you had this working before in a much simpler fashion, where your storage system wasn't acting as a network switching device for your network. This way may also work, but your system is not really a standard FreeNAS setup now.
Because it's a simpler network architecture and less problematic/idiotic...?

There's a lot of reluctance around here to do anything even slightly off the beaten path, but the reality is that there's a large number of things you can do that are just fine.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,656
Thanks
2,946
#38
"ifconfig bridge0 up" helped! Thank you.
Everything works.
Is it possible to start this up on boot?
Use the startup script hook under System->Init/Shutdown Scripts. If "command" will accept commands separated by semicolons, then use "ifconfig bridge0 create; ifconfig bridge0 addm em0 addm em1 up" as a post-init command. Otherwise, use "ifconfig bridge0 create" as a pre-init command and then "ifconfig bridge0 addm em0 addm em1 up" as the post-init command. That's basically guaranteed to survive upgrades etc as long as they keep that mechanism exposed. I'm too lazy to go figure out what exactly works so I leave that as your homework.
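As a sketch, the two-step variant described there would look like this in the Init/Shutdown Scripts fields (interface names are examples; substitute your own):

```
# Type: Command, When: Pre Init
ifconfig bridge0 create

# Type: Command, When: Post Init
ifconfig bridge0 addm em0 addm em1 up
```

Whether the single semicolon-separated command works in one field is, as noted above, left to the reader to test.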
 

sfcredfox

FreeNAS Experienced
Joined
Aug 26, 2014
Messages
316
Thanks
43
#39
Because it's a simpler network architecture and less problematic/idiotic...?
I didn't imply that, though you may have inferred it. The reason I ask these questions is to understand why people get off the 'beaten path', whether I should be interested in it myself, and whether I can learn it, since there might be less support for it. Make sense?
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,656
Thanks
2,946
#40
I didn't imply that, though you may have inferred it. The reason I ask these questions is to understand why people get off the 'beaten path', whether I should be interested in it myself, and whether I can learn it, since there might be less support for it. Make sense?
And having multiple networks with all the caveats and gotchas that involves is more on "the beaten path?"

I don't think so.
 