It would certainly add some workload, but in some cases it might work well. I'd first make sure there isn't a bridge already; in my case I already have a bridge0, so I'd have to create a bridge1.
If the existing bridge0 already covers one of the desired interfaces, it'd probably be best to add the new interface to the existing bridge ("ifconfig bridge0 addm em1"). The complication is understanding how the existing bridge is plumbed, because if the system ever destroys the bridge (maybe when a jail is stopped?) that would disconnect the bridged host.
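For anyone wanting to try it by hand, the ad-hoc version (which doesn't survive a reboot) is just a few ifconfig commands; em0/em1 here are placeholders for whatever interfaces you're actually joining:

  ifconfig bridge1 create
  ifconfig bridge1 addm em0 addm em1 up
  ifconfig bridge1

The last command just prints the bridge status so you can confirm the member list.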
I'd be interested in the results of that; maybe try it on a test box or a VM first.
I don't think I would ever want to, for two reasons: #1, it goes beyond what the product/OS was intended for, and #2, wouldn't doing so throw a lot more workload on the system?
On a small network like the OP's, probably moot, but on a large network that's chatty, wouldn't that be a ton of extra traffic the system now has to process and work into the mix? I guess if your system is overkill on CPU it's less of a big deal? I just wonder whether the extra workload amounts to nothing and your system doesn't care, or whether you're now trading away resources that your storage traffic wanted.
Since this is still related to the OP: can you share any insight, since you're doing it on other hosts? Is there a significant/noticeable impact? Is there any reason to worry, or does it not matter?
It is, of course, pilfering some amount of capacity from the system, and in terms of packets per second, a gigabit ethernet connection is capable of nearly 1.5 million packets per second with minimum-sized frames, and you do not want to be bridging that sort of traffic load. However, even heavy production networks aren't likely to actually need that. The goal is usually to be able to forward a reasonable amount of mixed traffic, and a client that is maxing out a gigE is usually shoveling mostly large packets, which turns out to be very manageable.
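For anyone wondering where the 1.5 million figure comes from, it's just the line-rate math for minimum-sized frames:

  64-byte frame + 20 bytes of preamble and inter-frame gap = 84 bytes = 672 bits on the wire
  1,000,000,000 bits/sec ÷ 672 bits/frame ≈ 1.488 million frames/sec

With 1500-byte frames the same link tops out around 81,000 frames/sec, which is why large-packet traffic is so much easier on the bridge.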
Since a FreeNAS system should have lots of CPU ("overprovisioned") anyway, it ought to be fine. FreeBSD, when properly tuned, can hit million-packet-per-second rates, and while FreeNAS may not be optimized towards that end, it'd probably be harmless to check it out.
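If you want to sanity-check it before depending on it, something like iperf3 between two hosts on opposite sides of the bridge gives a quick read on both throughput and how much CPU the bridging eats (the address below is just a placeholder for whichever host runs the server side):

  iperf3 -s                        (on one host)
  iperf3 -c 10.0.0.2 -P 4 -t 30    (on the other host)
  top -SH                          (on the FreeNAS box, to watch kernel/interrupt threads during the test)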
The real problem we face today is that we went from 10Mbps in the early '90s to 100Mbps in 1995 to 1Gbps in 1999 ... each about a three- or four-year interval, with inexpensive commodity hardware following within half a decade. By way of comparison, 10gigE was introduced in 2002 - again the same three- or four-year interval - but failed to see much adoption before the late noughties. Even then, though, if we look at the 10gigE cycle as starting in 2009, we're not really seeing cheap commodity hardware five years later.
That leaves people looking for less expensive ways to meet their needs. The simple fact is that a quality 10gigE switching environment is still very pricey, and even the cheapest option (is that still the Netgear?) is nearly a thousand dollars. A FreeBSD box isn't going to touch the throughput of a good switching architecture anytime soon, because 10gig switch fabrics are usually measured in terabits and gigapackets per second, but as a quick fix to create a faux-10gigE network at modest cost, it's certainly attractive to try.
The thing I'd watch out for is having multiple 10gigE's bridged and then trying to push high-volume station-to-station traffic, especially at a high PPS. That's what would worry me most. Otherwise, I'd expect it to be very likely to "just work."
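For completeness, on stock FreeBSD the persistent version of the faux-10gigE bridge is a couple of rc.conf lines; ix0/ix1 are just example interface names, and on FreeNAS you'd want to achieve the equivalent through the GUI or tunables rather than editing rc.conf directly, since manual edits there don't stick:

  cloned_interfaces="bridge0"
  ifconfig_ix0="up"
  ifconfig_ix1="up"
  ifconfig_bridge0="addm ix0 addm ix1 up"

Then "service netif restart" (or a reboot) brings it up.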