I think you have to reboot for the algorithm to be enabled. You can check the algorithm used with "sysctl net.inet.tcp.cc.algorithm"
Code:
[root@freenas] ~# sysctl net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.algorithm: newreno
sysctl: net.inet.tcp.cc.algorithm=cubic: No such process
[root@freenas] ~# sysctl net.inet.tcp.cc.available
net.inet.tcp.cc.available: newreno
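That "No such process" error is sysctl's way of saying the algorithm you asked for isn't in net.inet.tcp.cc.available, i.e. the cc_cubic module isn't loaded yet. A minimal sketch of what it should look like once the module is in (standard FreeBSD module name; exact output may differ):

Code:
# kldload cc_cubic
# sysctl net.inet.tcp.cc.available
net.inet.tcp.cc.available: newreno, cubic
# sysctl net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.algorithm: newreno -> cubic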
Yeah, my bad, I probably meant 131072 for the recvspace/sendspace values but sometimes I'm coffee-deprived.
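For anyone following along, those map to the standard FreeBSD socket-buffer sysctls; a minimal sketch with the value discussed here:

Code:
# sysctl net.inet.tcp.recvspace=131072
# sysctl net.inet.tcp.sendspace=131072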
I have no clue what you are doing wrong, but I just did this.
They work for me. Unfortunately this system is not local, so I have no way to test wifi performance. You must reboot to apply the changes though.
So yeah, it *definitely* can and does load the cubic kernel object, and the settings *do* get set properly.
Code:
# sysctl -a | egrep net.inet.tcp.cc.
net.inet.tcp.cc.htcp.rtt_scaling: 1
net.inet.tcp.cc.htcp.adaptive_backoff: 1
net.inet.tcp.cc.available: newreno, htcp
net.inet.tcp.cc.algorithm: htcp
Alrighty. Let me know how it goes.
Code:
[root@freenas] ~# sysctl net.inet.tcp.cc.algorithm
net.inet.tcp.cc.algorithm: cubic
[root@freenas] ~#
Huh? Does that mean we have support for cc_cubic even though net.inet.tcp.cc.available does not mention it?
Code:
# sysctl -a | egrep net.inet.tcp.cc.
net.inet.tcp.cc.htcp.rtt_scaling: 1
net.inet.tcp.cc.htcp.adaptive_backoff: 1
net.inet.tcp.cc.available: newreno, htcp
net.inet.tcp.cc.algorithm: htcp
Code:
# ls /boot/kernel/ | grep cc_
cc_cdg.ko*
cc_chd.ko*
cc_cubic.ko*
cc_hd.ko*
cc_htcp.ko*
cc_vegas.ko*

Code:
# kldload /boot/kernel/cc_chd.ko

to load chd, or swap in "cubic" or "cdg" or "hd" or "htcp" or "vegas" as desired.

Oh, that's cool. I didn't know that net.inet.tcp.cc.available only shows the modules that are loaded at the time. I assumed it listed all the algorithms that can be loaded as modules.

But the actual list of available algorithm modules is different:

Code:
# ls /boot/kernel/ | grep cc_
cc_cdg.ko*
cc_chd.ko*
cc_cubic.ko*
cc_hd.ko*
cc_htcp.ko*
cc_vegas.ko*
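To make the selection survive a reboot on stock FreeBSD, the usual pattern is a loader.conf line to load the module plus a sysctl.conf line to select it; on FreeNAS you'd add the equivalents as Tunables in the GUI rather than editing these files directly. A minimal sketch:

Code:
# /boot/loader.conf: load the module at boot
cc_chd_load="YES"

# /etc/sysctl.conf: make it the active algorithm
net.inet.tcp.cc.algorithm=chd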
My NAS shows:
Code:
[root@nas] ~# sysctl net.inet.tcp.cc.algorithm
net.inet.tcp.cc.algorithm: newreno
No problems with wired speed. Wifi... could be better.
So... if I use a wifi-connected network, should I try using different CC methods to optimize speeds?
What will this do to the wired connections?
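One way to answer both questions empirically: switching the algorithm is a single sysctl, so you can benchmark each candidate from a wifi client and a wired client and compare. A rough sketch, assuming iperf3 is installed on the NAS and the clients (the address is hypothetical):

Code:
# on the NAS: load and select a candidate algorithm, then run a server
kldload cc_htcp
sysctl net.inet.tcp.cc.algorithm=htcp
iperf3 -s
# on each client (wifi, then wired):
iperf3 -c 192.168.0.10 -t 30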
Since the set of algorithms that could be loaded as a module is effectively infinite, that makes no sense. If I write a new congestion control algorithm called "grinchy" that drops your connection if you lose a packet, how is the kernel supposed to know that this is available until it is loaded into the kernel? If the list of possible algorithms were precompiled into the kernel, that would defeat the point of loadable kernel modules. The only thing the kernel can be aware of is loaded-but-inactive algorithms. So if you haven't loaded it, it isn't available. If you have, it is.
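You can watch exactly that happen; a quick sketch using one of the modules from the /boot/kernel listing:

Code:
# sysctl net.inet.tcp.cc.available
net.inet.tcp.cc.available: newreno
# kldload cc_vegas
# sysctl net.inet.tcp.cc.available
net.inet.tcp.cc.available: newreno, vegas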
hehe.. that was not my intention. But now that you mentioned it, I'm curious to know why it's a bad idea to look for the actual modules...

I recognize that there's a good likelihood that you'll suggest that it could somehow be derived from /boot/kernel, but I won't go into why this isn't a good idea and won't be happening. It's out of scope for the FreeNAS forums, since FreeNAS merely uses FreeBSD as a platform...
I'm sorry, that's a misunderstanding then. I didn't mean to say the listing is different with regard to what you and Cyberjock said.

If we flatten that into a sorted list, we get
cdg
chd
cubic
hd
htcp
vegas
Which appears to me to be identical to your list of modules. How is that "different"?
Hi...
But wireless is "easier", and of course easier is the enemy of excellence. If you want excellence, you get a 24-port gigabit switch with some 10G uplinks and build yourself a lightly oversubscribed network that can meet full wirespeed demands on a bunch of ports simultaneously. We have somewhere around 200 ports of switched gigabit here, and 96 of those are on a pair of 48-port switches that *each* have 4 x 10Gbit uplink. The remainder of those ports are generally deployed for things where we only need basic connectivity, and even there it is common to find 2 x 1Gbit uplinks. In such an environment, if networking isn't able to deliver extremely-near-1Gbps connectivity from any port to any other port, that's a technical fail.
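To put numbers on "lightly oversubscribed": each of those 48-port switches can see up to 48 x 1Gbps = 48Gbps of edge demand against 4 x 10Gbit = 40Gbps of uplink, a 1.2:1 oversubscription ratio, so even with every port blasting through the uplinks at once you're only about 17% short of full wirespeed.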
The wireless, that's never going to be expected to deliver speeds competitive with wired. If you actually expect this, in anything other than the most trivial wireless environment, that's just unrealistic. Even on our switched network and lightly loaded wifi here, my ping from a wifi laptop to our filers is 1-2ms, while pinging from a wired 1G host is around 0.13ms, and a 10G host is around 0.09ms. There's latency with wifi, and so it naturally works like a slower speed link in many ways. And even if you manage to get pretty good speeds with a single client, adding that second client to the mix just takes away speed and introduces contention, because the spectrum is shared.
Hi
Is there a "problem" with daisy-chaining a few low-end Gbit switches, for example one central switch and two at different workplaces/rooms (both connected to the central one)? Or is it better to pull extra cables? I know that most switches use "store and forward", but the theoretical delay for that is very small. The 1Gbps uplinks are of course shared and "could" become a bottleneck.
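(Back of the envelope: a store-and-forward switch has to receive the whole frame before it can start forwarding it, so a full 1500-byte frame at 1Gbps adds about 1500 x 8 / 10^9 = 12 microseconds per hop. Next to the 1-2ms wifi pings quoted above, that's noise.)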
I know there are important differences between managing a few central switches and managing scattered desktop switches ;-(.
BTW, I'm surprised that it's possible to get 40MB/s over 802.11ac wifi. What's the distance between the AP and the laptop, and are there any obstacles, a brick wall for example?
Alain
This REALLY depends on the use model. I have a lot of places we've run wire to, and now, years later, there is more "stuff" that needs ethernet than there are jacks. In this case, what I find is REALLY cool is to deploy a Netgear GS108T v2, an 8-port switch that can be powered via PoE. Now, the thing is, if you're running seven devices on that switch with the 8th port as uplink, that might work dandy fine if it's just some modest-bandwidth stuff, but trying to connect multiple PCs to it and then hoping they can effectively share the 1G link might not be the best idea. Or it might be fine. It really depends.
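(Worst case, that's seven gigabit devices funneled into a single gigabit uplink, a 7:1 oversubscription ratio; whether it hurts depends on how bursty versus sustained the traffic actually is.)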
Cut-through switching kinda sucks unless you absolutely positively must have the lowest latency and you're willing to cope with the aggravations. My switches are capable of it and have it configured off. Take that for whatever it is worth. Store and forward is fine.
Sure, it's possible to get 40MBytes/sec over 802.11ac. I just don't think it's super practical to expect high performance unless you're the only one using the AP or something like that.