An impressive attempt to summarise Wi-Fi which is a very deep topic. However I think the executive summary already missed the most critical thing about Wi-Fi:
only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
It's a shared medium and it's not even half duplex, unlike the dedicated full duplex you would typically get with an ethernet cable to a switch port.
The fact that Wi-Fi achieves what it does with this limitation, and how it co-ordinates the dance of multiple unknown clients using the same medium - and in the presence of other RF technologies to boot - is indeed an incredible technology story, but this Achilles heel is the single most defining thing about Wi-Fi performance.
rayiner 5 hours ago [-]
> only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
That’s not correct. You and your neighbor can use the same channel at the same time. On your network, the transmissions of the other network will appear as noise. As long as the other devices are far enough away, however, your devices will still be able to make out their own signal.
niobe 4 hours ago [-]
This is a common misconception: you and your neighbour can configure the same channel, but you cannot successfully transmit at the same time on the same channel within range. Nor can you and your own AP successfully transmit at the same time on the same channel.
When you and your neighbour _appear_ to be transmitting at the same time, each adapter is actually spending most of its time waiting for a clear medium and for various backoff timers to expire before attempting to transmit.
"Appear as noise" is not defined for Wi-Fi adapters. There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted. They just wait for a retransmit. Senders ordinarily wait a certain time to receive an acknowledgement, and if they don't, they start the transmit wait cycle again. But they often then reduce the data rate to increase the odds of a successful transmission.
I'm glossing over some complexity here, because there's a sender and receiver to consider, and each has a different view of the RF environment, but the point always holds when all transmitters and receivers (let's say two APs, each with one client) are within audible range of each other. And this is most of the time. Note that "audible range" (where the signal is strong enough that the adapter deems the medium busy) is much larger than the "usable range" (where data can be transmitted at reasonable speeds). So transmitters create interference in a much larger area than they actually operate in.
That means your neighbour transmitting at 6Mbps to his AP will indeed degrade the performance of your client who wants to transmit at 600Mbps because your client has to wait ~100 times longer for a clear medium.
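To put rough numbers on that claim, here is a back-of-envelope sketch only: it ignores preambles, ACKs, and contention overhead, all of which make slow transmissions even more expensive in practice.

```python
# Airtime of a single frame's payload at a given PHY rate.
# Sketch only: ignores preamble, ACK, and contention overhead.
def frame_airtime_us(payload_bytes: int, rate_mbps: float) -> float:
    """Microseconds the payload alone occupies the channel."""
    return payload_bytes * 8 / rate_mbps  # bits / (Mbit/s) = microseconds

slow = frame_airtime_us(1500, 6)    # neighbour's client on a poor link
fast = frame_airtime_us(1500, 600)  # your client on a good link
print(slow, fast, slow / fast)      # 2000.0 20.0 100.0
```

The same 1500-byte frame occupies the shared channel for 2 ms at 6 Mbps versus 20 microseconds at 600 Mbps, which is where the ~100x figure comes from.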
rayiner 3 hours ago [-]
> There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted.
That's not correct. WiFi is "listen before talk." Radios listen to the channel, trying to decode preambles from other networks, before transmitting. In that process, they can detect other signals well below the threshold where they'll consider the medium in use (the CCA threshold). If you have an otherwise clean channel, the noise floor might be -95 dBm. Radios typically can decode preambles 3-4 dB above the noise floor. Conventionally, the WiFi standards set the CCA threshold at -82 dBm. So the radio can "hear" a lot of signals that won't cause it to trigger collision avoidance. More recent standards allow using a CCA threshold as high as -62 dBm under certain circumstances to facilitate spatial reuse: https://arista.my.site.com/AristaCommunity/s/article/Spatial....
Also, what the Wifi standards do is less aggressive than what radios could do. The CCA thresholds are set to facilitate orderly use of the spectrum--they're not physical limits. To receive a transmission, you just need sufficient signal-to-noise ratio. An adjacent network transmission raises the noise floor, but if your radio is close enough to your AP, you might still have sufficient SNR.
OFDMA on wifi7/802.11be: https://blogs.cisco.com/networking/wi-fi-7-mru-ofdma-turning...
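A toy classification of the regimes described above, using the example figures from this comment (noise floor -95 dBm, preamble decode margin ~4 dB, conventional CCA at -82 dBm). Real CCA logic is more involved, with separate preamble-detect and energy-detect thresholds; this is just a sketch.

```python
NOISE_FLOOR_DBM = -95.0          # otherwise clean channel
CCA_THRESHOLD_DBM = -82.0        # conventional "medium busy" threshold
PREAMBLE_MARGIN_DB = 4.0         # preambles decodable ~3-4 dB above noise

def radio_reaction(rx_power_dbm: float) -> str:
    """How a listening radio treats a signal at the given receive power."""
    if rx_power_dbm >= CCA_THRESHOLD_DBM:
        return "defer"             # medium considered in use: back off
    if rx_power_dbm >= NOISE_FLOOR_DBM + PREAMBLE_MARGIN_DB:
        return "hear but transmit" # detectable, yet below CCA: spatial reuse
    return "noise"                 # indistinguishable from the noise floor

for p in (-75, -88, -94):
    print(p, radio_reaction(p))
```

The middle band (signals the radio can hear but that don't trip CCA) is exactly the "less aggressive than what radios could do" region described above.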
Yes, and before that MU-MIMO is also an improvement to the problem. Still only 1 transmitter at a time, but multiple receivers.
Onavo 7 hours ago [-]
Well the newer WiFi standards on 6Ghz support a lot more channels. Not a perfect work around by any means but it does significantly reduce congestion.
niobe 7 hours ago [-]
Yes, that helps quite a lot in practice because in most places there's limited "frequency-domain" capacity (i.e. free channels) but plenty of "time-domain" capacity (i.e. free air-time). So even if you are sharing a channel with 4 other APs and their users, everybody may subjectively feel the network is fast. When chopping up the time domain into nanoseconds there's just a lot of idle time available, even if clients are pulling down files at 600Mbps.
But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded. It's like a linear hack to an exponential problem. It seems to work at first, but under very high load conditions performance still degrades ever faster until it falls off a cliff. Then there's all sorts of complex dynamic behaviour like the hidden node problem to add to this, but it all boils down to needing air-time and SNR.
rayiner 5 hours ago [-]
> But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded.
You’re overlooking the spatial dimension: https://en.wikipedia.org/wiki/Spatial_multiplexing
Yeah, 6GHz doesn't have DFS channels, which take a lot of usable channels away from 5GHz. Unfortunately it'll be a while until most devices support 6GHz.
KingMachiavelli 8 hours ago [-]
I'd like to understand why the WiFi spec developed so slowly from G to N and finally to AC, but now it seems like a new version is released every other year, yet many of the features/extensions are poorly implemented or have nearly zero real-world improvement.
niobe 7 hours ago [-]
I would agree with that. G to N was perhaps the most critical move in Wi-Fi because it included MIMO. You can think of this as unwanted signal echoes and reflections being switched from a liability to a benefit. Heck, I _still_ run WiFi-4 networks and they perform very well. WiFi-5 was an incremental upgrade, with many experimental features that were barely used in practice.
802.11 is in general a vast swag of cool tricks, and when enough ideas are thrown at a wall, many do end up sticking, but for the most part the benefits are cumulative. MIMO being one major exception.
Jach 4 hours ago [-]
Speaking just on timelines (rather than actual underlying innovations or improvements), 802.11 was in 1997, next in 1999, G in 2003, then a 6 year gap to N in 2009, 4 year gap to AC in 2013, 8 year gap to wifi 6 in 2021, wifi 7 in 2024 (though apparently buyer beware), and wifi 8 expected (according to the article) in 2028. Doesn't seem too rapid? The 8 year gap is the odd one out.
I think part of it is that if there isn't a regular and practiced process for bumping standards, then gaps between revisions can grow quite large and stagnation can set in, and if there are any significant improvements it'll take longer for them to come to fruition than if there were regular revisions that are only modest most of the time. Looking at a few other things that come to mind: USB had an 8 year gap between 2 and 3 as well, PCIe had a 7 year gap between 3 and 4 (albeit while they only had a 3 year gap between the specification for 5 to 6, it still took 3 more years (2025) for the first pcie6 devices, and I still can't buy a consumer-level pcie6 motherboard, it's a separate mess), C++ had an 8 year gap between C++03 and C++11, Java had a 5 year gap between 6 and 7 (and another 3 years after 7 to get to Java 8); all of these things now have more rapid cycles.
9x39 5 hours ago [-]
I'm not a hardware guy, but my guess would be that evolution of radio transceiver tech in the cell space drives improvements downstream in wifi. Better transceivers can pull quality signals from what was noise generations past; it's not magic of course, but the speed transceivers can run over copper cable goes up similarly. 1Gbps was a fast cable a while ago, and now we're doing hundreds of gigabits commonly.
Another thing is that features like beamforming and higher QAM, let's say, are going to matter more in ideal scenarios where APs are in their sweet spot relative to clients, and you get to take advantage of high SNRs. Is that going to help when someone buys a Netgear Wifi 7 AP only to flip it upside down behind the couch in their apartment in an environment where 2.4 and even 5 ghz are basically gone from all their neighbors' use? Still, faster data rates mean clients get on and off the air quicker overall, saving airspace and battery if applicable. So, I think there's mainstream and highly specialized features rolling out simultaneously.
dylan604 7 hours ago [-]
Does any of it have to do with the spectrum becoming available? After 2.4GHz and 5GHz, I have no idea what else the latest/future gens of WiFi are using. As some tech like 2G is no longer in operation, that spectrum was opened up. There are other frequencies that have become available, where operating the older equipment that used to occupy them is now a big no-no. There was a frequency range used by old wireless microphone systems that is now banned at certain locations.
Just taking a swing at it, but I don't play that sport so probably a big whiff
ssl-3 5 hours ago [-]
In regulatory regions where it is usable, Wifi 6 (802.11ax) added some 6GHz channels. Wifi 6e extended that to roughly the entire 6GHz band, for ~1GHz of contiguous RF bandwidth in that area alone.
The "old" cellular bands aren't generally open, at least in the States. We tend to use them for newer licensed stuff in cellular-land instead of the old licensed stuff we used to do. (Old modulation techniques die out and get replaced, but licensed RF bandwidth is still licensed RF bandwidth.)
crims0n 8 hours ago [-]
Surely some of that was need. When G was dominant, from around 2004-2009, the theoretical maximum was 54 Mbps… most people were still on DSL or cable at the time, often capping out way below that.
Avamander 7 hours ago [-]
It's all very proprietary and the tooling is ass, there's a lot of wasted effort creating and testing out the same stuff. Bluetooth is just as horrible for the same reasons.
anyfoo 7 hours ago [-]
> Wi-Fi signal strength decreases at an exponential rate as you move further away from a router.
This is surprising to me. I'd have guessed it decreases quadratically (i.e. due to the inverse square law), not exponentially.
The paragraph below seems to contain an explanation, but I don't really understand it (namely because I don't know what that percentage "Coverage" column actually means, or what we mean with "the total distance at each QAM step").
niobe 6 hours ago [-]
So that table is using distance as a proxy for signal to noise ratio. SNR is what really matters.
Each data rate in the standard uses a different encoding technique. "Faster" encoding techniques cram more data into a given transmission interval but require a higher signal to noise ratio to be received without error. Since SNR declines with distance you can have a rough idea at what distance from a transmitter you will be able to receive at what data rate.
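A rough illustration of that distance-to-rate mapping. This is hedged in two ways: it models free-space loss only, so it is wildly optimistic indoors, and the per-modulation SNR thresholds below are generic textbook figures, not values from the 802.11 standard.

```python
import math

# Approximate SNR (dB) needed per modulation; illustrative figures only.
MCS_SNR_DB = [(5, "BPSK"), (9, "QPSK"), (16, "16-QAM"),
              (22, "64-QAM"), (30, "256-QAM"), (35, "1024-QAM")]

def fspl_db(distance_m: float, freq_ghz: float) -> float:
    """Free-space path loss; real indoor loss is worse (walls, multipath)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_ghz) + 32.44

def best_modulation(distance_m, tx_dbm=20, freq_ghz=5.0, noise_dbm=-95):
    """Densest modulation whose SNR requirement is met at this distance."""
    snr = tx_dbm - fspl_db(distance_m, freq_ghz) - noise_dbm
    usable = [name for need, name in MCS_SNR_DB if snr >= need]
    return usable[-1] if usable else "out of range"

for d in (1, 10, 30, 100):
    print(f"{d:>3} m: {best_modulation(d)}")
```

Even this optimistic model steps down from 1024-QAM to 64-QAM by 100 m at 5 GHz; add a couple of walls' worth of attenuation and the step-down happens within a single home.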
However, people and vendors focus far too much on maximum throughput. I've seen data showing that even in the best conditions, clients spend about 1% of their time transmitting or receiving at the highest data rates, because they are dynamically adjusting the data rate based on the perceived SNR.
Individual clients' peak throughput also works against _aggregate_ throughput when talking about wireless networks with multiple users. If you have 100 clients, do you want one to be able to dominate the others or everyone get a more or less equal share? These peak speeds assume configurations that I would never deploy in practice, because they favour individual users and cripple aggregate throughput - things like 160 MHz wide channels.
But the sticker speed is what sells..
cortesoft 5 hours ago [-]
There are a lot of people who are the only ones using their Wi-Fi, so they probably don't care about the performance for anyone else
niobe 4 hours ago [-]
But this is the point. What your neighbours are doing greatly affects the performance of your network.
If you have a good connection and are successfully able to transmit packets to your AP at 600Mbps, and your neighbour has a poor connection and is transmitting at 6Mbps to his AP at that moment, you literally have to wait ~100 times as long for a free medium before you can attempt to transmit. And that's for every single frame. Then you have to hope his client is well-behaved enough not to transmit while you are transmitting. Otherwise you end up having to wait again and retransmit anyway.
You might not notice this with only 2 clients. It might be the difference between an 80MBps and a 50MBps download, for example. But it decays exponentially with the number of clients.
I know what "exponentially" means, I know what "quadratically" means (and how it's not exponentially), and I know the inverse square law. Hence my question why the article claims "signal strength" decreases exponentially, when the raw power received by an antenna definitely decreases quadratically, not exponentially. That's just physics. But there might be some convoluted thing about stepping down symbol rate which affects throughput (which I guess could be colloquially called "signal strength" if I squint really hard) that I don't understand here.
wonnage 7 hours ago [-]
yeah, it's pretty common to refer to x^2 as exponential colloquially since there's (a) an exponent and (b) a single term for all values (vs. quadratic, cubic, quartic...)
But you're technically correct!
anyfoo 7 hours ago [-]
I'm actually not sure that they don't actually mean exponentially. There's something about not only increasing the distance, but potentially also the modulation (and thus the symbol rate) stepping down, which maybe in total causes the decline to be ~exponential? But it's not clear to me at all. That's why I ask, I have a hard time parsing it.
But then again, the sentence uses the term "signal strength", not "throughput", so that would suggest quadratically. But I guess "signal strength" could be meant colloquially and mean more than just the raw signal power received by the antenna, here.
It's all very fuzzy to me, as it stands.
amluto 7 hours ago [-]
Do you also think that f(x) = x^1 is exponential? How about f(x) = x^0?
anyfoo 7 hours ago [-]
Kind of irrelevant, because you could also ask "Do you also think that f(x) = x^1 is polynomial? How about f(x) = x^0?" The distinction was clearly between polynomial (specifically quadratic) and exponential, leaving those trivial cases out.
oofabz 3 hours ago [-]
I recently got a Grandstream GWN7615 access point to add coverage on the other side of the house from the main router. It does not meet the minimum spec listed in this article but for more modest requirements I think it's an excellent value. You can get one for well under $100. It is WiFi 5, 3x3 MIMO, and you don't need any cloud account to manage it.
Normal_gaussian 7 hours ago [-]
Today I set up a NWA210BE (Zyxel) to replace a unifi 6+ AP; I bought it second hand and my key metrics were: 4x4 MIMO, available used/discounted, current gen, fully functional standalone mode.
The 4x4 makes all the difference. Sitting in my car the 6+ would fight with my 4G for internet and cause maps to be super slow; now I'm off the property before it's unusable.
I had intended to put APs in multiple rooms, but there doesn't seem like much point now.
bjoli 40 minutes ago [-]
I was about to buy a pair of those, but then I saw the new MikroTik wifi 7 router (and probably upcoming access point) with Thread radio.
Now every other brand is dead to me.
bityard 5 hours ago [-]
Interesting...
I have a Netgear WAX218, one of the last cheap business-class APs I could find that don't require a cloud service to manage. WAY better than the pro-sumer wifi routers I was running before in access point mode. I'll have to look into Zyxel offerings a bit more when I'm ready to replace my Netgear.
monk_grilla 7 hours ago [-]
Anyone know of a similarly excellent resource for understanding wired networking? CAT specifications, how to pick high quality switches/routers etc.?
lake_trade 3 hours ago [-]
Beej's guide will help you with understanding networking overall, I don't think it would help you choosing switches/routers specifically.
Havoc 8 hours ago [-]
Nice detailed article!
Finding it increasingly difficult to avoid bottlenecks though. Even with wifi 7 I still get 1.3 Gbps on my Mac and 0.5 Gbps on my iPhone. More than enough realistically, but upstream internet is 1.7 Gbps, so it's a tiny bit unfortunate.
Think I'm just going to wire the place with 10 gig fiber
>The speed advantages that Access Points have over mesh systems will become much more obvious with Wi-Fi 7.
From what I've read mesh devices generally can detect when they've got wired backhaul so they can stay in mesh mode for the clean handovers while not relying on it for actually moving data
anyfoo 8 hours ago [-]
Due to boring circumstances outside of my control, I have to use WiFi for the most part, so I've got quite some experience with making it run optimally (or rather, as optimally as I managed to, not as optimally as I would like it to).
And yeah, you pretty much already have to have a visible line of sight to get anything even close to 1 Gbps. And still be on channels with little interference. (DFS helps if you're not near radar, which intentionally causes you to get kicked off those channels and lose connection entirely.) And even then you might have to mess about a lot with positioning, because of reflections and generally multipath propagation.
I'd say it's not worth the headache. I would love to lay down Ethernet cable, even if it was just cabling only suitable for 1 Gbps (though there's no good reason to stop there; might as well do 10 Gbps).
But yeah, any mesh system worth its salt figures out the topology and absolutely favors wired links over WiFi for the back haul. Anything else wouldn't make any sense at all, there is basically no situation where you'd prefer an RF channel over a wire, unless the wire is maybe made of wet string.
walrus01 7 hours ago [-]
> And yeah, you pretty much already have to have a visible line of sight to get anything even close to 1 Gbps
If one considers that the higher speeds in 802.11ac and 802.11be require 256QAM modulation or better, this is completely expected (assuming the 5 GHz band of course, which doesn't go through material very well at all). If you've seen a live eyeball chart of a 256QAM or 1024QAM constellation on test equipment for clear-air microwave link purposes, and seen how quickly it can degrade or get fuzzy if there's anything in the way of the link, it becomes more readily apparent. MCS levels 8 and onwards here:
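The geometry behind that fuzziness: at equal average transmit power, the spacing between constellation points roughly halves for every extra two bits per symbol, so the same noise is far more likely to push a received symbol across a decision boundary. A small sketch (idealised square QAM at unit average power; real systems add coding gain on top):

```python
import math

def min_spacing_unit_power(m_per_axis: int) -> float:
    """Minimum distance between points of a square QAM constellation,
    normalised to unit average symbol power."""
    # Amplitude levels per axis, e.g. [-3, -1, 1, 3] for 16-QAM.
    levels = [2 * k - (m_per_axis - 1) for k in range(m_per_axis)]
    avg_power = sum(a * a + b * b for a in levels for b in levels) / m_per_axis ** 2
    return 2 / math.sqrt(avg_power)  # neighbours are 2 apart before scaling

for bits in (4, 6, 8, 10):  # 16-, 64-, 256-, 1024-QAM
    m = 2 ** (bits // 2)
    print(f"{2 ** bits:>4}-QAM: min spacing {min_spacing_unit_power(m):.3f}")
```

The spacing drops from about 0.63 (16-QAM) to about 0.08 (1024-QAM), so each two-bit step needs roughly 6 dB more SNR to hold the same error rate.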
Have you thought about using powerline devices? I’ve successfully used them in place where running my own cable wasn’t a possibility, and WiFi wasn’t cutting it.
My house is built out of reinforced concrete, so wireless signals reach almost nowhere. I got Ethernet put into the living room and bedroom and put in 2.5 Gbps USB ethernet dongles on powered hubs, so when I plug into my phone/laptop to charge they get wired ethernet automatically.
walrus01 8 hours ago [-]
how many spatial streams are you using (2x2, 3x3, etc) and are you using an 80 or 160 MHz channel?
If you have a set of full capability 802.11be clients you'll see the best performance with a 3x3 AP and 160 MHz channels.
Neywiny 9 hours ago [-]
Good to see the subjective adjectives in the RF world are here too. Except they're not the same ordering, as EH is before UH for WiFi but after in RF
WillPostForFood 7 hours ago [-]
I was on top of G, started to lose track after N.
Dylan16807 5 hours ago [-]
I hate how they did this big rebrand to simplify things and then immediately ruined it with 6e and 7.
Okay, we have wifi 6, now we're adding 6GHz. How do you know if you have 6GHz? You check if it says 6...e. And is wifi 7 an upgrade to that? Lol who knows, depends on the individual device specs. Check if it says tri-band, that will tell you it supports 6GHz... OR that it can support two simultaneous networks on one of the other frequencies.
ibatindev 7 hours ago [-]
Once again, IEEE 802.11ah (Wi-Fi HaLow) is completely forgotten. It would be perfect for all the lights/sensors.
walrus01 7 hours ago [-]
Latest-gen zigbee stuff and zwave 800 seems to have already thoroughly occupied that niche for a great deal of home and office automation equipment.
Avamander 7 hours ago [-]
There aren't any usable chipsets with usable drivers for 802.11ah unfortunately.
blindriver 6 hours ago [-]
One thing that wasn't mentioned is that the more APs you have, the worse off your life gets. That's because the choice of which AP a client connects to is made client-side, and you have no control over or visibility into it. So, no matter how you fiddle with it, your client may connect to the AP that is 40 feet away and on another floor rather than the one that is 10 feet away with a perfect line of sight. And you won't know why. This is the problem I had with my house; I had to decrease the number of APs to get better reliability and performance.
jauntywundrkind 6 hours ago [-]
There's band steering. You absolutely do have control, if you opt to do so.
Multiple APs are really nice because you can turn down the AP power, ideally, as you add more stations. Unfortunately I don't think you can tell a client to be quieter though; someone's laptop can be at 200mW tearing the hell out of the spectrum when everyone else is nicely conversing at 10-20mW.
toast0 3 hours ago [-]
My experience with DAWN wasn't great. Some of my clients don't like the extensions you need, so I had to go back to no roaming extensions, just hoping clients make good decisions and tuning AP power levels to help.
Might try it again though; I'd love for it to work. And I was also dealing with some baseline wifi instability that I think firmware updates have resolved.
mc32 5 hours ago [-]
From what I hear, Macs are stickier and Windows clients more promiscuous. So a Mac will stick with an AP further out when you have one near; on the other hand a Windows client can go back and forth between APs, which can sometimes be a problem too.
Rendered at 07:29:24 GMT+0000 (Coordinated Universal Time) with Vercel.
only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
It's a shared medium and it's not even half duplex, unlike the dedicated full duplex you would typically get with an ethernet cable to a switch port.
The fact that Wi-Fi achieves what it does with this limitation, and how it co-ordinates the dance of multiple unknown clients using the same medium - and in the presence of other RF technologies to boot - is indeed an incredible technology story, but this achilles heel is the single most defining thing about Wi-Fi performance.
That’s not correct. You and your neighbor can use the same channel at the same time. On your network, the transmissions of the other network appear will appear as noise. As long as the other devices are far enough away, however, your devices will still be able to make out their own signal.
When you and your neighbour _appear_ to be transmitting at the same time, each adapter is actually spending most of it's time waiting for a clear medium and for various backoff timers to expire before attempting to transmit.
"Appear as noise" is not defined for Wi-Fi adapters. There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted. They just wait for a retransmit. Senders ordinarily wait a certain time to receive an acknowledgement, and if they don't, the start the transmit wait cycle again. But they often then reduce the data rate to increase the odds of a successful transmission.
I'm glossing over some complexity here, because there's a sender and receiver to consider, and each has a different view of the RF environment, but the point is always correct when all transmitters and receivers (lets say the 2 APs and each has 1 client) are in audible range of each other. And this is most of the time. Note that "audible range" (where the signal is such that the medium is deemed as busy by the adapter) is much larger than the "usable range" (where data can be transmitted at reasonable speeds). So transmitters create interference in a much larger area than they actually operate in.
That means your neighbour transmitting at 6Mbps to his AP will indeed degrade the performance of your client who wants to transmit at 600Mbps because your client has to wait ~100 times longer for a clear medium.
That's not correct. WiFi is "listen before talk." Radios listen to the channel, trying to decode preambles from other networks, before transmitting. In that process, they can detect other signals well below the threshold where they'll consider the medium in use (the CCA threshold). If you have an otherwise clean channel, the noise floor might be -95 dBm. Radios typically can decode the preambles 3-4 dB above the noise floor. Conventionally, the WiFi standards set the CCA threshold at -82 dBm. So the radio can "hear" a lot of signals that won't cause it to trigger collision avoidance. More recent standards allow using a CCA threshold as high as -62 dBM under certain circumstances to facilitate spatial reuse: https://arista.my.site.com/AristaCommunity/s/article/Spatial....
Also, what the Wifi standards do is less aggressive than what radios could do. The CCA thresholds are set to facilitate orderly use of the spectrum--they're not physical limits. To receive a transmission, you just need sufficient signal-to-noise ratio. An adjacent network transmission raises the noise floor, but if your radio is close enough to your AP, you might still have sufficient SNR.
OFDMA on wifi7/802.11be: https://blogs.cisco.com/networking/wi-fi-7-mru-ofdma-turning...
But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded. It's like a linear hack to an exponential problem. It seems to work at first, but under very high load conditions performance still degrades ever faster until it falls off a cliff. Then there's all sorts of complex dynamic behaviour like the hidden node problem to add to this, but it all boils down to needing air-time and SNR.
You’re overlooking the spatial dimension: https://en.wikipedia.org/wiki/Spatial_multiplexing
802.11 is in general a vast swag of cool tricks, and when enough ideas are thrown at a wall, many do end up sticking, but for the most part the benefits are cumulative. MIMO being one major exception.
I think part of it is that if there isn't a regular and practiced process for bumping standards, then gaps between revisions can grow quite large and stagnation can set in, and if there are any significant improvements it'll take longer for them to come to fruition than if there were regular revisions that are only modest most of the time. Looking at a few other things that come to mind: USB had an 8 year gap between 2 and 3 as well, PCIe had a 7 year gap between 3 and 4 (albeit while they only had a 3 year gap between the specification for 5 to 6, it still took 3 more years (2025) for the first pcie6 devices, and I still can't buy a consumer-level pcie6 motherboard, it's a separate mess), C++ had an 8 year gap between C++03 and C++11, Java had a 5 year gap between 6 and 7 (and another 3 years after 7 to get to Java 8); all of these things now have more rapid cycles.
Another thing is that features like beamforming and higher QAM, let's say, are going to matter more in ideal scenarios where APs are in their sweet spot relative to clients, and you get to take advantage of high SNRs. Is that going to help when someone buys a Netgear Wifi 7 AP only to flip it upside down behind the couch in their apartment in an environment where 2.4 and even 5 ghz are basically gone from all their neighbors' use? Still, faster data rates mean clients get on and off the air quicker overall, saving airspace and battery if applicable. So, I think there's mainstream and highly specialized features rolling out simultaneously.
Just taking a swing at it, but I don't play that sport so probably a big whiff
The "old" cellular bands aren't generally open, at least in the States. We tend to use them for newer licensed stuff in cellular-land instead of the old licensed stuff we used to do. (Old modulation techniques die out and get replaced, but licensed RF bandwidth is still licensed RF bandwidth.)
This is surprising to me. I'd have guessed it decreases quadratically (i.e. due to the inverse square law), not exponentially.
The paragraph below seems to contain an explanation, but I don't really understand it (namely because I don't know what that percentage "Coverage" column actually means, or what we mean with "the total distance at each QAM step").
Each data rate in the standard uses a different encoding technique. "Faster" encoding techniques cram more data into a given transmission interval but require a higher signal to noise ratio to be received without error. Since SNR declines with distance you can have a rough idea at what distance from a transmitter you will be able to receive at what data rate.
However, people and vendors focus far too much on maximum throughput. I've seen data showing that even in the best conditions, clients spend about 1% of their time transmitting or receiving at the highest data rates. Because they are dynamically adjusting the data rate based on the perceived SNR.
Individual clients' peak throughput also works against _aggregate_ throughput when talking about wireless networks with multiple users. If you have 100 clients, do you want one to be able to dominate the others or everyone get a more or less equal share? These peak speeds assume configurations that I would never deploy in practice, because they favour individual users and cripple aggregate throughput - things like 160 MHz wide channels.
But the sticker speed is what sells..
If you have a good connection and are successfully able to transmit packets to your AP at 600Mbps, and your neighbour has a poor connection and is transmitting at 6Mbps to his AP at that moment, you literally have to wait ~100 times as long for a free medium before you can attempt to transmit. And that's for every single frame. Then you have to hope his client is well-behaved enough not to transmit while you are transmitting. Otherwise you end up having to wait again and retransmit anyway.
You might not notice this with only 2 clients. It might be the difference between a 80MBps and a 50MBps download for example. But it decays exponentially with the number of clients.
Because the variable is the base, not exponent.
But you're technically correct!
But then again, the sentence uses the term "signal strength", not "throughput", so that would suggest quadratically. But I guess "signal strength" could be meant colloquially and mean more than just the raw signal power received by the antenna, here.
It's all very fuzzy to me, as it stands.
The 4x4 makes all the difference. Sitting in my car, the 6+ would fight with my 4G for internet and cause maps to be super slow; now I'm off the property before it's unusable.
I had intended to put APs in multiple rooms, but there doesn't seem like much point now.
Now every other brand is dead to me.
I have a Netgear WAX218, one of the last cheap business-class APs I could find that don't require a cloud service to manage. WAY better than the pro-sumer wifi routers I was running before in access point mode. I'll have to look into Zyxel offerings a bit more when I'm ready to replace my Netgear.
Finding it increasingly difficult to avoid bottlenecks though. Even with wifi 7 I still get 1.3 Gbps on my Mac and 0.5 Gbps on my iPhone. More than enough realistically, but upstream internet is 1.7 Gbps, so it's a tiny bit unfortunate.
Think I'm just going to wire the place with 10 gig fiber
>The speed advantages that Access Points have over mesh systems will become much more obvious with Wi-Fi 7.
From what I've read mesh devices generally can detect when they've got wired backhaul so they can stay in mesh mode for the clean handovers while not relying on it for actually moving data
And yeah, you pretty much already have to have a visible line of sight to get anything even close to 1 Gbps. And still be on channels with little interference. (DFS helps if you're not near radar, which by design kicks you off those channels and drops the connection entirely.) And even then you might have to mess about a lot with positioning, because of reflections and multipath propagation generally.
I'd say it's not worth the headache. I would love to lay down Ethernet cable, even if it were cabling only suitable for 1 Gbps (though there's no good reason to; might as well do 10 Gbps).
But yeah, any mesh system worth its salt figures out the topology and absolutely favors wired links over WiFi for the back haul. Anything else wouldn't make any sense at all, there is basically no situation where you'd prefer an RF channel over a wire, unless the wire is maybe made of wet string.
If one considers that the higher speeds in 802.11ac and 802.11be require 256QAM modulation or better, this is completely expected (assuming the 5 GHz band of course, which doesn't go through material very well at all). If you've seen a live eyeball chart of a 256QAM or 1024QAM constellation on test equipment for clear-air microwave link purposes, and seen how quickly it can degrade or get fuzzy if there's anything in the way of the link, it becomes more readily apparent. MCS levels 8 and onwards here:
https://en.wikipedia.org/wiki/Wi-Fi_7
"Clean" eyeball example of 256QAM: https://www.everythingrf.com/community/what-is-256-qam-modul...
examples of "fuzzy qam" in 16QAM, same principle applies to denser QAM
https://www.researchgate.net/figure/Typical-eye-diagram-Symb...
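The "fuzzy QAM" effect can be shown with a toy simulation: at equal transmit power, a denser constellation packs its points closer together, so the same amount of noise pushes far more received symbols into a neighbouring decision region. This is pure math with nearest-neighbour decisions and a made-up noise level, not an RF model of any 802.11 mode.

```python
# Toy demo: symbol error rate of square m-QAM under the same Gaussian
# noise, with constellations normalised to equal average power.
import math
import random

def qam_points(m):
    """Square m-QAM constellation (m = 16, 64, 256), unit average energy."""
    side = int(math.isqrt(m))
    levels = [2 * i - (side - 1) for i in range(side)]
    pts = [(x, y) for x in levels for y in levels]
    scale = math.sqrt(sum(x * x + y * y for x, y in pts) / m)
    return [(x / scale, y / scale) for x, y in pts]

def symbol_error_rate(m, noise_sigma, trials=4000):
    pts = qam_points(m)
    errors = 0
    for _ in range(trials):
        tx = random.choice(pts)
        rx = (tx[0] + random.gauss(0, noise_sigma),
              tx[1] + random.gauss(0, noise_sigma))
        # nearest-neighbour decision: pick the closest constellation point
        decided = min(pts, key=lambda p: (p[0] - rx[0]) ** 2 + (p[1] - rx[1]) ** 2)
        errors += decided != tx
    return errors / trials

random.seed(1)
for m in (16, 64, 256):
    print(f"{m}-QAM, same power and noise: SER = {symbol_error_rate(m, 0.08):.3f}")
```

With the chosen noise level, 16-QAM decodes almost perfectly while 256-QAM loses a large fraction of its symbols — which is why the top MCS levels vanish the moment anything gets between you and the AP.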
https://www.hp.com/us-en/shop/tech-takes/what-is-a-powerline...
If you have a set of full capability 802.11be clients you'll see the best performance with a 3x3 AP and 160 MHz channels.
Okay, we have wifi 6, now we're adding 6GHz. How do you know if you have 6GHz? You check if it says 6...e. And is wifi 7 an upgrade to that? Lol who knows, depends on the individual device specs. Check if it says tri-band, that will tell you it supports 6GHz... OR that it can support two simultaneous networks on one of the other frequencies.
On openwrt, DAWN or usteer can both help your APs get sounding maps from clients and tell them which AP to join. Looking at the sounding maps is very fun data to see: highly recommend! The settings aren't the world's greatest but they are pretty good starts! https://github.com/berlin-open-wireless-lab/DAWN https://openwrt.org/docs/guide-user/network/wifi/dawn
Multiple APs are really nice because you can turn down the AP power, ideally, as you add more stations. Unfortunately I don't think you can tell a client to be quieter though; someone's laptop can be at 200mW tearing the hell out of the spectrum when everyone else is nicely conversing at 10-20mW.
Might try it again though, I'd love for it to work. And I was also dealing with some baseline wifi instability that I think firmware updates have resolved.