Why can't Wi-Fi run at 2.4 Gbit/s?


28

So Wi-Fi runs on the 2.4 GHz band (or the newer 5 GHz one), right? That means the Wi-Fi antenna puts out 2.4 billion square-wave pulses per second, right?

So I'm wondering: why can't it transmit data on every pulse and send data at 2.4 Gbit/s? Even if 50% of that were coding overhead, it would still be 1.2 Gbit/s.

Or do I have the wrong idea about how Wi-Fi works...?


8
First, the 2.4 GHz carrier is a sine wave. The data is modulated onto it, probably using QPSK or QAM, at a much lower rate. This is a very complex and broad field.
Matt Young

OK, a sine wave. But Wi-Fi speed is still usually 300 Mb/s? That's only 12.5% of 2.4 GHz. My point is that the device is already running at 2.4 GHz to produce the sine wave, so can't it just modulate at that speed?
MC ΔT

3
300 Mb/s is only obtainable on the 5 GHz band. A 2.4 GHz Wi-Fi connection supports a theoretical maximum of 54 Mbps per the current standards.
Thebluefish

You might be interested in answers to this similar question: electronics.stackexchange.com/questions/86151/…
The Photon

13
A reasonably sharp and clean 2.4 GHz square wave would require a bandwidth of at least 24 GHz.
Kaz
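
To see where that ~24 GHz figure comes from, here is a minimal Python sketch (my own illustration, using the standard 1/n odd-harmonic series of an ideal square wave):

```python
# An ideal square wave contains only odd harmonics of the fundamental,
# with relative amplitudes falling off as 1/n - so sharp 2.4 GHz edges
# need spectral content far above 2.4 GHz.
f0 = 2.4e9  # fundamental frequency, Hz
for n in range(1, 12, 2):  # odd harmonics 1, 3, ..., 11
    print(f"harmonic {n:2d}: {n * f0 / 1e9:5.1f} GHz, relative amplitude {1 / n:.2f}")
# The 9th harmonic alone sits at 21.6 GHz -- hence a bandwidth of roughly 24 GHz.
```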

Answers:


49

You are confusing band with bandwidth.

  • Band - The frequency of the carrier.
  • Bandwidth - The width of the signal, usually centered on the carrier.

So while a typical 802.11b signal may operate at a 2.4 GHz carrier (the band), it will only occupy 22 MHz of the spectrum (the bandwidth).

It's the bandwidth that determines the link throughput, not the band. The band is best thought of as a traffic lane. Several people might be transferring data at the same time, but in different lanes.

Some lanes are bigger and can carry more data; some are smaller. Voice communication usually takes about 12 kHz or less. Newer Wi-Fi standards allow bandwidths up to 160 MHz wide.

Keep in mind that while bandwidth and bits sent are intrinsically linked, there is a conversion factor there too, related to spectral efficiency. The most efficient protocols can transmit over ten bits per second per hertz of bandwidth. Wi-Fi a/g has an efficiency of 2.7 bits per second per hertz, so it can transmit up to 54 Mbps over its 20 MHz bandwidth. Newer Wi-Fi standards go past 5 bits per second per hertz.

This means that if you want 2 Gbits per second, you don't actually need 2 GHz of bandwidth; you just need a high spectral efficiency, and today that's often achieved by using MIMO technology on top of a very efficient modulation. For instance, you can now buy an 802.11ac Wi-Fi router that supplies up to 3.2 Gbps of total throughput (Netgear Nighthawk X6 AC3200).
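
As a rough, back-of-the-envelope illustration of that arithmetic (a Python sketch of my own; the per-stream efficiency figures are approximations, not values taken from the standards' rate tables):

```python
# Toy model: throughput = bandwidth x spectral efficiency x MIMO streams.
def throughput_mbps(bandwidth_mhz, bits_per_sec_per_hz, streams=1):
    return bandwidth_mhz * bits_per_sec_per_hz * streams

print(throughput_mbps(20, 2.7))       # ~54 Mbps  (802.11a/g)
print(throughput_mbps(40, 3.75, 4))   # ~600 Mbps (802.11n, four streams)
print(throughput_mbps(80, 5.4))       # ~433 Mbps (a single 802.11ac stream)
```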


I have also always confused these topics. I understood what you mention here, but when people go on to say that download speeds are slow because their bandwidth is limited, what connection does that have to what you have posted here? What relationship can be drawn for an ISP that claims to provide 54 Mbps to its customers?
sherrellbc

5
Amplitude and phase shifts inherently use more bandwidth: shifting the phase slightly alters the frequency during the shift, for instance, as the signal is stretched or shrunk. Likewise for any sort of modulation. The only signal you can transmit on a single frequency is a pure continuous sine wave. You cannot even turn the sine wave on and off for free, as the transitions require bandwidth as well.
John Meacham

1
@sherrellbc The topic you're getting into is extremely complicated and might be better as a follow-on question, but the short answer is that you can't change amplitude or phase without effectively changing "frequency" as well. The faster you change your amplitude or phase, the more bandwidth is occupied by the change.
AndrejaKo

5
Bandwidth has changed its meaning over the years, and today is loosely defined as "the amount of information that can be conveyed." Your ISP and a radio engineer are using that word for different, largely unrelated, things. Advanced forms of modulation use a combination of amplitude, phase, and frequency modulation, though more often they use only amplitude and phase modulation, for instance QAM. So yes, frequency modulation is less frequently used for data transmission. 802.11b defines each channel as 22 MHz; that's why. Other Wi-Fi standards use different bandwidths.
Adam Davis

1
Phase and frequency modulation are never used at the same time, since phase is the integral of frequency. Generally, when high density is required, QAM is the solution. However, SNR is a major issue: when more bits are transmitted at the same time, it is easier for the receiver to make a mistake. This is why Wi-Fi will switch between different modulation formats depending on the link quality (it only uses QAM when the link is very good). Also, 'bandwidth' can be applied to baseband digital data as well - 54 Mbps serial data requires about 27 MHz of bandwidth (DC to 27 MHz).
alex.forencich

19

The bandwidth of the Wi-Fi signal is nothing like 2.4 GHz; it's 20 or 40 MHz.

What you are suggesting (baseband 2.4 GHz) would use up the entire EM spectrum up to 2.4 GHz for a single channel of communication.

As you can see from this, it's already pretty well used for various other things:

[Image: chart of existing frequency allocations across the radio spectrum]

Essentially, the 2.4GHz carrier is wobbled a little bit to send data and that allows many channels to be simultaneously transmitted while still leaving plenty of spectrum for other applications such as key fob remotes, AM/FM radio, transponders on ships and aircraft, and so on.


8
You didn't mention that there is another variable that can affect data rate: the signal-to-noise ratio, which can be improved by increasing transmit power. This relationship is given by the Shannon-Hartley theorem on channel capacity and means that your data rate (in b/s) can be greater than your bandwidth (in Hz). However, the FCC also governs the amount of power a transmitter can use within the EM spectrum, effectively limiting this factor as well.
kjgregory

1
@KGregory But the FCC does not regulate the noise floor, so in theory...
Phil Frost

1
yes, in theory...
kjgregory

12

In order for the 2.4 GHz Wi-Fi signal to avoid trampling on the 900/1800 MHz mobile phone signals, 100 MHz FM signals, and a whole vast range of other signals, there is a hard limit on how much the signal is allowed to differ from a 2.4 GHz sinewave. That's a layman's way of understanding "bandwidth".

The point of having one transmitter at 2412 MHz and another at 2484 MHz, for example, is that a receiver can filter out all signals but the one it's interested in. You do this by suppressing all the frequencies outside of the band you are interested in.

Now, if you take any signal, and filter out everything above 2422 MHz and everything below 2402 MHz, you are left with something that cannot deviate all that much from a 2412 MHz sinewave. That's just how frequency filtering works.
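
Here is a scaled-down numerical sketch of that effect (Python with NumPy/SciPy; the 10 MHz stand-in carrier, 1 MHz width, and sample rate are assumed numbers, since simulating 2.4 GHz directly is impractical):

```python
import numpy as np
from scipy import signal

fs = 100e6          # sample rate (assumed)
fc, bw = 10e6, 1e6  # stand-in carrier and channel width (assumed)

# Band-pass filter that keeps only fc +/- bw/2.
sos = signal.butter(4, [fc - bw / 2, fc + bw / 2],
                    btype="bandpass", fs=fs, output="sos")

rng = np.random.default_rng(0)
broadband = rng.standard_normal(100_000)  # input: pure broadband noise
narrow = signal.sosfilt(sos, broadband)   # keep only the narrow band

# The amplitude of the filtered signal can only change on a slow ~1/bw
# timescale: the output is forced to resemble a wobbling 10 MHz sinewave.
envelope = np.abs(signal.hilbert(narrow))
```

Even though the input was pure noise, whatever survives the filter cannot deviate much from a 10 MHz sinewave.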

I've expanded somewhat on this, adding a few images, in this answer.


9

The carrier frequency used by Wi-Fi is 2.4 GHz, but the channel width is much less than this. Wi-Fi can use 20 MHz or 40 MHz wide channels and various modulation schemes within these channels.

An unmodulated sinewave at 2.4 GHz would consume zero bandwidth, but it would also transmit zero information. Modulating the carrier wave in amplitude and frequency allows data to be transmitted. The faster the carrier wave is modulated, the more bandwidth it will consume. If you AM modulate a 2.4 GHz sinewave with a 10 MHz signal, the result will consume 20 MHz of bandwidth with frequencies ranging from 2.39 GHz to 2.41 GHz (sum and difference of 10 MHz and 2.4 GHz).
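
That sum-and-difference claim is easy to check numerically at scaled-down frequencies (a Python sketch; the 24 kHz "carrier" and 100 Hz tone are assumed stand-ins for 2.4 GHz and 10 MHz):

```python
import numpy as np

fs = 96_000           # one second of samples -> 1 Hz FFT bins
t = np.arange(fs) / fs
fc, fm = 24_000, 100  # stand-in carrier and modulating tone (assumed)

# Classic AM: the carrier's amplitude follows the modulating signal.
am = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(am))
print(np.sort(np.argsort(spectrum)[-3:]))  # -> [23900 24000 24100]
```

The three strongest bins are the carrier plus the two sidebands at the difference and sum frequencies, exactly as described above.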

Now, Wi-Fi does not use AM modulation; 802.11n actually supports a wide range of different modulation formats. The choice of modulation format depends on the quality of the channel - e.g. the signal to noise ratio. The modulation formats include BPSK, QPSK, and QAM. BPSK and QPSK are binary and quadrature phase shift keying; QAM is quadrature amplitude modulation.

BPSK and QPSK work by shifting the phase of the 2.4 GHz carrier wave. The rate at which the transmitter can change the carrier phase is limited by the channel bandwidth. The difference between BPSK and QPSK is the granularity - BPSK has two different phase shifts, QPSK has four. These different phase shifts are called 'symbols', and the channel bandwidth limits how many symbols can be transmitted per second, but not the complexity of the symbols.

If the signal to noise ratio is good (lots of signal, little noise), then QPSK will perform better than BPSK because it moves more bits at the same symbol rate. However, if the SNR is bad, then BPSK is a better choice because it is less likely that the noise included with the signal will cause the receiver to make a mistake. It is harder for the receiver to figure out which phase shift a particular symbol was transmitted with when there are 4 possible phase shifts than when there are only 2.

QAM extends QPSK by adding amplitude modulation. The result is a whole extra degree of freedom - now the transmitted signal can use a range of phase shifts and amplitude changes. However, more degrees of freedom means that less noise can be tolerated. If the SNR is very good, 802.11n can use 16-QAM and 64-QAM. 16-QAM has 16 different amplitude and phase combinations while 64-QAM has 64. Each phase shift/amplitude combination is called a symbol. In BPSK, one bit is transmitted per symbol. In QPSK, 2 bits are transmitted per symbol. 16-QAM allows 4 bits to be transmitted per symbol, while 64-QAM allows 6 bits. The rate at which the symbols can be transmitted is determined by the channel bandwidth; I believe 802.11n can transmit 13 or 14.4 million symbols per second. With a 20 MHz wide bandwidth and 64-QAM, 802.11n can transfer 72 Mbit/sec.
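
The arithmetic behind that last sentence can be checked directly (a sketch; the 5/6 coding rate is my assumption for the top 64-QAM rate, and 14.4 Msym/s corresponds to the short guard interval):

```python
# 802.11n, one spatial stream, 20 MHz channel, 64-QAM.
symbol_rate = 14.4e6   # symbols/sec, aggregated over the subcarriers
bits_per_symbol = 6    # 64-QAM: 2^6 = 64 phase/amplitude combinations
coding_rate = 5 / 6    # assumed error-correction overhead (MCS 7)

print(symbol_rate * bits_per_symbol * coding_rate / 1e6)  # -> ~72 Mbit/s
```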

When you add MIMO on top of that for multiple parallel streams and you increase the channel width to 40 MHz, then the overall rate can increase to 600 Mbit/sec.

If you want to increase the data rate, you can increase either the channel bandwidth or the SNR. The FCC and the specification limit the bandwidth and transmit power. It's possible to use directional antennas to improve the receive signal strength, but it's not possible to lower the noise floor - if you can figure out how to do that, you could make a hell of a lot of money.


5

Firstly, you can't just take a signal and receive it by putting a bunch of square waves in the air. You use a carrier wave (operating at a certain frequency) to modulate the data onto. The idea is that you can then demodulate the data using a receiver generating a wave at the same frequency. The modulation does reduce the data rate below what the raw carrier-wave frequency might suggest, but without a carrier wave of some sort, you cannot recover the data, as you will not be able to distinguish the data from random noise. It should be noted that the bandwidth of this carrier signal is what defines the actual speed: the bandwidth is how much the modulation technique(s) vary the actual frequency from the pure carrier frequency.

Though, even assuming a perfect 1:1 ratio (which is not true, as discussed above), you have to consider the overhead of the low-level wireless protocol, which reduces the useful speed. Secondly, you have the overhead of the higher-level protocols (usually the TCP/IP stack), which have their own overhead, reducing the useful speed further. Then you have possible retransmits of data that was corrupted in transmission (again, usually handled by the higher-level protocols), which reduces your usable data rate even further. These are some of the many reasons why, even given a theoretical data bandwidth, the actual data rate may be less.
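
As a toy illustration of those stacked losses (my own sketch; the overhead and retransmission figures are assumed, not measured):

```python
# Toy model: protocol overhead and retransmissions both eat into
# the raw link rate.
def goodput_mbps(link_rate_mbps, protocol_overhead=0.05, retransmit_rate=0.10):
    return link_rate_mbps * (1 - protocol_overhead) * (1 - retransmit_rate)

print(goodput_mbps(54))  # -> ~46 Mbit/s of useful data on a 54 Mbit/s link
```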


TCP/IP overhead would only be 2-8% under normal circumstances, so that's not really significant in the calculation.
kasperd

2%-8% not significant for the calculation? I guess it's subjective, but that's a pretty big chunk to me. And considering that a lot of retransmission occurs within the protocol (due to less-than-ideal SNR), it can be a larger factor. Though my point was that a lot of things affect what one would consider the ideal transmission rate (even if his assumptions about the carrier frequency were incorrect).
Jarrod Christman

When you're trying to understand why you are only getting one eighth of the bandwidth you'd expect, 2-8% doesn't sound significant. You'd need about 60 different factors of that size to explain a factor of 8. But if you want to understand the full picture, you need to know that this layer exists and contributes a small amount of overhead. Whether it is really appropriate to count re-transmissions as overhead of the TCP layer is another question, since the re-transmissions only happen due to loss at the lower layers.
kasperd

I don't wish to belabor the point. However, I still disagree that 8% is not important. I never attempted to make the point that all of his losses were from protocol overhead; again, I was merely pointing out a few different scenarios, on top of his main misunderstanding, that would contribute to the loss of what would seem to be the actual transmission rate. I would also suggest that counting re-transmission is appropriate, as it is just another reason why the rate may be less than expected. Generally, the limiting factor is the bandwidth of the signal, but it's important to remember there are others.
Jarrod Christman

2

This is a very complicated topic indeed. However, to give you one simple answer, it is because the FCC has rules in place governing the bandwidth and transmitter power that one can use for Wi-Fi communications. This is because there are many other people trying to use the EM spectrum for various types of wireless communications (e.g. cell phones, Wi-Fi, Bluetooth, AM/FM radio, television, etc.). In fact, the carrier frequency (2.4 GHz) has very little to do with the bandwidth of communications (or the data rate that can be achieved, for that matter).


2
While technically correct, I don't think this answers the question very well: "Why can't x carry y data?" "Because rules."
JYelton

2
That's a bit unfair IMO. Like I said, it's a very complicated subject. The answer to why it can't achieve 2.4 Gbps is that it can, given enough bandwidth and power. The answer to why it doesn't achieve 2.4 Gbps is that it would interfere too much with others' communications if it did, and thus rules were put in place to limit its capabilities.
kjgregory

2

As mentioned before, you're confusing band and bandwidth; however, none of the answers gives an intuitive explanation.

An intuitive explanation can be given with a set of speakers. You have a high beep and a low beep, indicating 1 and 0, and you transport the data by alternating between the high and low beeps. The frequency of the tones themselves has little (see below) to do with how fast you can do the alternating between high and low beeps.

Wi-Fi waves are much like sound waves. They are carrier waves: they take your square-wave signal and convert it to high- and low-frequency waves. The only difference is that the high- and low-frequency waves are very close together, centered around 2.4 GHz.

Now, for the part where you want the upper limit. Taking our 'beep' system: you obviously cannot change the tone frequency (band) of your beeps ten times within a single sound wave. So there is a limit above which the rate of frequency changes stops being audible as distinct beeps and becomes just one weird, distorted beep. The rate at which you can change the frequency is called the bandwidth; the lower the bandwidth, the more slowly you must alternate to keep the beeps distinct (hence the lower link speed when reception is bad).
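
A minimal version of that beeper in code (a Python sketch; the two tone frequencies and the bit rate are arbitrary assumed numbers):

```python
import numpy as np

fs, bit_rate = 8000, 50           # sample rate and bits/sec (assumed)
bits = np.array([1, 0, 1, 1, 0])  # the data to send

# Binary FSK: a 0 becomes a 1000 Hz beep, a 1 becomes a 1200 Hz beep.
freqs = np.where(np.repeat(bits, fs // bit_rate) == 1, 1200, 1000)

# Integrate frequency to get phase, so the tone switches without jumps.
phase = 2 * np.pi * np.cumsum(freqs) / fs
tone = np.sin(phase)
```

The 1000/1200 Hz pair plays the role of the band; how far `bit_rate` can be pushed before the beeps smear into one another is set by the bandwidth.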


2

Shannon's capacity theorem says that, for a given received SNR in a bandwidth W with additive Gaussian noise, the channel has capacity

$$C = W \log_2(1 + \mathrm{SNR})$$
in units of bits/sec. Here, capacity means that if the desired information rate over the given W is less than C, then there exists an error-correcting code of sufficient complexity with which one can achieve effectively zero-error-probability information transfer at the given SNR. This has nothing to do with the carrier frequency and is only indirectly related to FCC regulations. The FCC determines how much power can be transmitted over what bandwidth, the designers decide on the complexity and technology of the transmission system, and you, the user, end up with the maximum information rate, since the SNR will depend on the distance desired and on the power and bandwidth the FCC allows.

Over the PSTN, where the system is rather static, there is a modulation format that uses 1024 waveforms in a nominal 4 kHz bandwidth, resulting in a theoretical 40 kbit/sec information rate! If one could achieve that complexity (about 10 bit/s/Hz) over a mobile channel, one could have 10 × 20 = 200 Mbit/sec in a 20 MHz bandwidth at sufficiently high SNR; the emphasis is on the "sufficiently high"! The higher the carrier frequency, the higher the propagation losses, but the easier it is to get the RF circuits to operate over a sufficiently high, a priori given, bandwidth.
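
A quick numerical check of the formula (a sketch; the 20 MHz bandwidth and 30 dB SNR are assumed example values):

```python
import math

# Shannon-Hartley: C = W * log2(1 + SNR), with SNR as a power ratio.
def capacity_mbps(bandwidth_hz, snr_db):
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e6

print(capacity_mbps(20e6, 30))  # 20 MHz at 30 dB SNR -> ~199 Mbit/s
```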

1

Although there are variations in the exact way things are implemented, radio communications generally involves taking a low-frequency signal that contains the information to be transmitted and using a technique called modulation to shift it to a higher range of frequencies. It's perhaps easiest to think in terms of a "black box" which, given two signals containing various combinations of frequencies, will output, for every combination of frequencies present in the originals, the sum and difference frequencies, in proportion to the product of the strengths of the original signals. If one feeds in an audio signal containing frequencies in the range 0-10kHz along with a 720,000Hz sine wave [the carrier used by WGN-720 Chicago], one will receive from the box a signal containing only frequencies in the range 710,000Hz to 730,000Hz. If a receiver feeds that signal into a similar box, along with its own 720,000Hz sinewave, it will receive from that box signals in the range 0-10kHz, along with signals in the range 1,430,000Hz to 1,450,000Hz. The signals in the 0-10kHz range will match the originals; those in the 1,430,000Hz to 1,450,000Hz range may be ignored.

If in addition to WGN, another station is broadcasting (e.g. WBBM-780), then the signals in the range 770,000Hz to 790,000Hz transmitted by the latter will get converted by the receiver into signals in the range 50,000Hz to 70,000Hz (as well as 1,490,000Hz to 1,510,000Hz). Since the radio receiver is designed on the assumption that no audio of interest will involve frequencies over 10,000Hz, it can ignore all of the higher frequencies.
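
The sum-and-difference behavior of such a mixer is easy to verify numerically (a Python sketch; 780 Hz and 720 Hz stand in for the 780 kHz and 720 kHz above, scaled down by a factor of 1000):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs  # one second of samples -> 1 Hz FFT bins

# Multiplying two sines ("mixing") produces their sum and difference.
mixed = np.sin(2 * np.pi * 780 * t) * np.sin(2 * np.pi * 720 * t)

spectrum = np.abs(np.fft.rfft(mixed))
print(np.sort(np.argsort(spectrum)[-2:]))  # -> [  60 1500]
```

Only the 60 Hz difference and the 1,500 Hz sum survive: the scaled-down analogue of WBBM-780 landing at 60 kHz in a receiver mixing with a 720 kHz sinewave.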

Even though WiFi data is converted to frequencies near 2.4GHz prior to transmission, the "real" frequencies of interest are much lower. In order to avoid having WiFi transmissions interfere with other broadcasts, the WiFi transmissions must stay far enough away from the frequencies used by those other transmissions that any unwanted frequency content they may receive would be sufficiently different from what they're looking for that they'll reject it.

Note that the "black box" mixer approach to radio design is a bit of a simplification; while it would be theoretically possible for a radio receiver to use a frequency-combining circuit on an unfiltered signal and then low-pass filter the output, it's generally necessary to use multiple stages of filtering and amplification. Further, for various reasons, it's often easier for radio receivers to mix an incoming signal not with the actual carrier frequency of interest, but rather with an adjustable frequency that's higher or lower by a certain amount (the term "*hetero*dyne" refers to the use of a "different" frequency), filter the resulting signal, and then convert that filtered signal to the desired final frequency. Still, the key point is that the only thing which would distinguish a 1kHz audio signal transmitted by WGN (at frequencies of 719,000Hz and 721,000Hz) from 59kHz and 61kHz tones transmitted by WBBM-780 is the fact that radio stations aren't expected to transmit audio content over 10kHz, and thus receivers will ignore anything much more than 10kHz away.


1

The simple answer is that it can be done. You can modulate any carrier with any signal you want.

Assuming one is allowed to do it, the question is: how useful would it be? To answer this question, we must understand what happens when one modulates a carrier. Let's take a carrier operating at 1 MHz (1,000 kHz) and modulate it with a signal that varies from 0 to 100 kHz. The "mixing" of the signals generates signals in the range of 900 to 1,100 kHz. Similarly, if we use 0 to 1,000 kHz, the range of signals generated becomes 0 to 2,000 kHz. If we now apply these signals to an antenna, we would be transmitting signals across the whole range of 0 to 2,000 kHz. If two or more "nearby" persons did the same, the signals would interfere with each other and the receivers would not be able to detect any information. If we limit the power to the antenna, two or more individuals could "operate" with little interference, provided they are sufficiently separated.

Although, theoretically, one transmitter could operate using the whole EM spectrum, it is impractical, because other people want to use it too; and just as in other situations where a resource is limited and demand exceeds supply, the resource must be "cut up", shared, limited, and controlled.
