My DSL line downloads at 6 megabits per second. I just ran the test. This is over a pair of copper twisted wires, the same Plain Old Telephone Service (POTS) twisted pair that connected your grandmother's phone to the rest of the world. In fact, if you still had that phone, you could connect it and use it today.
I can remember the old 110 bps acoustic coupler modems. Maybe some of you can also. Do you remember upgrading to 300 bps? Wow! Nearly triple the speed. Gradually the speed increased through 1200 to 2400, and then finally, 56.6k. All over the same pair of wires. Now we feel shortchanged if we're not getting multiple megabits from DSL over that same POTS line. How can we get such speeds over a system that still allows your grandmother's phone to be connected and dialed? How did the engineers know these increased speeds were possible?
The answer lies back in 1948 with Dr. Claude Shannon, who wrote a seminal paper, "A Mathematical Theory of Communication". In that paper he laid the groundwork for Information Theory. Shannon is also recognized for applying Boolean algebra, developed by George Boole, to electrical circuits. He recognized that the switching circuits of his day, like today's logic circuits, followed the rules of Boolean algebra. This was his master's thesis, written in 1937.
Shannon’s Theory of Communications explains how much information you can send through a communications channel at a specified error rate. In summary, the theory says:
- There is a maximum channel capacity, C,
- If the rate of transmission, R, is less than C, information can be transferred at a selected small error probability using smart coding techniques,
- These smart coding techniques require intelligent encoding over longer blocks of signal data.
What the theory doesn’t provide is information on the smart coding techniques. The theory says you can do it, but not how.
In this article I’m going to describe this work without getting into the mathematics of the derivations. In another article I’ll discuss some of the smart coding techniques used to approach channel capacity. If you can understand the mathematics, here is the first part of the paper as published in the Bell System Technical Journal in July 1948, with the remainder published later that year. To walk through the system used to fit so much information on a twisted copper pair, keep reading.
Information Theory in a Nutshell
Let’s start with a diagram to understand the basic problem. We have information sent from a source by a transmitter through a channel. The channel is disrupted by a noise source. A receiver accepts the signal plus noise and converts it back into information. Shannon determined the maximum amount of information you can reliably move through the channel. The maximum rate is determined by the bandwidth of the channel and the amount of noise, and only those two values. We can see intuitively that bandwidth and noise would be limiting factors. What’s amazing is that they are the only two factors.
It is obvious that a channel with more bandwidth will pass more data than a narrower one. The bigger a pipe, the more can be pushed through it. This holds true of all communication channels, whether radio frequency (RF), fiber optic, or a POTS twisted pair of copper wires.
The bandwidth of a channel is the difference between the highest and lowest frequencies that will pass through the channel. For example, a POTS voice channel has a low frequency of 400 Hz and a high frequency of 3400 Hz. This provides a bandwidth of 3000 Hz. (Some references say the low frequency is 300 Hz which provides a 3100 Hz bandwidth.) In reality, channel limits are not sharp cutoffs. Shannon used a 3 decibel drop in signal strength to determine the limits.
A twisted pair has a larger bandwidth than 3000 Hz so they are not the reason for the narrow POTS bandwidth. Phone companies impose this limited bandwidth so they can frequency multiplex long distance calls on a single line. The bandwidth limit is acceptable because human speech is intelligible using this band of frequencies.
Shannon’s theory built on the work of Harry Nyquist and Ralph Hartley. Nyquist, analyzing telegraph systems, took the first steps toward determining channel capacity. He determined that the maximum pulse rate of a channel is twice the bandwidth. This is the Nyquist Rate (if you beg to differ, please see my note at the end of the article about ‘Nyquist’ terminology). In our 3000 Hz POTS channel we can transmit 6000 pulses per second, which is totally counterintuitive.
Let’s send a 3000 Hz sine wave signal through the channel. We somehow chop off all the negative lobes of the sine wave. If we designate the remaining lobes as 0s and the missing lobes as 1s, we are sending 6000 bits per second through the channel. Nyquist discussed pulses, but we would now call them symbols in communications work. The number of symbols per second is a baud, named after Émile Baudot, who created one of the first digital codes. It is incorrect to say “baud rate”, since a baud is by definition already a rate.
The formula for the Nyquist Rate is:

C = 2B

where:
C is the channel capacity in symbols per second, or baud
B is the bandwidth of the channel in hertz
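The Nyquist relationship is simple enough to check numerically. Here is a minimal sketch in Python (the function name is mine, not from the article):

```python
def nyquist_rate(bandwidth_hz):
    """Maximum symbol rate (baud) of a channel: twice its bandwidth."""
    return 2 * bandwidth_hz

# POTS voice channel: 3000 Hz of bandwidth.
print(nyquist_rate(3000))  # 6000 baud
```

Plug in the 3000 Hz POTS bandwidth and out comes the counterintuitive 6000 symbols per second.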
Hartley’s contribution extended this to use more than two signal levels, or multilevel encoding. He recognized that the receiver determined the number of levels that could be detected, independent of all other factors. In our example of the POTS channel, you might use the amplitude of the sine wave to create multiple levels. With 4 different levels we can send two bits with every symbol. With multilevel encoding the bit rate for a noiseless channel is given by:

C = 2B log2(M)

where:
C is the channel capacity in bits per second
B is the bandwidth of the channel in hertz
M is the number of levels
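Hartley’s formula is just as easy to try out. This sketch (again, the function name is my own) shows how four levels double the bit rate of our POTS example:

```python
import math

def hartley_capacity(bandwidth_hz, levels):
    """Bit rate of a noiseless channel with M signal levels: C = 2B * log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

print(hartley_capacity(3000, 2))  # 6000.0 bps: two levels, one bit per symbol
print(hartley_capacity(3000, 4))  # 12000.0 bps: four levels, two bits per symbol
```

With two levels the formula collapses back to the Nyquist Rate, since log2(2) = 1.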
It is obvious that noise is going to limit the amount of data that can be passed. This is analogous to the rough inner surface of a pipe causing friction and slowing the passage of material. The more noise, the slower the error-free data rate. Here’s Shannon’s formula:

C = B log2(1 + S/N)

where:
C is the channel capacity in bits per second (bps)
B is the bandwidth of the channel in hertz
S is the average received signal power over the bandwidth
N is the average noise or interference power over the bandwidth
S/N is the signal-to-noise ratio (SNR)
Shannon’s result is in bits because he defined ‘information’ using bits. Consider a deck of 52 playing cards with 4 suits and 13 cards in each suit. It takes 2 bits (00b to 11b) to represent the four suits and 4 bits (0000b to 1100b) to represent the thirteen cards. In total, it takes 6 bits to represent any card in the deck.
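The bit counts above can be checked with a one-liner per field, using the rule that N equally likely possibilities need ceil(log2(N)) bits:

```python
import math

suit_bits = math.ceil(math.log2(4))    # 2 bits for four suits
rank_bits = math.ceil(math.log2(13))   # 4 bits for thirteen ranks
card_bits = math.ceil(math.log2(52))   # encoding the card as a single value

print(suit_bits, rank_bits, suit_bits + rank_bits, card_bits)  # 2 4 6 6
```

Encoding suit and rank separately costs the same 6 bits as numbering the 52 cards directly, which is no accident: 4 × 13 = 52.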
A deck has more data that could be transmitted: color as 1 bit, face card or number card as 1 bit, face card gender as 1 bit, etc. These are redundant since the complete information is already available in 6 bits. Based on Shannon’s work this is not information, any more than telling someone there were new articles on Hackaday. There are always new articles, so that’s not information. Reporting a day without articles? That’s information.
The probability that a bit is received correctly or incorrectly is determined by the signal-to-noise ratio. A higher noise level means fewer error-free bit transfers. This relates directly to Hartley’s recognition that the bit rate is determined by the ability of the receiver to correctly detect the multiple signal levels. Errors start to occur when the noise level exceeds the receiver’s ability to differentiate between symbols. The impact of noise may actually depend on the specific symbol. Low-amplitude symbols can be overwhelmed by the noise while symbols of higher amplitude are okay. Other modulation methods are impacted by noise in other ways.
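This effect is easy to see in a toy simulation. The sketch below sends 4-level amplitude symbols through Gaussian noise and counts decision errors; the level spacing and noise figures are illustrative assumptions, not values from the article:

```python
import random

# Four amplitude levels, two bits per symbol (an assumed toy constellation).
LEVELS = [-3.0, -1.0, 1.0, 3.0]

def nearest_level(sample):
    """Receiver decision: pick the level closest to the noisy sample."""
    return min(LEVELS, key=lambda lvl: abs(lvl - sample))

def symbol_error_rate(noise_sigma, n=10_000, seed=1):
    """Fraction of symbols decided incorrectly at a given noise level."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        sent = rng.choice(LEVELS)
        received = sent + rng.gauss(0, noise_sigma)
        if nearest_level(received) != sent:
            errors += 1
    return errors / n

# More noise means more symbol errors.
print(symbol_error_rate(0.2), symbol_error_rate(1.0))
```

With the noise standard deviation well below the half-spacing between levels, errors are essentially nonexistent; once it approaches that spacing, the error rate climbs quickly.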
The Phone Company Cheats
Earlier I asked how DSL works through the same line that handled your grandmother’s phone. The basic answer is the phone companies cheat. Seriously, the line from your house to the first phone company location is unfiltered, allowing DSL to use the full bandwidth of the twisted pair. At the phone company end they split the voice and DSL signals. The voice is limited to its 3000 Hz band while the DSL keeps its full bandwidth. That little filter you have at your house is a low-pass filter that blocks the DSL signals from reaching your handset.
I mentioned that Shannon’s theory doesn’t answer the question of how to achieve high throughput. In the next article we’ll look at some of these techniques, which include error detection and correction on a noisy channel.
A Comment on Nyquist Terms
There is conflict among references on the meaning of Nyquist Rate versus Nyquist Limit versus the Sampling Theorem. In a way, the confusion highlights how much Nyquist contributed: had he contributed less, there would be nothing to confuse. A Wikipedia article might not be definitive, but the one for Nyquist Rate explains the two conflicting meanings of the term.