Recently, the media was filled with articles about how turning on 5G transmissions in the C-band could make US planes fall out of the sky. While the matter was ultimately resolved without too much fuss, this conflict may have some long-term consequences, with the FCC looking to potentially address and regulate the root of the problem, as reported by Ars Technica.
At the heart of the whole issue is that while transmitters are regulated in terms of their power and the part of the spectrum they broadcast on, receivers are far less regulated. In the case of airplane altimeters, for example, which use the 4.2 GHz – 4.4 GHz spectrum, some receivers may be sensitive to part of the 5G C-band (3.7 GHz – 3.98 GHz), despite the standard 200 MHz guard band (upped to 400 MHz in the US) between said C-band and the spectrum used by altimeters.
The FCC is currently soliciting ways in which it could regulate receiver performance and standards. This would presumably pertain not just to 5G and altimeters, but also to other receivers outside of avionics. Since the FCC opened a similar inquiry back in 2003, only to close it in 2007 without any action taken, it remains to be seen whether this time will be different. One solid reason it might be is the wasted spectrum: a 400 MHz guard band is a very large chunk.
Thanks to [Chris Muncy] for the tip.
This is an FAA problem, not an FCC problem. If your receiver predictably misbehaves in a non-safety-critical application, you should send it back, leave a bad review, and if it’s a serious enough problem, sue for damages. If your receiver predictably misbehaves in a safety-critical application, the safety-critical application as a whole has a review and/or standards problem.
Agreed. Isn’t this what 15.9.A.3 – stating that devices “must accept” interference – means? Obviously it doesn’t directly apply, but it still doesn’t make sense to say “here’s your band, but if anyone else’s band interferes with yours, that’s _their_ fault, not yours”. So how is it that, when FAA altimeters fail from interference from a completely different band, there’s _any_ question as to who’s done did stupid?
Did really nobody consider “hmm, we weren’t given this band – maybe something will use it in the future, so let’s add a filter”, or was it brought up and some moron replied “that’ll never happen”?!
@Matt said: “This is an FAA problem, not an FCC problem.”
Agreed; but these days in Washington, D.C. any opportunity to grow their power over the people must be exploited.
S.O.P. as they say in DC.
S.O.P. > Standard Operating Procedure – I presume…
I would think it is the FAA who should resolve this issue. If I sell airline altimeters that don’t meet real life requirements it is my fault – unless, of course the only federal standards are limited to the colour of the paint on the altimeter front panel or some such flummery.
Haven’t bought a radar altimeter lately? They cost as much as a house and are a PITA to install and calibrate. Multiply that by a hundred or a thousand aircraft.
However, the bright side is that airlines only need it when they are landing in zero-vis conditions and can’t actually see landmarks.
I understand that they are expensive. I understand that it may be a hassle to re-install a whole bunch of them. But when I buy a radio that I tune to, say, 118.1 (a common tower frequency), I’d be pissed if I also received a neighboring frequency (I don’t know the spacing, so I can’t put a number on it). Then the device is defective and the manufacturer is responsible for providing either “a working device” or “money back”. Now, when you find this out after 10 years of operation, the tug of war between manufacturer and airline may be a bit different.
Still…. the device is defective if it can’t handle “use of the neighboring band in the spectrum”.
This is more akin to buying a house in a deeply rural area, and then someone sells the neighboring land to build a factory intended solely to make noise, even though the previous use was a rest home. It would be fine if the new use were confined to the same power/noise levels as the old use. It’s not. So just how much power should one have to reject? I can tell that a megawatt transmitter next to your house is going to get detected on your toaster, much more so on a radio. So it’s likely that the evaluation was against the neighboring power levels at the time it was tested, not some potential increase of 100,000X the power 30-40 years later.
Could this be resolved without replacing the entire altimeter? Could a filter be added inline with the receiving antenna? Or perhaps a firmware update for at least some models?
Yes of course. The technology certainly exists to filter out adjacent interference. I’m betting the FCC is salivating over the lucrative guard band frequencies and crying wolf.
It’s not crying wolf. In any other application, any receiver that was affected by a signal of reasonable strength (not outrageously strong) 200 MHz away from its actual operating frequency would be considered a piece of garbage. The FAA had plenty of time to comment during the usual FCC NOI, NPRM, and R&O process, and time to resolve any operational issues after the R&O. They didn’t.
A commercial airliner costs as much as a hundred houses. What they carry is precious and irreplaceable. The cost of improved interference-rejecting radar altimeters likely isn’t near the cost of a hull-loss accident. Given that large airlines have annual operating expenses in the 10s of billions, is such a safety investment so impossible?
I don’t get how this is a spectrum issue; surely the problem is with the altimeters? I mean, if someone has an altimeter that malfunctions in the presence of green light, should the FCC ban transmissions in the visible spectrum?
I don’t understand why an altimeter requires a radio connection at all.
Perhaps it is an altimeter based on RF reflectometry? E.g. a downward-facing radar measuring the distance between the plane and the earth.
Because it is a RADAR altimeter. They emit RF too, and I believe the regulations today only apply to the transmit section, but the FCC wants to extend the rules to the receiving circuits.
Others have already worded it better than I can:
https://en.wikipedia.org/wiki/Transponder_(aeronautics)
Transponder is another thing
Transponder is the number-radio squawk and ‘radar’ ping return gadget you hackers read with your RTL-SDR ADS-B readers.
Selected message codes; usually they will assign you your transponder code and you set it in your cockpit.
7500 seventy-fife hijacker with a knife (hijack)
7600 seventy-six radio needs a fix (busted radio)
7700 seventy-seven see you in heaven? (general emergency)
It also sends alt, sometimes gps, and your tail or flight number; you can also push a button and it blinks your ID on the ‘radar’ screen at ATC when they request it over the radio in crowded skies.
Because it’s a radar altimeter.
Your options to know the real height above sea level are rather limited, if not impossible, without an antenna.
Barometric altimeters are affected by air pressure, and a laser altimeter would measure to the ground surface, in addition to being able to stop working due to interference.
The biggest question is why Boeing are allowed, by the FAA, to get away with using the cheapest, least tested options when making airplanes.
Came here to say exactly this. Why would the FCC regulate receivers rather than the FAA regulate the specifications for radar altimeters?
Because the requirements for the altimeters to function depend on the FCC’s requirements for other transmitters.
If the FAA regulates it, they have to change their test procedures whenever the FCC changes their transmit requirements. So two people have to change testing procedures. If the FCC handles it, only one test needs to change.
If you’re designing RF based safety equipment, then you probably want to have your filters as tight as possible to reduce accidental jamming.
Having a bandpass filter that’s 200 MHz too large in one direction requires a special type of incompetence, or determined cost saving.
The FCC set aside a block for radar altimeters; that block didn’t include the empty space between blocks.
“If you’re designing RF based safety equipment, then you probably want to have your filters as tight as possible to reduce accidental jamming.”
How tight your filters need to be *is determined by FCC requirements on other devices*. So it’s not nuts that the FCC also includes that test as well.
“Having a bandpass filter that’s 200MHz too large in one direction requires a special type of incompetence, or determined cost saving.”
No, there are good reasons why you might want the receive bandpass wider, to eliminate double dispersion/attenuation near the band edges. I don’t know when that block was set aside – it’s entirely possible that the FCC was loath to allocate near it because people knew about it, but it’s not a transmit allocation, so it wasn’t really codified. But over time, that stuff gets lost/forgotten. It’s not like the FCC is this wonderful, perfect bastion of spectrum allocation.
I’m just saying I don’t know who exactly to blame in this situation. Radar altimeters are super-old, and the FCC requirements are very transmit-oriented. It’s entirely possible people just didn’t think about it.
Just wanted to add to your list that a barometric altimeter is also subject to single-point failures. A second altimeter that works on a different principle is a sound investment even if you do have redundant pitot systems.
These are radio altimeters, which operate much like a radar fixed pointing at the ground. Radar technology is used because it is much better at seeing through weather, and can also be more accurate.
The pulses they emit are narrowband, but some models of receiver accept a wider band as input, because it is cheaper. Thus the receiver can think it’s seeing return pulses that are actually from the 5G tower on that much lower frequency.
It hasn’t caused problems before because there weren’t that many users of those frequencies near airports.
EU planes are not affected because they use (in general) a receiver model that is more tightly specified. Not sure if that’s because of regulation or simple policy.
Seems like I’m gonna pay more for my cellphone and my router, just so some plane company never has to test a radio receiver again
My understanding is quite the opposite. The regulation will force the plane companies to test their receivers to make sure your cellphone will not interfere with them, if I understood correctly. However, the guard band makes sense anyway, as a transmitter might drift out of tune due to damaged components. It doesn’t really matter how good the receiver filter is; you can always hit that sweet spot when your crystal develops a crack or some PCB overheats.
Some might rightfully call it overreach of the FCC to regulate receivers, but it’s technically within its charter, and they’ve done it before (e.g. 47CFR § 15.121 – cell phone frequency block on scanners)
And they killed billions of dollars of professional wireless that we use in entertainment
So airplanes not only cause chemtrails but radiate us to cause cancer?
You got that wrong. Planes _receive_ the radiation. That’s turning us into _lizard-people_.
I was under the impression this was a radar like altimeter. They shoot the radiation towards the ground and report altitude based on the signal returned.
If that’s how it works, then they’re responsible for causing cancer to anyone who lives in the flight path directly below final approach paths.
If you can see a transmitter tower near where you are, you should likely deploy some tin foil shielding to protect yourself.
The signal levels near the ground for these altimeters are hundreds of times lower than your average FM broadcast signal.
This is a problem that can be resolved without too much difficulty if the correct process is followed:
1) Commercial aviation altimeters can be retrofitted easily and at minimal cost. I am shocked to find out how wide the front-ends of their receivers are compared to military equivalents. Even without the new 5G, this could compromise the safety of airline passengers. Military radar altimeters have strong receiver front-ends.
2) The regulatory agencies are themselves responsible for so-called “Spectrum Management”. The FCC has been too eager in its spectrum auction process to allow spectrum engineering to assess the risks. Once again, money talks.
3) Precedent for regulating receivers certainly exists: look into the measures that the television receiver manufacturers needed to implement on TV receiver front-ends to protect against receiver overload, many years ago. These sets were originally designed with completely insufficient front-end selectivity and IMD characteristics. It took years, as there were already many millions of TV sets without good front-end characteristics in consumers’ homes, but improvements were made. The ARRL has to be credited with the success of this effort, educating both the public and the FCC in many hearings.
For the Radar Altimeter issue, the numbers are much less and overall cost very achievable.
Hopefully, government agencies will think before they are so eager to ring up the Spectrum Auction cash register, but will they?
Every time I have seen people talk about “this is a problem for radar altimeters” I have looked at the 200 MHz of guard bands around 4.2 GHz – 4.4 GHz spectrum that these use and only had 1 thought.
Why do these devices not have a better input filter?
Like it isn’t hard to filter away junk when one has that much space to the nearest legal noise.
Meanwhile WiFi has to contend with staying “perfectly” inside of 100 MHz at 2.4 GHz, with far sharper filter roll off.
I have, though, seen some people say:
“Radar Altimeters are so sensitive that if one replaces a cable with a new one of a different length, then one has to recalibrate the altimeter. So 5G is going to be hell being that close on the spectrum.” As if sensitivity to time is related to frequency…
A radar altimeter is more or less just sending out a pulse of RF power at a known frequency, and then measuring how long it takes before a pulse of RF power reflects back. The longer it takes, the further away the stuff it reflected off is. Here we can add a better band pass filter and it won’t affect its operation, as long as the band pass filter is at the same frequency as the signal we send out.
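To put rough numbers on that round trip (illustrative values only; real units also calibrate out cable and internal delays, which is why swapping a cable means recalibrating):

```python
# Pulse-radar altimeter round-trip math: distance from echo delay.
# Illustrative sketch only, not any particular altimeter's design.

C = 299_792_458.0  # speed of light, m/s

def altitude_from_delay(round_trip_s: float) -> float:
    """Height above terrain from a pulse's round-trip time."""
    # The pulse travels down and back, hence the divide-by-two.
    return C * round_trip_s / 2.0

# A 2 us round trip corresponds to roughly 300 m of altitude.
alt = altitude_from_delay(2e-6)
```

Note that nothing here depends on what happens outside the transmit frequency, which is the point: a tighter input filter doesn’t change the timing measurement.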
In my own opinion, if an altimeter is affected by stuff outside of its own band of operation, then it is defective junk that shouldn’t be certified for use in airplanes. I.e., it should sharply filter everything outside of 4.2-4.4 GHz. A signal another 220 MHz away shouldn’t affect it in the slightest.
It’s an issue of first come, first serve. Plus of course the cost and time of certifying anything that goes on airplanes. A $10 change in parts might cost $1,000,000 to make it legal, and it won’t be quick.
As to the regulations themselves, they are often written based on lessons learned from smoking holes in the ground. So they make sense.
Don’t forget that a plane is moving, so you have to take the Doppler effect into account; the filter can’t be “exactly tuned” to the transmitting frequency, but must have some bandwidth.
First intelligent comment made.
For the armchair RF engineers who think it is that easy doing RF on a high-speed, moving vehicle: feel free to show us a few of your designs (that will pass certification).
I do however agree that for a company like Boeing, there is no excuse (considering the not-so-small amount of “donations” they receive from US taxpayers’ money). It is certainly (or should be) well within their capability.
The doppler shift of an airplane moving at 1000 kph relative to the ground is about 4 kHz. If you observe that much difference in the radar return signal, pull up!
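For reference, the arithmetic behind that figure: the one-way shift v/c·f gives the ~4 kHz quoted, while a radar echo actually sees the shift twice (out and back). Either way it is vanishingly small next to a 200 MHz band:

```python
# Doppler shift on a ~4.3 GHz radar altimeter carrier for 1000 km/h
# of closing speed. One-way shift is (v/c)*f; a reflected radar signal
# sees (2v/c)*f. Both are tiny compared to the 200 MHz band, so a
# bandpass filter is utterly unbothered by Doppler here.

C = 299_792_458.0        # speed of light, m/s
F = 4.3e9                # mid-band carrier, Hz
V = 1000 / 3.6           # 1000 km/h in m/s

one_way = V / C * F      # ~4 kHz
echo = 2 * V / C * F     # ~8 kHz for the reflected signal
```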
Yes, if the ground approaches at that speed, one has a rather imminent problem.
I did say bandpass filter.
(Preferably from 4.2 to 4.4 GHz, with -15 dBc (or more) of roll-off over the first 100 MHz around that.)
If you start attenuating right at your band edges, you will affect the phase on either end of your band. With common filter designs, anyhow.
Which might be workable, but needs to be thought out.
Quote my fields prof: ‘If this stuff was easy, History majors would make big bucks right out of school. Quit your bitching.’*
* yes I know he expressed that in a funny way…EE getting easier wouldn’t make History harder. But you get his point.
There is no real need to take our actual signal out to the band’s edge.
In practically all RF applications, one will have some buffer space near each end of the band so that one’s bandpass filter minimally affects the signal.
Especially here, where there is 200 MHz worth of bandwidth to play with, plus another 100 MHz on either side explicitly reserved as a buffer. (200 MHz in the US, making it even more trivial.)
“In practically all RF applications, one will have some buffer space near each end of the band so that one’s bandpass filter minimally affects the signal.”
Yep. And you know what’s the exception to that “practically all”? Radar transmitters.
They *do not care* about the effect that the transmit filter has, because all it does is generate the signal.
Think about it as a TDR. You start by generating the absolute freaking fastest impulse you can (wideband), send it out, and if you’ve got a wider input, the reflections come back looking like the impulse response of your transmit filter. So it doesn’t matter what your transmit filter was.
“As if sensitivity to time is related to frequency…”
Of course it is. Kramers-Kronig relations. Filter rolloff determines the minimum dispersivity of any physical filter.
Technically yes, to have a pulse of RF power one will have AM modulation, and associated frequency content around one’s carrier, but that wasn’t the main argument.
The main argument was that time domain reflectometry precision/“sensitivity” is a bit different from sensitivity to other frequency content or general noise. I.e., if we filter away undesirable content it won’t affect the precision of our TDR. With the exception of our filter being so sharp that it filters out the aforementioned AM-modulated content of the pulse used for the TDR, but that is an exceptionally sharp filter.
But one won’t need 100’s of MHz of bandwidth in one’s filter in this application.
A fair few of the filter curves I have seen concerned pilots pull out are the lousiest filter responses I have ever seen. As if the device didn’t have more than a single-stage LC filter. It can be much better than this without affecting the measurement.
No, you’re talking about the frequency response.
The phase response of a real filter is absolutely, 100% restricted by its frequency rolloff, because the device is causal. There’s a reason a Bessel falls off so slowly. Kramers-Kronig relation.
Any filter introduces group delay variation around the edge, and that group delay variation increases as you increase the order. Sharper filter, more dispersion, and the group delay variation spreads more into the passband.
The *simplest* rangefinding you can do (mix with itself, lowpass filter) is quite sensitive to both frequency and group delay differences between the transmit/receive paths.
I’m not saying you can’t deal with it – of course you can. But in the super-cheap ultra-simple design, it’s not as simple as just sticking a high-order filter and everything’s exactly the same. The solution, obviously, is “don’t do the ultra-cheap design.”
For aeronautics, the ultra cheap design is a bit inept.
“Any filter introduces group delay variation around the edge”
Isn’t that why we use a band pass filter? So that we only operate within its pass band, where it shouldn’t noticeably affect our signal, instead of operating close to where it starts filtering and actually affects our signal of interest.
“Isn’t that why we use a band pass filter? So that we only operate within its pass band, where it shouldn’t noticeably affect our signal,”
Nope. You’re still thinking only in the frequency domain. Just because you only have 0.1 dB of loss doesn’t mean the signal’s not affected. Go look at the group delay of typical “fast” bandpass filters, like Chebys or elliptics. The group delay effects occur well inside the band, even where the attenuation’s minimal.
If you care about the group delay of the filter, the best you can do is a Bessel, and those have very slow rolloff.
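A quick sketch of that trade-off with scipy, using generic order-5 digital lowpass prototypes (purely illustrative, not a model of any real altimeter front-end):

```python
# Comparing in-band group delay flatness of a Chebyshev vs. a Bessel
# lowpass. Generic digital prototypes, order 5, cutoff at 0.3*Nyquist;
# illustrative numbers only.
import numpy as np
from scipy import signal

N, WN = 5, 0.3
b_ch, a_ch = signal.cheby1(N, 1, WN)  # 1 dB ripple, sharp rolloff
b_be, a_be = signal.bessel(N, WN)     # flat group delay, slow rolloff

w, gd_ch = signal.group_delay((b_ch, a_ch), w=2048)
_, gd_be = signal.group_delay((b_be, a_be), w=2048)

in_band = w < WN * np.pi              # frequencies inside the passband
ripple_ch = np.ptp(gd_ch[in_band])    # delay spread across the band, samples
ripple_be = np.ptp(gd_be[in_band])
# The Chebyshev's delay varies far more inside the band than the Bessel's,
# even where its attenuation is minimal.
```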
“The group delay effects occur well inside the band, even where the attenuation’s minimal.”
Of course one needs to look at the specification that is relevant to what one is trying to do, and ensure that the issues caused by the filter are outside of what one actually works with.
But I will still say that with 200 MHz of bandwidth, it seems inept to have as lousy an input filter as some altimeters have, while others don’t. There are radar altimeters with sufficient filtering to consider even noise within the guard area to be of minimal impact. (Though some of these are digital in nature, not all of them are.)
And then there is also the 100 MHz guard area around the band, for being down at 4.3 GHz, there is a lot of bandwidth to work with here.
Or we can just accept the group delay issues and design around them further down in our system. After all, group delay is dictated by the filter, i.e. an offset, although one varying over the span. One could just “ignore” the measured value when the group delay isn’t satisfactorily flat, and only sample and hold it when in the flat region. If the sweep goes back and forth at a kHz or more, the sample-and-hold approach should still update faster than any mechanical system in the airplane is able to react. This isn’t that hard to implement in the analog domain with components from the 70s, and for the last 2-3 decades we can just characterize the group delay and compensate for it digitally. (And of course that would need to get certified, but from a technical standpoint, better filtering isn’t the end of the world.)
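The sample-and-hold gating idea can be sketched in a few lines (hypothetical values; a real implementation would key the gate off the sweep position, and `gated_readout` is just an illustrative name):

```python
# Sketch of gating a measurement on trust: only update the readout
# while the sweep sits in the region where the filter's group delay is
# flat, and hold the last good value otherwise. Hypothetical example.

def gated_readout(samples, in_flat_region, initial=0.0):
    """samples: measured values; in_flat_region: matching booleans."""
    held = initial
    out = []
    for value, ok in zip(samples, in_flat_region):
        if ok:
            held = value   # track while the measurement is trusted
        out.append(held)   # otherwise hold the previous good value
    return out

# The third sample (taken outside the flat region) gets ignored.
readings = gated_readout([300.0, 299.0, 123.0, 298.0],
                         [True, True, False, True])
# -> [300.0, 299.0, 299.0, 298.0]
```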
That’s… not how homodyne radar works. They don’t know when the received component is going to be at a certain frequency.
Again: of *course* you can deal with it. But the entire issue is that the way things were spec’d between the FAA and FCC, the device’s receive band was wider than its transmit, and the FCC just screwed it up.
It’s not bad design. The FAA said they have 10% bandwidth. They used it.
Good answer. I would add that it should be up to the FAA to require the altimeters to function within the limits determined by the FCC. For example, if the altimeters were incorrectly designed to, say, operate at 5 GHz, that certainly would not be the fault of the FCC.
“For example, if the altimeters were incorrectly designed to, say, operate at 5 GHz, that certainly would not be the fault of the FCC.”
Sure it would. If they *transmit* at 5 GHz and the FCC said the design was OK, that’s absolutely their fault.
This is just extending the idea as to what “FCC approval” for a device means. Before it meant that the design would not transmit outside of FCC regulations. This would add that a device would not fail to function in the presence of other FCC-compliant devices.
“This would add that a device would not fail to function in the presence of other FCC-compliant devices.”
This has been part of the CE requirements for EMI for a long time now. Though, this is only applicable on the EU market.
But for CE certification a device still needs to operate sufficiently to perform its function when exposed to EMI from other devices expected to exist in its operating environment. (And yes, the text is a catch-all. A device meant to operate in an EMI-shielded room can be far more sensitive to EMI than something meant to operate in an industrial shop next to the plasma cutter.)
You misunderstand. My example assumed the transmitters still transmitted at the FCC spec’d 4.2-4.398 GHz but the receiver was mis-designed to receive at 5 GHz. That’s not the FCC’s fault because they don’t have a requirement for the receiver. Maybe a better example would be a radio receiver that has poor selectivity and picks up two nearby stations that are separated by the FCC spec’d gap. That is simply a poor design and does not need to be regulated by the FCC. Consumers will take care of that problem by not buying the design. In the case of the altimeters it is a safety issue that should be regulated by the FAA, just like any other FAA regulation.
“That’s not the FCC’s fault because they don’t have a requirement for the receiver.”
Yes, that’s the entire point here. It’s a mistake that the FCC requirements are purely transmit-focused.
“would be a radio receiver that has poor selectivity and picks up two nearby stations that are separated by the FCC spec’d gap. ”
This is the point: the requirements on the receiver’s possible interferer are specified by the FCC, so it makes sense for the FCC to be the ones confirming that. Asking companies to interpret what *they* think the nearby interferers can transmit at is just adding a layer of possible confusion.
Proximity – Power. Distance. Frequency.
The previous user was a few hundred watts, from 30,000 miles away, on the same frequency that is contested. Now it is kilowatts, a few thousand feet away. Every filter has limited effectiveness; the one they used for the previous neighbor wasn’t designed with the expectation that the FCC would re-zone the allocation.
Radar altimeters work by producing a chirp tone (swept RF carrier) and then mixing the received RF back in with the transmitted signal. This mixed result is then LPF’d, yielding a low frequency that is proportional to distance. (This is also how cheap prox fuses work.)
It’s a cheap, effective, and ancient design, but it’s wideband and prone to interference.
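A minimal sketch of that mix-and-LPF arithmetic, with assumed chirp numbers (a real unit sweeps somewhere in the 4.2 GHz – 4.4 GHz band; the slope here is purely illustrative):

```python
# FMCW altimeter arithmetic: the beat frequency out of the mixer+LPF
# is the chirp slope times the round-trip delay, so it is proportional
# to range. Chirp parameters below are assumptions for illustration.

C = 299_792_458.0           # speed of light, m/s
SWEEP_HZ = 100e6            # chirp frequency excursion (assumed)
SWEEP_S = 1e-3              # chirp duration (assumed)
SLOPE = SWEEP_HZ / SWEEP_S  # Hz per second

def beat_frequency(altitude_m: float) -> float:
    """Mixer output frequency for a ground return at the given range."""
    delay = 2 * altitude_m / C  # round-trip time of the reflection
    return SLOPE * delay        # Hz out of the lowpass filter

# At 300 m the round trip is ~2 us, giving a ~200 kHz beat.
fb = beat_frequency(300.0)
```

This also shows why a wide front-end is a liability: any in-band interfering tone gets mixed down the same way and can masquerade as a ground return.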
The altimeter has a dedicated part of the spectrum, and a guard spectrum around it.
Its input filter should be sufficient to ensure that any undesirable interference outside of that guard spectrum is attenuated enough not to affect its measurement.
If it is affected by stuff outside of this very generous amount of spectrum allocated to it, then it has a frankly lousy input filter and shouldn’t be certified for the application.
If the altimeter is old and designed without expecting all this extra usage of the RF spectrum, then I would say that it is just outdated.
Just like a Fluke from the 70s is outdated and can’t be used by an electrician (depending on jurisdiction) because it doesn’t have a CAT II rating written on it (even if the exact same meter is still built with that rating written on it), an airplane shouldn’t be allowed to use an altimeter susceptible to interference far outside of its band of operation.
Equipment gets outdated over time, and more often than not ends up losing the certifications it once had. This isn’t anything special. New standards, regulations, and industry practices come in every now and then, and some equipment simply isn’t good enough for the new times.
Understandable. I don’t know how it is (or was) in other countries, but here in Germany there was a similar issue with TV sets. If I remember correctly, it was like this:
Back in time, TV sets originally had good filters, shielding, lots of tubes and were pricey.
Then, in order to cut costs, TVs were built much more cheaply, and the manufacturers and the postal office had a “gentleman’s agreement”:
TVs that turned out not to be robust against TVI/RFI would be fixed for free by the manufacturer once the customer in question complained.
The end of the story was that tens of thousands of TVs failed to work properly once CB radio started in the 1970s (~1975 here in former W-Germany).
And that wasn’t just the case when CBers used linears.
The TVs were so badly shielded/filtered that even legal 1W transmissions caused TVI!
Thus, both CBers and radio amateurs were blamed by ordinary people for causing radio interference.
This caused a lot of bad blood, though it was all the TV makers’ fault. And the fault of a semi-corrupt postal office/regulator, too.
What did we learn from this? a) Greed isn’t cool, neither is maximizing profits. It will hurt the people in the process. b) That “cheap” TVs in plastic chassis from TVI ridden countries (Taiwan, Japan etc) were sometimes superior than venerable German companies/brands. ;)
I think it was back in 2011 that Lightsquared had their proposed broadband satellite service cancelled over potential interference with GPS receivers. Lightsquared’s frequencies were right next to those of GPS. It had initially been OK’ed by the FCC, I think, but Garmin did a study after the approval that showed that some GPS devices wouldn’t work due to the strength of the Lightsquared signals. I thought that was kind of ironic at the time, as I thought the Department of Defense had designed GPS to be pretty immune to interference, and if there was a problem it was probably due to taking shortcuts in the receiver design.
It certainly sounds like an FAA problem.
The real problem is that something is made “that works” without any particular care to meeting a spec (if there is a spec).
There is lots of crappy engineering out there justified by “it works” with serious problems under the hood.
Indeed, this is where regulatory action has its place.
We were talking about this over at Hackaday HQ last week, and I was wondering how any of this was just coming out _now_, when there was a public request for comments a couple years ago. Where were the airplanes back then?
And the answer is that they didn’t know. They just didn’t notice that their radar altimeters were reading out of band until there was significant band-adjacent radiation.
Which puts the FCC in the super tough spot of having to regulate 5G “unfairly” to keep planes in the air. You can see why they’d not want to be in this position again, and how the “obvious” solution is then to get a mandate to verify receivers.
I wonder if this could be drafted a little more narrowly, though. B/c the problem will only really occur when there’s safety-relevant equipment on the line, as here. Otherwise, they’d just say “tough nugs” to the receiving device.
I think it might make more sense to actually have multiple levels of FCC approval. Like, all devices have to have transmit level verification, because it’s the law. But make receive interference verification be optional (I dunno, “gold” level certification), and then have other agencies specify that that level is required for certain devices.
Having searched around for various documentation on a range of altimeters and studies, I frankly wonder more about the build quality of the devices above most other things.
The ITU did a study back in 2013 on a range of altimeters, both analog and digital ones. They reported frequency stability ranging from 100 (hopefully Hz; it didn’t specify) to “No crystal” and “Not applicable” – how the stability of an oscillator can’t be applicable for this application is beyond me…
I have seen some other papers (I don’t remember where) that talked about a filter response of about -6 dBc from 4.2 GHz down to 4 GHz. And that isn’t much of a filter.
Though, most of these devices work by simply sending out an FM signal and watching how far behind the reflected signal is from the output. Simple and works, and won’t be affected much by an excessively sharp band pass filter, so to a degree a filter could be retrofitted to older altimeters of this design.
But I do wonder what level of cost cutting these devices are seeing. It isn’t that hard to build a proper filter.
When given a chunk of spectrum, one should more or less always consider everything outside of that spectrum to be a noisy hell that should be filtered. After all, one has no clue who can use that spectrum or for what application, so expect anywhere from a few tens of mW and up to arrive at one’s antenna. Preparing for more than a watt or two is likely overkill, though; getting that as a neighbor is rather rare, and then one should file a noise complaint. After all, the FCC and other regulatory bodies elsewhere in the world ask the neighboring spectrum owners about potential issues for a reason.
But it seems like builders of radar altimeters just looked at their 200 MHz of spectrum + 100 MHz (200 MHz in the US) of guard spectrum on either side, and the still unused spectrum around that and simply said “Who needs a filter? I don’t!”
I simply say that it is the aeronautics industry’s problem. Altimeters that don’t sufficiently filter content outside of the guard spectrum simply shouldn’t be certified for use.
“how the stability of an oscillator can’t be applicable for this application is beyond me…”
Because it doesn’t matter what the actual transmitted signal is, so long as it’s stable on ~microsecond-ish timescales, and it’s basically impossible to have an oscillator so bad it wouldn’t work. You mix the reflected signal with the transmitted signal.
“Simple, it works, and it won’t be affected much by an excessively sharp band-pass filter,”
It just depends. If you’ve got a nice flat response transmit/receive antenna, a wider receive filter means the implementation’s trivial. You just tap the transmit, mix, and filter, and you’re done. If you use the same filter on the transmit and receive side, it’s a bit more complicated, because the receive signal sees the filter twice. I mean, it’s not difficult or anything, you just put a second filter after tapping the transmitter. But, OK, now signals are attenuated more, etc. There might be specifics in the design that make that less effective, I don’t know.
I have worked with people who’ve built the equivalent of these for scientific observations, though, and it’s definitely true that ideally you don’t want the same transmit filter as receive.
Letting government regulate receivers is a dangerous proposition. A slippery slope into letting them control what gets into the hands of consumers.
That’s strange. 3.7–4.2 GHz has been (and still is) in use for satellite downlinks and terrestrial microwave links for more than half a century, in all parts of the world. Never heard of an airplane falling out of the sky because someone switched on their telly or picked up the phone…
All for 5G Marketing Hype.
It seems that no one remembers that 5G hurts Doppler weather radar.
https://www.9news.com/article/weather/weather-colorado/interference-5g-weather-forecasting/73-a0b30746-33f4-45ef-bca7-a7bf8dd9bbc4
i agree that it is primarily an FAA issue but i’m surprised to see such a clear case for it being the FCC’s domain as well.
“Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.”
we’ve all seen that dozens of times and i’ve never really known what to make of it. but now i understand: a device which is an unintentional receiver of a band outside of its license stakes a claim on that band almost as surely as a transmitter does. the fact that the radar doesn’t perform as required in the presence of licensed interference produces political pressure on the fcc to honor that illegally staked claim. i think i’m glad they’re ignoring that pressure and allowing development of adjacent bands, but i agree that the airwaves will be in a stronger position if the pressure is anticipated and managed ahead of time by regulating receivers.
thanks for the article
The adjacent band was repurposed to a different type of service (5G cellular), possibly bringing new, unanticipated requirements. Radio design involves a large set of trade-offs, including the steepness of the front-end filter roll-off, which trades off against in-band ripple and loss, which in turn affect receiver noise figure (and sensitivity). Improving roll-off (or out-of-band rejection) comes with increased cost, size, weight, and possibly power consumption. No one designs for unknown, possible future requirements for significant additional rejection, and these are systems with long product lifetimes. Also, receivers are sensitive to both in-band and out-of-band signals: while filtering improves out-of-band rejection, splatter from out-of-band transmitters can be in-band for the receiver, raising the noise/interference level and reducing sensitivity. There was a lot of finger pointing and poor coordination on the reallocation, but it’s naive to say the existing equipment was poorly designed or cheap. This comes down to a coexistence issue, and it could have been handled better earlier in the process.
For anyone interested in more background:
The FAA funded RTCA study, parts of which are disputed by the cellular industry group, is here:
https://www.rtca.org/wp-content/uploads/2020/10/SC-239-5G-Interference-Assessment-Report_274-20-PMC-2073_accepted_changes.pdf
and slides here:
https://www.rtca.org/wp-content/uploads/2020/12/Slides-5G-Interference-Risk-to-Radar-Altimeters.pdf
One of the CTIA responses is here:
https://ecfsapi.fcc.gov/file/10304185435620/210304%20CTIA%20Further%20Response%20to%20RTCA%20Report.pdf
Also note the reason the FCC is involved is that the FCC is tasked with managing the public radio spectrum resources in the U.S., which includes setting transmitter standards, allocating uses of bands, etc. The FAA is responsible for flight and physical airspace issues.
In fact, for all the people talking about “the FAA should be doing this” – this really was the entire problem. The FAA set the original interference limits, and *they* called for an exclusion band (+/-10%, or *420* MHz) that didn’t match the FCC’s definitions.
The people talking about having the FCC specify the requirements for receivers know what they’re talking about here – the entire problem came because one branch set the transmit requirements (the FCC) and one branch set the receive requirements (the FAA), and things got lost in translation between the two.
And for those wondering why satellites don’t cause problems, it’s because their power levels are soooo far below that it doesn’t matter.
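A rough link-budget comparison shows why. All EIRP and distance figures below are assumed ballpark values for illustration, not numbers from the linked reports:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def received_dbm(eirp_dbm: float, distance_m: float, freq_hz: float) -> float:
    """Received power for an isotropic receive antenna (ignores antenna gain)."""
    return eirp_dbm - fspl_db(distance_m, freq_hz)

F = 3.9e9  # roughly the top of the US 5G C-band

# Assumed ballpark figures: ~10 kW EIRP GEO downlink at ~36,000 km,
# ~1 kW EIRP base station beam at 1 km.
sat = received_dbm(eirp_dbm=70.0, distance_m=36_000_000.0, freq_hz=F)
twr = received_dbm(eirp_dbm=60.0, distance_m=1_000.0, freq_hz=F)

print(f"satellite: {sat:.0f} dBm, tower: {twr:.0f} dBm, delta: {twr - sat:.0f} dB")
```

Even with these crude assumptions, the tower signal arrives tens of dB (many orders of magnitude) stronger than the satellite downlink, purely because of distance.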
The slides you linked are a great intro to this, and basically confirm everything I suspected.
Worth noting that no American plane (nor any from other places) equipped with this type of altimeter has actually been affected by 5G. At all.
The whole “5G is going to make planes drop out of the sky” scare was an issue the airlines raised in the US only.
Did anyone hear/read about any airline saying they would no longer fly to France or Japan because they deployed 5G and their planes would run the risk of falling out of the sky if they flew to those locations?
Well, apparently other countries use different bands (effectively a larger guard band) and lower power levels, so it’s not a direct comparison. In the U.S., the bands near airports were not turned on until after further testing. In any case, you would not want to wait for airplanes to hit the ground before deciding whether there might be an issue.
So what you’re saying is that the altimeter has to work with 2 bands, one here and one there. The one there worked fine, but the one here did not.
Actually it’s the other way around: the similar 5G bands allocated elsewhere are a little further away in frequency (not all countries/regions have the same 5G bands – and there are _many_), in some cases have geographic limits on how close to airports they can be (in Canada, IIRC), and have antennas angled downward instead of horizontally, reducing the power in the main lobe that would affect an aircraft (in Japan and Europe, IIRC).
Guys there is so much misinformation in the comments I don’t even know where to begin.
I guess firstly, many of the altimeters in question are on older aircraft and have been safety certified, a process which can take as long as a decade. Aviation is a slow but safe business. It’s not a simple matter of upgrading or replacing them: any change needs a lengthy process to implement, test, and prepare upgrades, then fleets of aircraft have to be scheduled for maintenance, and time is needed to do all of this safely. The FAA has many procedures in place that all certified aircraft must abide by, and those rules are written in blood. The challenge is that these instruments were designed many decades ago, and they have been built well, resilient to everything that has changed, especially in the last decade or two. The problem was that a new technology was authorized that COULD potentially interfere with them, and instead of putting reasonable safeguards in place and allowing sufficient time to test and validate, the telecoms were eager to turn the radios on.
Why is it such a big deal? Well, this is the radar that helps guide the plane down to the ground from about 2,500 ft. It is the FINAL instrument used to determine altitude, in conditions when the pilot cannot visually see the runway due to cloud cover, fog, or other weather. The pilots may sometimes only finally see the runway lights just 200 ft before touchdown; they rely on instruments to get them to that point safely. If there is ANY chance that that instrument may not work, then the safest thing is not to land at that airport. They are doing it so they don’t risk YOUR life. Sure, maybe it’ll be OK; sure, MAYBE it might not interfere, but aviation isn’t about taking unnecessary/undue risk.
The reason this isn’t an issue in other countries is because they didn’t license the same bands for 5G. So this issue is specific to the US where the 5G band licensed was adjacent to this technology.
The FCC trying to regulate the receivers doesn’t really make sense, as the receivers already go through their own rigorous certification process. Even if receivers had been regulated, at the time they would have passed. It’s the intermixing of new and older generations of technology that is the problem. Do you complain that your TV from the 1990s can’t pick up modern channels? That it displays static and noise, that it can’t tune into modern HDTV channels? No, that’s not fair: it was designed to work with previous-generation (analog) broadcasts, and nothing was wrong with that TV. When a newer technology came out (digital TV), the old had to be separated from the new so that, at least for a while, it wouldn’t disrupt those TVs. Then, slowly, the older broadcasts were deprecated, with notices sent out saying those services would be shut down. It didn’t happen on a whim, or overnight, or even over a period of a year; it was phased out over a decade, and worldwide that phase-out from analog to digital is still slowly happening (20–25 years later).
The lesson here should be for the FCC to do their job and research their bands, and allow for those bands to safely be utilized without causing this kind of chaos.
Yes, it seems many people in the community here don’t have a radio background. This general topic is called “coexistence” in the industry, and a lot of effort went into 5G to study and head off many coexistence issues. The FCC and the two related radio industry groups really dropped the ball on this particular one; they had four or five years to work it out, since the FCC announced the re-allocation in 2016. Note that other countries do have adjacent bands, but with a slightly larger guard band.
(see slide/page 12 in the slide deck I linked above for a band summary)
“The FCC trying to regulate the receivers doesn’t really make sense as the receivers already go through their own rigorous certification process. Even if receiver were regulated, at the time, they would have passed.”
The receivers are regulated by the FAA, who specified guard bands of +/-10%, or 420 MHz. The FCC specified guard bands of 200 MHz. I’d like to believe that if the FCC had been the one spec’ing the receiver interference range, they wouldn’t’ve made that mistake.
“The lesson here should be for the FCC to do their job and research their bands,”
Yeah, basically. I mean, it’s not exactly that the FCC has to *regulate* the receivers, but what it should be doing is making sure to understand both the transmit and receive bandpasses, because thinking that they’re always the same is a huge mistake.
It’s totally the FCC’s fault, although the basic idea of having the FAA be the one who sets interference limits (and then the FCC just… not knowing what they were, apparently) isn’t the greatest way to set things up.
Doesn’t this essentially mean that anyone who can build a modest C-Band transmitter, which isn’t all that hard, could jam aircraft radar altimeters? That would seem to be a security issue. Less promiscuous filters and some smarter signal coding might be wise for reasons beyond 5G.
Using FMCW altimeters in safety-critical applications was never a good idea. Never. It’s insane how they are often directly wired into flight control systems, overriding other systems.
They should have been replaced with digital pulse radars literally decades ago.
Sounds like a good technology – they could employ spreading codes, etc. to be more immune to interference.
Great idea, but who’s going to make the billions in investment to design and certify the new technology for old planes? We still have WW2 Era planes with the associated technology flying, and each plane type would require a different device certificated to work with it.
As a former broadcast engineer and Ham Extra, I was taught that the FCC was there to regulate transmission (to preserve the spectrum by avoiding contention). Any signal could be received (and used) if you could pick it up. That longtime fundamental practice was discontinued when the mass media producers (Sat TV) got the FCC to make receiving their signals illegal (before encryption).
I always thought that if they don’t want me to watch their shows, they shouldn’t bombard my house with their signals!
Unfortunately, the head of the FCC is a political appointment, and the commission is more concerned with “selling” spectrum than preserving it.
A similar situation concerns the Ligado terrestrial 5G internet system that’s been “brewing” (and litigating) for the last few years. Their licensed frequencies are very close to GPS, and GPS users have been concerned that “old” receivers would be swamped by the Ligado signal. At the time, I thought that GPS was too critical to take chances with, even though poorly filtered GPS receivers were identified as the source of the potential problem. I was wrong.
Radar altimeters have been around for a long time. It makes sense that some of the old ones are not very selective. Perhaps it’s time for them to be replaced.
With receiver regulation, we might see the little vans the UK used to have, and start a whole new taxation scheme for those in the US :-)
Hopefully the FCC would provide long-term technical specification guidance and be tough on those that want to keep their 30-year-old equipment.
How could the altimeter problem become an issue so suddenly? 5G has been around for quite a while (and in documented plans for a long time).
Two key points that no one seems to be considering.
Radar altimeter specs were developed back in the WW2 era, and many units from back then are still flying. As a result, band allocation in that area has been designed around the real-world performance of the existing users, and while the receiver bandwidth may not be codified, it has always been protected, meaning that manufacturers never had a reason to restrict the out-of-band performance of altimeters.
In order to make this change, old planes will either have to be grounded, or manufacturers will have to spend billions designing and certifying new models of altimeters that will work on old model planes, which were designed and built according to the rules at the time.
What the FCC is doing is rewriting the past, ignoring the reality on the ground, saying that the old technology should be able to comply with the modern guard-band width instantly, and tossing out years of practice and implementation that gave aviation more space.
And while for the armchair experts it may seem simple to stick a band-pass filter in to chop down the receiver sensitivity, you end up attenuating the edges of your assigned band and start running into problems with excluding Doppler-shifted reflections. So you can’t just add a new filter to an old unit, because it will limit its designed functionality in unpredictable ways.
The FCC was negligent to sell off this spectrum without considering that there is already widespread life-safety use of that edge spectrum; while they may wish it was not there, they can’t just rake in their billions and ignore it. The FCC should have actually done frequency coordination ahead of time and developed a plan to clear the spectrum at issue PRIOR to selling it, but they have shown from the beginning that they are in bed with the telecom companies. Instead of dealing with the issue through adult and technical means, they decided to use the nuclear option: invade what has always been open band space for altimeters, and force the airline industry to spend billions trying to catch up with a change the FCC decided to make.
And yes, it’s fully possible to design a radar altimeter today that can ignore 5G transmissions, but that was never before a need, test parameter, or design requirement. Adding it will require redesigning equipment that has some of the most stringent testing and certification requirements in the world. It takes time to develop it, to develop test equipment and procedures for an entirely new spectrum landscape and potential interference source, and to certify it to FAA specifications. This process is going to be simply too expensive for many users with low-volume old altimeters, meaning they won’t be able to fly anymore, because no manufacturer is going to build and certify a custom altimeter for their plane unless they are willing to pay the true costs of the development.
So basically the FCC has decided to force the aviation industry to spend billions of dollars, so that it can allow the telecom industry to make billions of dollars, while it makes billions of dollars, rather than taking away a little sliver of spectrum from the telecoms to protect the existing users working under the existing rules.
The crux of the problem has more to do with the power levels of the new base station equipment. The specifications for the 5G advanced antenna systems are 16×16 arrays with an EIRP in the thousands of watts. These base stations are able to create focused beams with blinding levels of power. And given that the height of these towers puts the aircraft in line of sight during the last segment of the approach to the runway, where the altimeter’s use is most critical, it’s not a good situation.
The altimeters certainly have pre-select filters before the LNA, but there is no such thing as a brick-wall filter. Typically a 3- or 4-section band-pass filter will achieve 20–30 dB of rejection going from 4.2 GHz down to 3.98 GHz, but that’s not enough to keep a receiver from being blinded by a base station operating <1 mile from the runway. You can certainly add more sections to the filter, but that will compromise sensitivity and accuracy: each section added to the filter adds group-delay variation over the passband, and a five-section filter can have up to 10 ns change in group delay between the 3 dB points. At 2 ns per radar foot, that group-delay variation will smear the target return by up to 5 ft. Sensitivity is also a critical parameter that can’t be compromised. Reflected radar signals decay as 1/distance^4, which requires the receiver’s sensitivity to reach much lower power levels than a point-to-point radio receiver, where the path loss goes as 1/distance^2.
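A quick sanity check of two of those figures, as toy Python arithmetic:

```python
import math

# 1) Group-delay smear: the radar round trip is ~2 ns per foot of altitude
#    (light travels ~1 ft/ns, and the echo goes out and back), so 10 ns of
#    group-delay variation across the passband smears the return by 10/2 ft.
NS_PER_RADAR_FOOT = 2.0
smear_ft = 10.0 / NS_PER_RADAR_FOOT
print(f"range smear ~ {smear_ft:.0f} ft")  # -> 5 ft

# 2) Radar returns fall off as 1/R^4 (spreading loss out AND back), versus
#    1/R^2 for a one-way link. Doubling the distance therefore costs the
#    radar twice as many dB as the link.
def delta_db(path_loss_exponent: float, distance_ratio: float) -> float:
    return 10.0 * path_loss_exponent * math.log10(distance_ratio)

print(f"radar: {delta_db(4, 2):.1f} dB, one-way link: {delta_db(2, 2):.1f} dB")
```

Which is why a radar receiver has to dig far deeper into the noise than a comparable communications receiver, and why front-end sensitivity can't simply be traded away for extra filter sections.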
It’s not like the FAA let altimeters go onto aircraft willy-nilly, without regard to performance or interference. The RTCA has a very comprehensive radar altimeter performance standard in DO-155, and the SAE has a comprehensive interference standard, ARP5583, that applies to radar altimeters and other radio equipment installed on aircraft. The introduction of the new C-band base station equipment will force some updates to these standards, but it doesn’t seem like any of the necessary technical parameters for those changes were communicated leading up to FCC Auction 107. It doesn’t look like this has been completely ironed out either, because the telecom carriers are still conceding to operational distance limits around airports and limits on the upper end of the spectrum they can turn on.
The part people (not just on HaD) keep forgetting:
FCC certification is not a guarantee of fitness-for-purpose.
FAA certification *IS* such a guarantee.
The FCC is basically looking at the fact that when the FAA certifies a mis-engineered device, this forces large, expensive (as in, “ties up spectrum that people want to pay money to use”) guard bands. Further, if something goes wrong, fingers point at the FCC even though it’s not their problem. And, “expanding turf” is never viewed as a bad thing; size, complexity, and relevance (to high-dollar users) of the turf directly controls appropriations, among other things.
They weren’t “mis-engineered” at the time. That’s not a practical or useful point of view. The FCC handles allocation of the US spectrum so it can be effectively used, and in many cases not interfered with. They set the standards on out of band emissions, for example, some of which vary depending on the services in the adjacent and further bands in question (esp. for critical military, police, aircraft, medical equipment, etc. bands). So this very much is under the FCC’s charter. The related industry groups usually work together to hammer out specs and test requirements to ensure coexistence. This was not done in this case, even though the FCC announced the change in 2016, about five years ahead of time.
No, the real issue is how greedy the cellular companies are and how much regulators are willing to listen to them. I work with C-band, and when this was first proposed I thought: okay, limit the frequencies where interference can occur and you don’t really need ANY of the requested C-band bandwidth to be cut up. With segmentation of use and careful deployment we can share that spectrum. Satellites are line of sight, and anything using C-band isn’t on a vector that should be troubled by ground interference, so long as you have the towers pointed at the ground and make allowances for LICENSED transmit and receive stations.
Then I saw the later plans… and boy, were they grabbing for the brass ring. Unlike the 2 GHz spectrum grab by Nextel, where the carrier had to pay for and replace legacy equipment that was no longer usable after the change, this new plan was just “give the cell companies what they want… and too bad if it’s your spectrum that’s getting screwed!” The arguments for fiber and Ethernet distribution were pointless. One of the things satellite does well is cover terrain: if you have a clear view, you have a signal. Not every part of the country has fiber or broadband access to distribute media.
So this latest issue with the altimeters is being thrust upon the aviation industry by the same bunch that was completely fine with just turning over spectrum to cellular carriers with very little oversight. Why must other industries analyze the effects of new deployments? C-band satellite and radar altimeters got along just fine for decades; now both are being inconvenienced by a problem they didn’t cause, all so some regulators can make sure they get to serve on boards after they leave government service and enjoy a golden parachute.
I have 5G. When it works it is great, but 4G wasn’t so awful that I couldn’t do my work. At what point do we look and say “that’s enough. We should stick it out and concentrate on other things?”
When you hear testimony about Broadband rollout and Unserved or Underserved communities in the country, you wonder how they allow companies to roll out new technology without fully deploying what they already have.
Most comments on the article are by people who have *zero* understanding of aviation, the aviation equipment lifecycle, and the enormity of the cost of changing all the radar altimeters in the fleet. The FCC was warned about the impact of the 5G band on existing equipment years ago but auctioned off the bandwidth anyway. A group of experienced electrical and radio engineers flagged problems with the auction from the beginning. If you want a good description of the real problem by someone in the aircraft industry, I highly recommend the Blancolirio YouTube channel.
https://www.youtube.com/watch?v=942KXXmMJdY