What happens when part of a radio distribution system listened to by over half the country needs to be replaced? That was the challenge facing the BBC’s Research and Development team last year, and if you’re in the UK, you wouldn’t have noticed a thing.
[Justin Mitchell] is a principle engineer in R&D at the BBC, and this past year had to migrate an audio coding system installed in 1983 onto new hardware, owing to failing circuit boards and obsolete components. The coding system is used to get audio from a central source to broadcast transmitters all over the country. The team had to design and build a replacement module that would essentially replace an entire rack of ancient hardware, and make it plug-and-play. Easy, right?
The new module, called the NICAM Codec, handles data combination, RDS data transmission (this is what displays song names on your car radio), 6-channel audio coding and decoding, CRC insertion and checking, and data splitting. It is all based on a Xilinx Zynq chip, which combines an FPGA and an ARM processor on a single device, and the design had to comply with all relevant European directives to be CE marked.
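The CRC inserter and checker stages are conceptually the simplest part: a checksum is appended to each outgoing data block at the studio end and verified at the far end before decoding. Here is a minimal sketch in C of that round trip; the polynomial (a generic CRC-16-CCITT), block size, and framing are placeholders, since the article doesn't say what the BBC's distribution chain actually uses.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Generic CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF).
 * Placeholder parameters: the real NICAM distribution kit may use a
 * different polynomial and block layout. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* "Inserter": append the CRC to a block before it goes down the line. */
static void crc_insert(uint8_t *block, size_t payload_len)
{
    uint16_t crc = crc16_ccitt(block, payload_len);
    block[payload_len]     = (uint8_t)(crc >> 8);
    block[payload_len + 1] = (uint8_t)(crc & 0xFF);
}

/* "Checker": recompute at the receiving end and compare. */
static int crc_check(const uint8_t *block, size_t payload_len)
{
    uint16_t received = (uint16_t)((block[payload_len] << 8) | block[payload_len + 1]);
    return crc16_ccitt(block, payload_len) == received;
}

int main(void)
{
    uint8_t block[66] = {0}; /* 64 bytes of coded audio + RDS data, 2 bytes for the CRC */
    crc_insert(block, 64);
    printf("block %s\n", crc_check(block, 64) ? "OK" : "corrupted");
    return 0;
}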
On November 20th 2015, the new module was installed in the basement of New Broadcasting House, and at 4:15AM the system went live without a hitch — and no one the wiser.
Are you part of an engineering team that solves problems the general population doesn’t even know exist? Do you have any stories about how you saved the day, and no one even knows it? Tell us about it!
[via Hacker News]
Cutting edge tech to reproduce inefficient 1970s NICAM tech. Why?? Use 21st century techniques and do it on a Raspberry Pi instead. You’re going to have to upgrade the remotes soon anyway, BBC.
Go easy on those who don’t have your vast experience with embedded systems. We all know the Pi will obviously outperform an FPGA, but those guys might not have heard of it.
Hurry! I hear that Elon Musk is in need of an Arduino sketch from a self-balancing robot to help with his SpaceX thing’s landing.
+1
However, I heard that the recent global shortage of NeoPixels will render Arduinos completely unusable.
Well meme’d, my friend.
I sense some sarcasm here, but I’m not sure it’s fair to say a Raspberry Pi outperforms an FPGA. They are different classes of devices altogether and as a result have different strengths and weaknesses. The Xilinx Zynq and Altera’s SoC parts have ARM cores, just as Raspberry Pis do, but their FPGA fabric will blow any ARM away at most raw data processing (audio/video encoding and similar tasks). I’ve worked on developing broadcast video encoders that use FPGAs to encode multiple channels of SD/HD video in real time, which is something I don’t think a Raspberry Pi can do.
The Zynq’s ARM core is available on the $150 Parallella-16 (dual core) hobby board, and indeed its CPU specs are the same (per core) as, or lower than, the Pi 2’s (a quad core). The FPGA component does change the nature of comparing transcoding performance, but note that the Pi 2 comes with several GPU-based video format decoders enabled from the factory. Most people have to compile some repo to get the Pi 2 working well, but h264 decoding of 1080p video works fine over HDMI.
The Zynq only makes sense if one is doing custom codecs, or avoiding patent licensing costs. The cost of the Zynq’s Cortex-A9 hardware is hard to justify against a $35 Pi 2 and its Cortex-A7 for most operations, although there are notable performance differences in how these CPUs handle floating point arithmetic, for example.
In this case, streaming/decoding video data across a network is quite doable on the Pi 2, but transcoding HD in real time is likely not computationally feasible. There are exceptions, given that modern cameras often offer h264-encoded data straight from the camera chip itself; I clock in at around 15% CPU use on a single core to stream raw 720p camera data across a LAN.
Recommended reading:
https://en.wikipedia.org/wiki/Blind_men_and_an_elephant
Decoding is trivial compared to encoding (intentionally so). To suggest that you know more about the matter than the Beeb R&D dept (who I’ve always found impressively knowledgeable) is laughable.
I’d lay good money that the CPU core is not doing any of the ‘heavy lifting’ at all, but just the management & IO handling.
“Go easy on those who don’t have your vast experience with embedded systems. We all know the Pi will obviously outperform a FPGA, but those guys might not have heard of it.”
I laughed.
In the case of compression, the GPU in the Pi will likely outperform the Zynq.
This may come as a surprise to people around here but when you have critical infrastructure in need of an update you don’t replace it with a kid’s toy simply because you can or it would be cheaper. There are too many things at play here.
I love the Pi. But critical infrastructure which could be needed in times of a national security emergency just isn’t the place for one.
This is a good response. There’s no need to be an asshole like 0xfred.
The Pi has some serious hardware on the board. Just because it is “Marketed” as an educational platform doesn’t make it a “Toy”. Actually I find your post condescending.
The Raspberry Pi and other similar low-cost single-board computers are simultaneously the best and worst thing to happen to hacking. On the positive side, they let people quickly put together solutions without doing a lot of heavy lifting. And that’s great, because there are tons of hacks that work out great with them. For many applications it’s grotesque overkill, but given the cost, who cares?
The negative comes when you have that perfect storm on HaD: People who don’t understand the limitations of the Raspberry Pi who are presented with articles that don’t discuss the requirements of the application. In other words, people who don’t know what they don’t know. The result is an indirect form of the Dunning-Kruger effect, except instead of overestimating their own capabilities, they overestimate the capabilities of hardware they use.
Here’s a suggestion: Instead of assuming that the BBC engineers don’t know what they are doing, flip that around, look at the capabilities of the Zynq, and ask yourself, “what must the requirements have been to require such hardware?” Doing that will do two things: First, you’ll get a better appreciation of a technical domain you don’t understand. Second, you’ll learn more about the edges of the Raspberry Pi and see where its limitations make it unsuitable for different classes of work.
You would chew up the Zynq replicating, in fabric, the GPU that’s in hardware on the Pi’s SoC. Where the Pi’s SoC falls down is in hobbled I/O, especially due to the likes of the on-die AMBUS variants. NICAM is dirt-simple ancient technology, which lets them “get away” with using the Zynq. Replacing NICAM is looooong overdue in the dinosaur broadcasting field.
I am working on a bot that auto-hacks all blogs and corrects ‘principle’ to ‘principal’ where required.
But maybe he engineers principles. He might even be a Principal Principle Engineer.
“…at 4:15AM the system went live without a hitch — and no one the wiser.” The highest praise possible for a live upgrade.
Yes, that is true. Alas that also sometimes applies to management – not noticing what a terrific job the nerds in the basement did. I hope this BBC R&D team got a pat on the back for this. Must have been a nail-biter.
This is why you make live upgrade projects sound scary to management: partly so they consider whether they actually need to live-upgrade, partly so that if it blows up everyone knew the risks, and partly so everyone knows what _didn’t_ happen when you did a competent job.
In this case, it could be very scary if it broke… http://mentalfloss.com/uk/trivia/34175/how-bbc-radio-4-could-be-saving-us-from-nuclear-war
Management never understands, when things go smoothly in a case like this, just how hard it was to pull off. Consequently the only pats that count are the ones the team gives itself, and those of its peers.
Management often rewards the teams that have a lot of self-induced obstacles to overcome, especially if the fail-team has to work longer, non-comp’ed evenings and weekends to produce an inferior result. It’s not what you know, but how hard it looked to accomplish, that gets the rewards.
I work as a software developer in television broadcast automation, and I have to say that the management of most of our customers are VERY aware of the risks of a live upgrade. The BBC would certainly be aware, there are strict rules for the maximum duration of black on air for television and silence on air for FM radio in the UK.
Most well planned live upgrades happen on Sunday nights. There is a dead zone of about 2-4 hours, in most countries, when you look at data for power consumption, telephone utilization, internet traffic. It is not a get out of jail free card, but it can help minimise perceived impact if there was a problem.
I just checked, and they chose to do their upgrade as follows: “We installed the new equipment overnight on 20 November 2015 and it went live across the country at 4:15am” – which in my book is a bit weird, since the 20th was a Friday. I would guess that they finalised the prep work on the Friday and the actual change occurred on Saturday morning at 4:15am? I would have delayed the change an additional 22 hours to be in that dead zone for major upgrades.
A well executed live update depends on having the right staff available as well, that might explain their unusual timing choice.
Implementing the change on Saturday morning gives them Sunday morning and “the dead zone” to fix any problems which weren’t serious enough to trigger a rollback on Saturday morning.
So which principles is this Justin Mitchell in charge of?
The main, primary, first ones.
He’s not in charge of principles, he’s engineering them. Sounds more like politics than engineering.
“When you do things right, people won’t be sure you’ve done anything at all.”
Yes, it’s a quote from Futurama.
That’s the best wisdom I have ever heard anywhere due to the universal truth of it.
Interesting how they used an off-the-shelf Digilent Zedboard development board. Given the low volume, it’s understandable of course. A Raspberry Pi, as suggested by some, would probably not work due to the parallel nature of the device (“It replaces the 3 data combiners (which combine the RDS data with the transmitter control information), the 6-channel audio coder, the CRC inserter, the CRC checker, the 6-channel audio decoder and the 3 data splitters (which separate the RDS data and transmitter control information). It also includes the NICAM test waveform generator.”)
Isn’t a FPGA more or less the ideal choice for what is essentially (part of) a software defined radio?
This is not SDR. It is compression. Processors (SoCs) with co-processors (GPUs) optimized to handle video and audio compression will almost always outperform what can be done with programmable logic (FPGA), most certainly when you try to do it on a soft core residing in programmable logic. FPGAs excel at direct and near-direct conversion SDR because much of what needs to be done lives in static state machines (combinational/sequential logic) and the I/O is very fast compared to a SoC.
An excellent job. However, the wiring inside the new unit doesn’t look professional at all.
:cough: http://www.darkroastedblend.com/2011/09/crazy-wiring-drb-series.html
Yeah, the radio listeners will have a hard time having to look at that ugly sight.
I love the front plate on those rack units. Very pretty.