Modern microcontrollers often have specs comparable to, or even exceeding, those of early gaming consoles. Where they tend to fall short, however, is in the video department, due to their lack of dedicated graphics hardware. With some nifty coding, though, great things can be achieved, as demonstrated by [TEC_IST]’s project that gets the RP2040 outputting 1080p video over HDMI.
The project builds on earlier work that saw the RP2040 outputting digital video over DVI. [TEC_IST] notes that those earlier methods already used up 30% of the chip’s processing power just to reach 320×240 output, so getting to 1080p would require a different tack. The idea is to use the 32-bit architecture of the RP2040 to push out data fast enough for the higher resolution: the chip can execute a 32-bit move instruction in a single clock cycle, which, across 30 GPIO pins, works out to a data rate of 3.99 Gbit/s at the stock 133 MHz clock speed. That’s more than enough for 1080p at 60 Hz with 24-bit color depth.
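For the curious, the arithmetic behind that claim checks out. Here’s a quick back-of-the-envelope sketch (our numbers, not code from the project):

```c
// Back-of-the-envelope check of the bandwidth claim in the article.
// Assumes the RP2040's stock 133 MHz system clock, 30 usable GPIOs,
// and counts only the active 1080p60 pixels at 24 bits each.
#include <stdio.h>

int main(void) {
    const double clock_hz   = 133e6;                    // stock RP2040 system clock
    const int    gpio_pins  = 30;                       // user-accessible GPIOs
    const double gpio_rate  = clock_hz * gpio_pins;     // one 32-bit write per cycle, 30 bits land on pins
    const double video_rate = 1920.0 * 1080 * 60 * 24;  // 1080p, 60 Hz, 24 bits per pixel

    printf("raw GPIO rate : %.2f Gbit/s\n", gpio_rate / 1e9);   // ~3.99 Gbit/s
    printf("1080p60 24bpp : %.2f Gbit/s\n", video_rate / 1e9);  // ~2.99 Gbit/s
    return 0;
}
```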
Due to the limitations of the chip, though, some extra hardware is required. [TEC_IST] has drawn up a design that relies on external RAM as a framebuffer, with shift registers and other supporting logic handling the job of dumping signals out over HDMI. That leaves the RP2040 to handle drawing new content, without having to redraw existing content every frame.
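To make the “only draw what changed” idea concrete, the drawing side might look something like the hypothetical sketch below, where write_ext_fb() is a made-up stand-in for whatever interface the final board exposes to its external framebuffer (our illustration of the idea, not code from the project):

```c
#include <stdint.h>

#define FB_WIDTH 1920  /* pixels per scanline in the external framebuffer */

/* Placeholder: on the real board this would push one 24-bit pixel over
 * whatever bus connects the RP2040 to the external framebuffer RAM. */
static void write_ext_fb(uint32_t pixel_index, uint32_t rgb888) {
    (void)pixel_index;
    (void)rgb888;
}

/* Update only a changed rectangle; everything else in the frame keeps
 * being scanned out by the external logic, untouched by the RP2040. */
void blit_rect(int x0, int y0, int w, int h, const uint32_t *pixels) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint32_t idx = (uint32_t)((y0 + y) * FB_WIDTH + (x0 + x));
            write_ext_fb(idx, pixels[y * w + x]);
        }
    }
}
```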
[TEC_IST] has shared the design for a potential 1080p HDMI output board for the RP2040 on GitHub and is inviting comment from the broader community. The design has yet to be built and tested, so it’s all theoretical at this stage. Obviously, a lot of heavy lifting is being done off-board, away from the microcontroller, but it’s still fun to think of such a humble chip doing such heavy-duty video output.
Neat, but clickbait. TL;DR: no.
Definitely quite suspicious. Why isn’t [TEC_IST] going the short distance himself instead of leaving it up to the community, or at least coming up with a very simple PoC that doesn’t need the DRAM, e.g. a simple procedurally rendered colour square, or anything… This is a thought experiment and a half-baked schematic, nothing more.
Besides, claiming a single-instruction 32-bit transfer is a far cry from actually making those bits appear on the output pins in that same time span.
The output part would (presumably) be handled by PIO.
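For what it’s worth, the usual pattern on the RP2040 is to let DMA keep a PIO state machine’s TX FIFO topped up, with the PIO program clocking the bits out on the pins. Roughly like this with the pico-sdk (a generic sketch, not anything from the project):

```c
#include "hardware/dma.h"
#include "hardware/pio.h"

// Stream a buffer of 32-bit pixel words into a PIO state machine's TX FIFO.
// In real code you'd claim the DMA channel once and chain transfers per line.
void stream_scanline(PIO pio, uint sm, const uint32_t *scanline, uint count) {
    int chan = dma_claim_unused_channel(true);
    dma_channel_config c = dma_channel_get_default_config(chan);
    channel_config_set_transfer_data_size(&c, DMA_SIZE_32);    // 32 bits per transfer
    channel_config_set_read_increment(&c, true);               // walk through the buffer
    channel_config_set_write_increment(&c, false);             // always write the TX FIFO
    channel_config_set_dreq(&c, pio_get_dreq(pio, sm, true));  // paced by the PIO's demand
    dma_channel_configure(chan, &c,
                          &pio->txf[sm],  // destination: PIO TX FIFO
                          scanline,       // source: pixel words in RAM
                          count,          // number of 32-bit words
                          true);          // start immediately
}
```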
“Modern microcontrollers”?
2023 nanocontrollers?
First Snapdragon 7+ Gen 2 tests show Qualcomm is a master of sustained performance now.
https://www.phonearena.com/news/snapdragon-7-gen-2-benhcmark-performance-tests_id146549
Snap what?
Since when is a Snapdragon 7 considered a microcontroller? You are aware of the difference between the ARM series (Cortex-A vs. Cortex-M), right?
Or I completely fail to see the relevance of your comment.
Why would you want 1080p from an underpowered MCU?
Because you did not want to sign an NDA for access to a chip that supports HDMI on an open hardware design. Now you can plonk down an RPi chip and a bit of RAM, and not give money to a company that won’t give you access to documentation unless you’re ordering 10K+ chips and signing away your ability to talk about what’s in that documentation. Your controller won’t be able to do HDCP, but given how broken HDCP is, it’s just wasting massive amounts of power on a global scale anyway.
That’s a very silly (and very expensive!) solution for that. If, for some reason, you require HDMI output, there are hardware driver chips that will take parallel video in and output HDMI for you.
E.g. the TFP410 from TI; Analog Devices has a similar IC, and there are many others.
Also, most small FPGAs will be able to do this, and there are open cores for HDMI encoding available.
Heck, there is even a ready-made solution using an RPi Zero for this – taking parallel data in and outputting HDMI using the RPi’s output. Granted, it’s not open hardware (the Broadcom SoC isn’t open), but everything else, including the software, is.
I guess it’s about pushing the limits of what’s possible with dirt-cheap hardware. It wasn’t many years ago that 1080p was unthinkable from a $1 MCU. Personally I think it’s fascinating, even though it’s not something I’d use.
“Underpowered MCU” is redundant. The main thing that distinguishes MCUs from SoCs is that MCUs are less powerful (and therefore cheaper, simpler, and more power-efficient).
Projects like this showcase how far we’ve come. Tasks that used to be reserved for powerful SoCs and desktop systems are now possible on dirt-cheap MCUs. In my opinion, MCUs are at the forefront of computing just as much as the most expensive graphics cards and desktop processors. While those things determine what your computer is capable of, MCUs define what kind of technology we can put into everything else – your medical implants, your light bulbs, and your birthday cards.
Some high-pin-count 32-bit PIC MCUs have dedicated hardware for driving displays, accelerating drawing, etc., though admittedly it’s not 1080p. And since it’s a peripheral, it works independently of the main core. It would at least be capable of period-correct graphics, i.e. 320×240 with 256 colors or even 15-bit color ;)
Wait, this is all based on being able to do a move instruction in one cycle. Isn’t that register-to-register? Anything memory-mapped is going to be a load or store instruction, which I’m pretty sure isn’t single-cycle. Plus, if all you’re doing is move instructions, that doesn’t leave room for the obviously required branch somewhere. So you’re just filling the entire RAM with “mov rx,ry” and hoping it does something sane when it hits the end of program RAM?
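For reference, the GPIO write the article is describing would be a store to the SIO GPIO_OUT register rather than a register-to-register mov, and SIO sits on the Cortex-M0+’s single-cycle I/O port. Whether the surrounding loop can actually sustain one word per clock is another question. A rough pico-sdk sketch of that inner loop, purely for illustration:

```c
// Purely illustrative (assumes the pico-sdk's sio_hw definition): writing a
// 32-bit word to all GPIOs at once is a store to the SIO GPIO_OUT register.
#include <stdint.h>
#include "hardware/structs/sio.h"

void blast_words(const uint32_t *words, unsigned count) {
    for (unsigned i = 0; i < count; i++) {
        // Each iteration is a load from RAM, a store to SIO, and loop
        // overhead, so the sustained rate depends on the whole loop,
        // not just the single store instruction.
        sio_hw->gpio_out = words[i];
    }
}
```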