When you think 1080p video, you probably don’t think STM32 microcontroller. And yet! [Gabriel Cséfalvay] has pulled off just that through the creative use of on-chip peripherals. Sort of.
The build is based around the STM32L4P5—far from the hottest chip in the world. Depending on the exact part you pick, it offers 512 KB or 1 MB of flash memory and 320 KB of SRAM, and runs at up to 120 MHz. Not bad, but not stellar.
Still, [Gabriel] was able to push 1080p at a sort of half resolution. The chip generates a 1080p widescreen RGB VGA signal, but to get around its limited RAM, [Gabriel] had to implement a hack: every pixel in RAM is rendered as a 2×2 block of output pixels to make up the full-sized display. At this stage, true 1080p looks achievable, but properly fitting it into memory will be a further challenge.
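Some back-of-the-envelope numbers (our arithmetic, not from the project writeup) show why the 2×2 trick helps, assuming a monochrome 1-bit-per-pixel framebuffer:

```python
# Framebuffer sizes at 1 bit per pixel (black/white only).
full_1080p = 1920 * 1080 // 8                 # bytes for a true 1080p buffer
half_res = (1920 // 2) * (1080 // 2) // 8     # 960x540, doubled 2x2 on output

print(f"full 1080p @ 1bpp: {full_1080p / 1024:.0f} KB")  # ~253 KB
print(f"960x540    @ 1bpp: {half_res / 1024:.0f} KB")    # ~63 KB
```

A full 253 KB buffer would technically squeeze into 320 KB of SRAM, but it leaves precious little headroom for the stack and working buffers—which fits the "achievable, but a challenge" framing above.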
Output hardware is minimal. One pin puts out the HSYNC signal, another handles VSYNC. The same pixel data is clocked out over R, G, and B signals, making all the pixels either white or black. Clocking out the data is handled by a nifty combination of the onboard DMA functionality and the OCTOSPI hardware. This enables the chip to hit the necessary data rate to generate such a high-resolution display.
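To see why a DMA-fed serializer is the right tool here, consider the data rate involved. Assuming standard 1080p60 timing (148.5 MHz pixel clock—the project may well use different or reduced-blanking timing), the numbers work out roughly like this:

```python
# Standard 1080p60 timing: 2200 x 1125 total pixels x 60 Hz (our assumption).
pixel_clock_hz = 148_500_000
bpp = 1                              # black/white only: 1 bit per pixel

# With 2x2 pixel doubling, each stored pixel spans two output clocks.
effective_rate = pixel_clock_hz / 2  # unique pixels per second
bytes_per_sec = effective_rate * bpp / 8

print(f"serializer rate: {effective_rate / 1e6:.2f} Mbit/s")
print(f"DMA fetch rate:  {bytes_per_sec / 1e6:.2f} MB/s")
```

Tens of megabits per second is far beyond what a 120 MHz core could bit-bang in software, but well within reach of a hardware serializer being fed by DMA in the background.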
There’s more work to be done, but it’s neat to see [Gabriel] get even this far with such limited hardware. We’ve seen others theorize similar feats on chips like the RP2040 in the Pi Pico, too. Video after the break.
So it’s 1920×1080 / 2 –> 960×540. Well, I sure hope they can do it.
I can do 800×600 –> that is, 592×600 with a single PIC24EP, along with individual fg/bg color per line.
The SPI output has a 2-pixel-wide delay between bytes, reducing it from 80 characters wide to 74, with an 8×12-pixel character.
https://youtu.be/bj2c58IDF0g
Reminds me of [Cliff Biffle]’s m4vga-rs: 800×600@60FPS on an STM32F4
I love seeing low spec hardware (only 120MHz ha) pushed to its limits. Looks like it has the bandwidth to push 1080p, but not enough RAM for the full video buffer.
I’m working on it, don’t worry. :)
Maybe try character- or tile-based rendering like the old 8-bit computers had. Might be able to squeeze some color out of it too.
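A rough sketch of the savings from a text mode, assuming a hypothetical 8×16 font at 1080p (all figures are our assumptions, not from the project):

```python
# Hypothetical 1920x1080 text mode with an 8x16 font.
cols, rows = 1920 // 8, 1080 // 16   # 240 x 67 character cells
screen_ram = cols * rows             # 1 byte per cell: character index
attr_ram = cols * rows               # 1 byte per cell: fg/bg color attribute
font_rom = 256 * 16                  # 256 glyphs x 16 bytes each

total = screen_ram + attr_ram + font_rom
print(f"{cols}x{rows} cells, ~{total / 1024:.0f} KB total")  # ~35 KB
```

That is a fraction of even the half-resolution bitmap, with per-cell color thrown in for free—exactly the trade-off the 8-bit machines made.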
This is a significant achievement; however, I would rather aim at something more usable, such as turning old microcontrollers into “graphics cards” for old VGA monitors, with on-board primitives and shapes, so that one could interface them with other controllers (example: Arduino, ESP*, etc.) to get high-res output without the load that rendering would otherwise impose on them.
Example: use “LEDs” on an old 1024×768 screen to show the state of GPIOs, or gauges for analog values, etc. The main controller would only ask to place LEDs, gauges, etc. at x/y positions, then send them values to show, with timed updates, without being forced to keep them in memory and redraw everything. This would also free valuable GPIO pins, as the entire communication could happen over I2C or a similar low-pin-count protocol. I’ve seen libraries doing that with small LCDs and OLED modules, but I don’t think they support PC monitors, which can be bought used for pennies or even gotten for free.
That’s a nice idea—I never thought of using a used screen!
Although you could argue that if you have a microcontroller powerful enough to output some basic values to a screen, you don’t need much more power to get an all-in-one solution…
The idea would be to build something usable across multiple platforms, without cluttering those platforms with graphics code that could be too heavy to execute, and without tying up pins that could be useful for other things.
Say I want to put a few gauges on a big monitor showing the values I’m sampling from an MCU’s analog inputs; I’d have to implement VGA output (or HDMI, but that isn’t cost-effective) using a few pins, then write the code to do it, which I’d have to change for every other MCU class I port the software to, and which would tax a chip that already has other work to do.
Now imagine I use I2C to send the controller a string like “G:0:1:120,40,100,80:L:0-100:250”, which means: “create gauge #0 using theme 1 (scale appearance) at X=120, Y=40, with width and height of 100 and 80 pixels respectively, a linear scale from 0 to 100, and an update interval of 250 ms”, then use a string like “G:0:n” to update it with value n. That would make things a lot simpler, and easily portable to other hardware.
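A command string like that could be parsed along these lines (the field layout is our reading of the example above, and the function and field names are made up for illustration):

```python
def parse_command(cmd: str) -> dict:
    """Parse a hypothetical gauge command: 'G:0:1:120,40,100,80:L:0-100:250'
    creates a gauge; the short form 'G:0:42' updates gauge 0 with value 42."""
    fields = cmd.split(":")
    if fields[0] != "G":
        raise ValueError(f"unknown command: {fields[0]}")
    gauge_id = int(fields[1])
    if len(fields) == 3:                       # short form: value update
        return {"type": "update", "id": gauge_id, "value": int(fields[2])}
    x, y, w, h = (int(v) for v in fields[3].split(","))
    lo, hi = (int(v) for v in fields[5].split("-"))
    return {"type": "create", "id": gauge_id, "theme": int(fields[2]),
            "x": x, "y": y, "w": w, "h": h,
            "scale": fields[4],                # 'L' = linear
            "range": (lo, hi), "update_ms": int(fields[6])}

print(parse_command("G:0:1:120,40,100,80:L:0-100:250"))
print(parse_command("G:0:42"))
```

The display controller would keep a small table of gauge definitions keyed by id and redraw only the widgets whose values change, which is what frees the host MCU from framebuffer work.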
In other words, we’d be creating a graphics card with drivers and graphics primitives.
The catch is that it needs two chips, but once one finds a very low-cost one that does only, say, I2C in and GPIO out (no ADC, no BT, no Wi-Fi, no audio, etc.), I think the benefits (portability, less load, fewer used pins) would easily outweigh the added cost.
For that you could just use a cheap SBC like a Pi Zero 2 and a display. It could drive the display much better and could still interface with the MCU over whatever digital interface you want.
That way you need no graphics code on the MCU, not even the commands you mentioned; all it would need to do is send the data, and a program on the SBC would handle parsing and displaying it.
This is a good idea. Maybe it could even have analog inputs.
Yes, but that function would be delegated to the other controller.
The point is that the dedicated graphics controller becomes, for all intents and purposes, a graphics card: it can be instructed to draw, for example, buttons, gauges, meters, oscilloscope-like areas, 7-segment digits, etc., then accept input, say via I2C, to update them accordingly. Not unlike PCs, where we use a GPU to draw graphics and a CPU to run software that talks to it.
The point is that by doing so, one chip works purely as a graphics controller, while, once we have a standard protocol to talk to it, we can use whatever device we want to put data on the screen—including more powerful SBCs, or ultra-small parts that could never have enough pins or computing power to draw graphics themselves (example: the smallest ATtinys).
Imagine an IPKVM out of an ESP32.