[Cliff] is pushing VGA video out of a microcontroller at 800×600 resolution and 60 frames per second. This microcontroller has no video hardware. Before we get to the technical overview, here’s the very impressive demo.
The microcontroller in question is the STM32F4, a fairly powerful ARM chip that we’ve seen put to use in some pretty interesting applications. We’ve seen 800×600 VGA on the STM32F4 before, with a circles and text demo and the Bitbox console. [Cliff]’s build is much more capable, though; he’s running 800×600 @ 60FPS with an underclocked CPU and most (90%) of the microcontroller’s resources free.
This isn’t just a demo, though; [Cliff] is writing up a complete tutorial for generating VGA on this chip. It begins with an introduction to pushing pixels, and soon he’ll have a walkthrough on timing and his rasterization framework.
Just because [Cliff] has gone through the trouble of putting together these tutorials doesn’t mean you can’t pull out an STM Discovery board and make your own microcontroller video hacks. [Cliff] has an entire graphics library to allow others to build snazzy video apps.
That’s amazing!
There’s something magical about doing this stuff with a microcontroller…
Actually, these are things that could be done with technology 40 years ago… so what’s so magical about it?
It’s true, there were a few devices in the mid seventies — mostly from Tektronix or Evans & Sutherland — that could do this sort of resolution.
The reason this is interesting, to me, is that those systems used specialized and expensive hardware. Today, we’ve gotten to the point where you can do this almost entirely in software with inexpensive components. The future is amazing. :-)
But only because of hardware features like DMA…much less so software.
Prat.
Working, series-produced liquid-fueled rockets were a reality in the ’40s. Amateurs doing the same on a smaller scale today is STILL FUCKING IMPRESSIVE.
And you have to be pretty stupid to think a microcontroller is comparable to extremely expensive dedicated electronics. But, you obviously are.
Generating a bitmap image doesn’t require much dedicated electronics. A couple of counters, and some memory, that’s it. Wozniak managed it in the 70’s with a handful of discrete parts.
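That counter scheme is easy to see in a quick simulation. Below is a hedged C sketch (my own illustration, not from the article): two chained counters walk the beam position, and video memory is only addressed while both counters sit in the active region. The counts happen to be the VESA 800×600 figures, but any mode works the same way.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch of the "couple of counters" approach, simulated
 * in software. Real 70s-era designs used discrete TTL counters clocked
 * at the pixel rate. */

#define H_ACTIVE 800
#define H_TOTAL  1056   /* active + front porch + sync + back porch */
#define V_ACTIVE 600
#define V_TOTAL  628

typedef struct { uint16_t x, y; } beam_t;

/* One pixel clock: the horizontal counter increments every tick;
 * the vertical counter increments when the horizontal one wraps. */
static void tick(beam_t *b) {
    if (++b->x == H_TOTAL) {
        b->x = 0;
        if (++b->y == V_TOTAL) b->y = 0;
    }
}

/* Memory is only addressed while both counters are in the active region. */
static bool in_active_video(const beam_t *b) {
    return b->x < H_ACTIVE && b->y < V_ACTIVE;
}
```

Everything outside the active region is blanking and sync, which is just a bit of decode logic on the same counters.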
That STM part would be a billion-dollar supercomputer in the 1970s. Of course you can do this if you put in the effort. Given the transistor count in the STM, I would say, no, it is not stupid to try to compare it to “dedicated” hardware of the 70s. BTW, the Tek and E&S stuff was nice – if you liked one frame drawn at a time, then the storage plate wiped and drawn again. The impressive units were from Vector General; they had monochrome and color, with color coming from changing the electron energy to penetrate to the R, G, and B phosphor layers (I used them at Boeing). Now, that was expensive, but the hardware was simple.
Cortex. Not ARM.
Prat, not pedant.
Cortex is ARM; ARM is the company that owns the copyright for the processors, and is also the name of the family of processors. Cortex is one of those processor cores. (http://www.wikiwand.com/en/ARM_Cortex-M)
“Pentium, not x86.”
Thanks for making this useless but technically correct point.
This is really awesome!
I would like to point out, to whoever is interested, that going up a level in this microcontroller line to the STM32F429 (and higher) gives you a full LCD controller and DMA accelerator. What this allows you to do is assign a piece of RAM (probably external, as you need quite a bit of it) as a frame buffer, and simply do all your drawing stuff in this frame buffer. The accelerator will take care of synchronizing the framebuffer and the actual screen all by itself. I ported an old fake phong effect I did way back when in Pascal over to the F429, and it works like a charm.
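The framebuffer approach the comment describes reduces all drawing to plain memory writes. Here’s a minimal, hedged C sketch of that idea (dimensions, RGB565 format, and function names are my own illustration, not ST’s API): software only ever touches the array, and the LCD controller scans it out on its own.

```c
#include <stdint.h>

/* Illustrative framebuffer sketch. On real F429 hardware this array
 * would typically live in external SDRAM, and its address would be
 * handed to the LCD controller's layer configuration; the peripheral
 * then refreshes the panel with no CPU involvement. */

#define FB_WIDTH  240
#define FB_HEIGHT 320

static uint16_t framebuffer[FB_HEIGHT][FB_WIDTH];  /* RGB565 */

static void put_pixel(int x, int y, uint16_t rgb565) {
    if (x >= 0 && x < FB_WIDTH && y >= 0 && y < FB_HEIGHT)
        framebuffer[y][x] = rgb565;
}

/* All "drawing" is just memory writes; the controller handles timing. */
static void fill_rect(int x0, int y0, int w, int h, uint16_t c) {
    for (int y = y0; y < y0 + h; y++)
        for (int x = x0; x < x0 + w; x++)
            put_pixel(x, y, c);
}
```

The appeal of this design is exactly what the comment says: effects written for a framebuffer (like an old Pascal phong demo) port over almost unchanged.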
The main issue, of course, is that VGA is ubiquitous and standard. LCD controllers can be very tricky to get going with any specific type of panel.
Ah, but you don’t have to choose one or the other! The LCD controller on the 42x/43x parts is really just a parallel byte-streaming circuit, and you can totally have it generate VGA video. Just program the timing to conform to a VESA mode and ignore the clock output. All you need is an external DAC — the seven-resistor R2R DAC I’m using would do nicely for 8-bit output.
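For reference, the VESA numbers being programmed here are fixed by the standard: 800×600@60 uses a 40 MHz pixel clock with 40/128/88 horizontal and 1/4/23 vertical front-porch/sync/back-porch intervals. The struct below is just an illustrative way to hold them (not an actual ST register layout):

```c
#include <stdint.h>

/* Standard VESA 800x600 @ 60 Hz timing. The struct layout is my own
 * illustration; the numbers themselves come from the VESA mode tables. */
typedef struct {
    uint32_t pixel_clock_hz;
    uint16_t h_active, h_front_porch, h_sync, h_back_porch;
    uint16_t v_active, v_front_porch, v_sync, v_back_porch;
} video_timing_t;

static const video_timing_t vesa_800x600_60 = {
    .pixel_clock_hz = 40000000,
    .h_active = 800, .h_front_porch = 40, .h_sync = 128, .h_back_porch = 88,
    .v_active = 600, .v_front_porch = 1,  .v_sync = 4,   .v_back_porch = 23,
};

/* Total counts per line and per frame, including blanking. */
static uint32_t h_total(const video_timing_t *t) {
    return t->h_active + t->h_front_porch + t->h_sync + t->h_back_porch;
}
static uint32_t v_total(const video_timing_t *t) {
    return t->v_active + t->v_front_porch + t->v_sync + t->v_back_porch;
}
```

That works out to 1056×628 total clocks per frame, so 40 MHz ÷ (1056 × 628) ≈ 60.3 Hz refresh.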
I actually discuss the 42x parts briefly in part 2 of my series, but it’s in a footnote, so you might have missed it: http://cliffle.com/article/2015/06/06/pushing-pixels/#fn4 In short, it seemed like a lot more fun to synthesize the video signals by hand. Among other things, it lets me generate complex graphics on the fly, during scanout, that I couldn’t fit in RAM for use with the LCD controller — even the larger RAM of the 42x series.
The 800×600 256-color flat-shaded polygons in the Glitch demo are a good example. That’d require around a megabyte of framebuffer if I were using the LCD controller.
(Of course, if you add external SDRAM, the sky’s the limit!)
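The framebuffer arithmetic behind that estimate is worth spelling out: 800×600 at one byte per pixel is 480,000 bytes, roughly half a megabyte, and doubling that (for double-buffering, or for a 16bpp format) lands near the megabyte mentioned above. A trivial helper makes the numbers explicit:

```c
#include <stdint.h>

/* Framebuffer size in bytes for a given mode. `buffers` covers
 * double-buffered setups, where two full frames live in RAM. */
static uint32_t fb_bytes(uint32_t width, uint32_t height,
                         uint32_t bytes_per_px, uint32_t buffers) {
    return width * height * bytes_per_px * buffers;
}
```

For example, 800×600 at 8bpp single-buffered is 480,000 bytes, and double-buffered it is 960,000 bytes; either way it dwarfs the on-chip SRAM of these parts.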
That is really cool, and I am very much going to try that and impress my colleagues :).
I did a phong-shaded (actually environment-mapped) 320-face torus on an 800×480 16 bpp screen. This does indeed require close to 800k of RAM. Luckily the proto-board I use includes a generous amount of external SRAM. It doesn’t run at full framerate though. More like 20fps or so. Still cool to get some of the classic demo effects that required a desktop PC in the past out of a microcontroller.
There have been some quite impressive VGA demos for the STM32F4 already in the past: https://www.youtube.com/watch?v=KsToQmFndpg and https://www.youtube.com/watch?v=ymGCeG9_6c0.
Totally, those are amazing. This is why I’d lose a demo competition. :-)
In that second video, Ubik was not used as directed.
The STM32F3 board never gets any love…
I wonder if these demos are using the floating point unit, that might be why…
Just what the doctor ordered! This is a fascinating demo. I think development boards for microcontrollers are just what is needed to resurrect the once prosperous demoscene; what currently remains is people making procedural demos on PC within artificial size limits. The hardware of these development boards is easy to obtain and provides a fair platform for everyone to show their skills at pushing constrained hardware beyond its perceived limits.
So let’s get this revival going! The demoscene’s heyday was before I existed, but looking back at what people did amazes me. This time it should just be done a bit differently: very few of those teams seem to have ever documented the discoveries they made. Instead, we should build on a new approach, very much like the modern open-source community; we should share our innovations to build a collective knowledge on which further improvements can be made. It could be like the scientific community: no more re-inventing the wheel, or in the words of George Santayana: “Those who cannot remember the past are condemned to repeat it”.
In one of the last chapters of the book “Just for Fun” by Linus Torvalds & David Diamond there is an explanation of how this sharing of innovation is a good thing; maybe someone else here can link to a more recent essay on the importance of sharing?
Just a small nitpick: the Bitbox console has indeed been capable of 800×600 with free CPU (see blog post: http://bitboxconsole.blogspot.com/2014/10/new-kernel-modes-including-800x60056hz.html). What’s hogging the CPU is generating the pixels, not outputting them :) I just went for 56Hz to reduce the speed a bit, and didn’t use the FSMC due to it not being available on my part (405@100pins).
That said, the demo is pretty impressive indeed, kudos for that :)
Bitbox is awesome, no offence intended. I don’t have anything resembling a tile graphics engine, for example.
The main difference here, technically, is that I’m running at 160MHz.
None taken at all!! I’ve seen your graphics lib on GitHub (which I hadn’t seen before!) and I’m impressed with the intricate level of knowledge you have on that part.
(Oh, it’s worth noting that I evaluated the FSMC and you wouldn’t have liked the results anyway — see part two of my series.)
The irony of YouTube’s compression mangling the detail in this video is strong.