ESP32 Clock Pushes Outrun Graphics Over Composite

We’ve covered plenty of clocks powered by the ESP32, but this one from [Marcio Teixeira] is really something special. Rather than driving a traditional physical display, the microcontroller is instead generating a composite video signal of an animated digital clock. This could be fed into whatever device you wish, but given the ’80s synthwave style it’s pumping out, you’ll probably want to find a suitably retro CRT to do it justice.

Specifically, this is a variant of the “Dali” clock, where each digit seems to melt and morph into its successor. Though his version doesn’t necessarily share code with the previous iterations, [Marcio] does credit the developers who have pulled off similar visual tricks going all the way back to 1979. Given the vintage of this particular animation, the neon skyline and infinite scrolling grid certainly feel like a perfect fit.

Want to add a little vaporwave vibe to your own workbench? Assuming you’ve already got an ’80s-style CRT, all you need is an ESP32 and two wires stuck into the composite video port. One goes to ground, and the other goes to the chip’s analog pin. Once everything is powered up, you’ll be able to configure the clock with a web-based interface. It doesn’t get much easier than that.
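For the curious, here’s roughly what that two-wire trick involves electrically. Below is a minimal, heavily simplified sketch of a single NTSC-style scanline, bit-banged with the Arduino-ESP32 dacWrite() call; GPIO 25 and the exact DAC levels are our assumptions, and a real driver like [bitluni]’s pushes samples out via I2S with DMA, since delayMicroseconds() is nowhere near precise enough for a stable picture.

    // Illustrative only: the voltage levels of one composite scanline,
    // bit-banged on the ESP32 DAC. Real implementations (e.g. [bitluni]'s)
    // use I2S + DMA for cycle-accurate timing; this won't produce a locked
    // picture, and vertical sync is omitted entirely.
    const int VIDEO_PIN = 25;        // DAC1, the signal wire; other wire to ground
    const uint8_t LEVEL_SYNC  = 0;   // ~0.0 V sync tip
    const uint8_t LEVEL_BLANK = 23;  // ~0.3 V blanking/black (0.3 / 3.3 * 255)
    const uint8_t LEVEL_WHITE = 77;  // ~1.0 V peak white

    void setup() {}

    void loop() {
      // One ~63.5 us NTSC scanline: sync, back porch, then the picture area.
      dacWrite(VIDEO_PIN, LEVEL_SYNC);
      delayMicroseconds(5);          // ~4.7 us horizontal sync pulse
      dacWrite(VIDEO_PIN, LEVEL_BLANK);
      delayMicroseconds(6);          // back porch
      dacWrite(VIDEO_PIN, LEVEL_WHITE);
      delayMicroseconds(52);         // active video: a solid white line
      dacWrite(VIDEO_PIN, LEVEL_BLANK);
    }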

In the documentation, [Marcio] calls out a few open source projects which were instrumental to getting his clock off the ground. The pioneering work [bitluni] did to get video out of the ESP32 is something of a given, but he also sends a hat tip to [rossumur] for his collection of 8-bit game console emulators written for the microcontroller. Projects like this are a fantastic example of what’s possible when a community works together to truly push the envelope.

Continue reading “ESP32 Clock Pushes Outrun Graphics Over Composite”

Super 8 Camera Brought Back To Life

The Super 8 camera, while groundbreaking for home movies in its time, is borderline unusable now. Even if you can get film for it (and afford its often enormous price), it still only records on 8 mm film, which isn’t exactly the best-quality stock around, not to mention that a good percentage of these cameras couldn’t even record audio. They were largely made obsolete by camcorders in the late ’80s and early ’90s, although some are still used for niche artistic purposes. If you’d rather not foot the bill for the film, though, you can still put one of these to work with the help of a Raspberry Pi.

[befinitiv] has a knack for repurposing antique analog equipment like this while preserving its aesthetic. While the bulk of the space inside this camera would normally house the film, that makes a perfect spot for a Raspberry Pi Zero, a rechargeable battery, and a power converter circuit, all in a 3D printed enclosure that snaps into the camera just as a film cartridge would have. It uses the Pi camera module but still makes use of the camera’s built-in optics, including the zoom function. [befinitiv] also wired in the original record button, so from the outside this looks like a completely unmodified Super 8 camera.
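[befinitiv]’s firmware isn’t reproduced here, but the record button idea is simple enough to sketch. Something like the following loop, using libgpiod and the stock libcamera-vid tool found on recent Raspberry Pi OS images, would capture a clip per button press; the GPIO number, clip length, and file paths are placeholders of our own invention.

    // Hypothetical sketch: poll the original trigger button and record a
    // fixed-length clip per press. GPIO 17 and the paths are assumptions,
    // not [befinitiv]'s actual design. Build with: g++ record.cpp -lgpiod
    #include <gpiod.h>
    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>

    int main() {
      gpiod_chip *chip = gpiod_chip_open_by_name("gpiochip0");
      if (!chip) return 1;
      gpiod_line *btn = gpiod_chip_get_line(chip, 17);   // assumed button pin
      if (gpiod_line_request_input(btn, "super8") < 0) return 1;

      for (int clip = 0;; usleep(20000)) {               // 20 ms poll interval
        if (gpiod_line_get_value(btn) == 0) {            // active-low press
          char cmd[128];
          // Grab 10 seconds of H.264 with the stock libcamera tool.
          snprintf(cmd, sizeof cmd,
                   "libcamera-vid -t 10000 -o /home/pi/clip%03d.h264", clip++);
          std::system(cmd);
        }
      }
    }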

The camera can connect to a WiFi network and stream live video to a computer, or record video files to an internal SD card. As a bonus, thanks to the power converter circuit, it’s also capable of charging a cell phone. [befinitiv] notes that many of the aesthetic properties of 8 mm film seem to be preserved when shooting this way, and he has several theories as to why, but no definitive answer. If you’d like to take a look at some of his other projects like this, check out this analog camera that is now able to take digital pictures.

Continue reading “Super 8 Camera Brought Back To Life”

Mastering Stop Motion Through Machine Learning

Stop motion animation is notoriously difficult to pull off well, in large part because it’s a mind-numbingly slow process. Each frame in the final video is a separate photograph, and for each one of those, the characters and props need to be moved the appropriate amount so that the final result looks smooth. You don’t even want to know how long Ben Wyatt spent working on Requiem for a Tuesday, though to be fair, it might still get done before the next Avatar.

But [Nick Bild] thinks his latest project might be able to improve on the classic technique with a dash of artificial intelligence provided by a Jetson Xavier NX. Basically, the Jetson watches the live feed from the camera, and using a hand pose detection model, waits until there’s no human hand in the frame. Once the coast is clear, it takes a shot and then goes back to waiting for the next hands-free opportunity. With the photographs being taken automatically, you’re free to focus on getting your characters moving around in a convincing way.
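[Nick]’s exact implementation isn’t shown here, but the gating logic itself is just a little state machine. Here’s a hedged sketch in C++ with OpenCV, with the actual inference stubbed out as a hypothetical hand_in_frame() function (on the Jetson this would be a call into the hand pose model); the frame counts are guesses on our part.

    // Sketch of the capture gating: only take a photo once the scene has been
    // hand-free for a moment, then wait for a hand to return before re-arming.
    // hand_in_frame() is a stub standing in for the real hand pose model.
    #include <opencv2/opencv.hpp>

    bool hand_in_frame(const cv::Mat &frame) {
      (void)frame;
      return false;  // placeholder: swap in real hand pose inference here
    }

    int main() {
      cv::VideoCapture cap(0);
      cv::Mat frame;
      int clear_frames = 0, shot = 0;
      bool armed = true;

      while (cap.read(frame)) {
        if (hand_in_frame(frame)) {
          clear_frames = 0;
          armed = true;                 // animator is adjusting the scene
        } else if (armed && ++clear_frames >= 15) {  // ~0.5 s clear at 30 FPS
          cv::imwrite(cv::format("frame_%04d.png", shot++), frame);
          armed = false;                // one photo per adjustment
        }
      }
      return 0;
    }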

If it’s still not clicking for you, check out the video below. [Nick] first shows the raw unedited video, which primarily consists of him moving three LEGO figures around, and then the final product produced by his system. All the images of him fiddling with the scene have been automatically trimmed, leaving behind a short animated clip of the characters moving on their own.

Now don’t be fooled, it’s still going to take a while. By our count, it took two solid minutes of moving Minifigs around to produce just a few seconds of animation. So while we can say it’s a quicker pace than traditional stop motion production, it certainly isn’t fast.

Machine learning isn’t the only modern technology that can simplify stop motion production. We’ve seen a few examples of using 3D printed objects instead of manually adjusted figures. It still takes a long time to print, and of course it eats up a ton of filament, but the mechanical precision of the printed scenes makes for a very clean final result.

Continue reading “Mastering Stop Motion Through Machine Learning”

Save That Old VGA Monitor From The Trash

It’s been quite a while since any of us unpacked a brand-new VGA monitor, but since so many machines can still drive one, even if through an inexpensive adaptor, they still find a use. With so many old VGA flat panel monitors being tossed away, they even come at the low, low price of free, which can’t be argued with. CNXSoft’s [Jean-Luc Aufranc] was tasked with fixing a dead one, and wrote an account of his progress.

Seasoned readers will no doubt guess where this story leads: when he cracked it open and exposed the PSU board, there was the tell-tale puffiness of a failed electrolytic capacitor. For relative pennies a replacement was secured, and the monitor was fixed. As repair hacks go it’s a straightforward one, but still worth noting, because a free monitor is a free monitor.

We called the demise of VGA back in 2016, and have seen no reason to go back on that. But for those of us left with a few legacy monitors it’s worth remembering that DVI and thus the DVI compatibility mode of HDMI is little more than a digitised version of the R, G, and B channels you’d find on that trusty blue connector. Maybe that little dongle doesn’t make such a bad purchase, and of course you can also use it as an SDR if you want.

Creating Video From A ROM

We’re used to computers with display screens, yet how many of us have created the circuitry to drive one directly? Sure, we’ve coded up an SPI display driver on a microcontroller, but create the hardware to generate a usable video signal? That’s a little more difficult. [Jdh] has given it a go though, with a TTL video card.

In this case it’s not a card so much as a collection of breadboards, but all the logic is there to generate the complex array of video timings necessary for synchronisation, and to output the bits sequentially at the right voltage levels for the analogue monitor. It’s worth pointing out though that it’s not a composite video signal being created, since it’s monochrome only with no subcarrier.
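The timing logic itself is easier to appreciate in pseudo-hardware form. The sketch below mirrors what a TTL counter-and-comparator chain computes on every pixel clock, using the standard 640×480@60 figures as stand-ins; [Jdh]’s actual line counts and porch widths may well differ.

    // What the counter chain works out each pixel clock, with standard
    // 640x480@60 VGA-style numbers as stand-ins for [Jdh]'s actual timings.
    #include <cstdint>
    #include <cstdio>

    // Horizontal: active, front porch, sync, back porch (in pixels); 800 total.
    constexpr int H_ACTIVE = 640, H_FP = 16, H_SYNC = 96, H_BP = 48;
    // Vertical: the same structure, measured in lines; 525 total.
    constexpr int V_ACTIVE = 480, V_FP = 10, V_SYNC = 2, V_BP = 33;

    struct VideoState {
      bool hsync, vsync, visible;
      uint32_t rom_addr;  // where the pixel ROM gets read from
    };

    VideoState tick(int h, int v) {  // h and v are the two counter outputs
      VideoState s;
      s.hsync   = h >= H_ACTIVE + H_FP && h < H_ACTIVE + H_FP + H_SYNC;
      s.vsync   = v >= V_ACTIVE + V_FP && v < V_ACTIVE + V_FP + V_SYNC;
      s.visible = h < H_ACTIVE && v < V_ACTIVE;
      s.rom_addr = s.visible ? uint32_t(v) * H_ACTIVE + uint32_t(h) : 0;
      return s;
    }

    int main() {
      VideoState s = tick(700, 100);  // during the horizontal sync pulse
      std::printf("hsync=%d visible=%d\n", s.hsync, s.visible);
    }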

In the end he encounters the problem that his ROM isn’t fast enough for the pixel rate and thus the image has artefacts, but it does at least produce a recognisable and readable something on the screen. Old hands in the video business might point out that analogue TVs were a bit forgiving when it came to exact timings and line counts so the circuit could quite possibly be simplified, and also that trading away some of the resolution might fix the ROM speed issue. But it’s an impressive piece of work, and should be of particular interest for anyone interested in how video works.

Fans of video cards on breadboards should also check out [Ben Eater]’s 7400-series video card.

Continue reading “Creating Video From A ROM”

ESP32 Video Input Using I2S

Computer engineering student [sherwin-dc] had a rover project which required streaming video through an ESP32 to be accessed by a web server. He couldn’t find documentation for the standard camera interface of the ESP32, but even if he had, that approach used too many I/O pins. Instead, [sherwin-dc] decided to shoe-horn the video into an I2S stream. It helped that he had access to an Altera MAX 10 FPGA to process the video signal from the camera. He did succeed, but it took a lot of experimenting to work around the limited resources of the ESP32. Ultimately [sherwin-dc] settled on QVGA resolution of 320×240 pixels at 8 bits per pixel, which means each frame uses just 77 KB of precious ESP32 RAM.

His design uses a 2.5 MHz SCK, which equates to about four frames per second. But he notes that with higher SCK rates in the tens of MHz, the frame rate could be significantly higher — in theory. But considering other system processing, the ESP32 can’t even keep up with four FPS. In the end, he was lucky to get 0.5 FPS throughput, but that was adequate for purposes of controlling the rover (see animated GIF below the break). That said, if you had a more powerful processor in your design, this technique might be of interest. [Sherwin-dc] notes that the standard camera drivers for the ESP32 use I2S under the hood, so the concept isn’t crazy.
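To put numbers on that, here’s the back-of-envelope arithmetic along with a generic receive loop using the classic (pre-5.0) ESP-IDF I2S driver. The driver setup is assumed rather than copied from [sherwin-dc]’s code:

    // Frame-size and frame-rate arithmetic from the article, plus a generic
    // read loop. Assumes i2s_driver_install() was already called with a
    // slave-RX config to match the FPGA; details differ from the real project.
    #include "driver/i2s.h"

    constexpr int W = 320, H = 240;                     // QVGA, 8 bits/pixel
    constexpr size_t FRAME_BYTES = W * H;               // 76,800 B: the ~77 KB
    constexpr double SCK_HZ = 2.5e6;                    // one data bit per SCK
    constexpr double THEORETICAL_FPS =
        SCK_HZ / (FRAME_BYTES * 8.0);                   // ~4.07 frames/s

    static uint8_t frame[FRAME_BYTES];                  // the 77 KB RAM bite

    void capture_one_frame() {
      size_t done = 0, got = 0;
      while (done < FRAME_BYTES) {
        i2s_read(I2S_NUM_0, frame + done, FRAME_BYTES - done,
                 &got, portMAX_DELAY);
        done += got;
      }
    }

At 2.5 Mbit/s, 320 × 240 × 8 bits works out to just over four frames per second on paper, exactly the figure quoted above, which shows the observed 0.5 FPS is down to processing overhead rather than the bus itself.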

We’ve covered several articles about generating video over I2S before, including this piece from back in 2019. Have you ever commandeered a protocol for “off-label” use?

Continue reading “ESP32 Video Input Using I2S”

Incredibly Slow Films, Now Playing In Dazzling Color

Back in 2018 we covered a project that would break a video down into its individual frames and slowly cycle through them on an e-paper screen. With a new image pushed out every three minutes or so, it would take thousands of hours to “watch” a feature-length film. Of course, that was never the point. The idea was to turn your favorite movie into an artistic conversation piece: a constantly evolving portrait you could hang on the wall.

[Manuel Tosone] was recently inspired to build his own version of this concept, and thanks to several years of e-paper development since then, he was even able to do it in color. Ever the perfectionist, he decided to drive the seven-color 5.65-inch Waveshare panel with a custom STM32 board that he estimates can wring nearly 300 days of runtime out of six standard AA batteries, and to wrap everything up in a very professional-looking 3D printed enclosure. The end result is a one-of-a-kind Video Frame that any hacker would be proud to display on their mantel.
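Those battery numbers pass a quick sanity check, too. Assuming roughly 2000 mAh of usable capacity from an alkaline AA (our figure, not [Manuel]’s), the average current budget works out to a fraction of a milliamp:

    // Back-of-envelope runtime check with assumed numbers; the genuine
    // power analysis lives on [Manuel]'s project page.
    #include <cstdio>

    int main() {
      const double capacity_mah = 2000.0;  // assumed usable alkaline AA capacity
      const double target_days  = 300.0;
      // Assumes the six cells sit in series, so pack capacity stays ~2000 mAh.
      const double budget_ma = capacity_mah / (target_days * 24.0);
      std::printf("average current budget: %.2f mA\n", budget_ma);  // ~0.28 mA
      return 0;
    }

An average draw under 0.3 mA is only plausible if the STM32 spends nearly all of its time in deep sleep, waking just long enough to refresh the panel.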

The Hackaday.IO page for this project contains a meticulously curated collection of information, covering everything from the ffmpeg commands used to process the video file into a directory full of cropped and enhanced images, to flash memory lifetime estimates and energy consumption analyses. If you’ve ever considered setting up an e-paper display that needs to run for long stretches of time, regardless of what’s actually being shown on the screen, there’s an excellent chance that you’ll find some useful nuggets in the fantastic documentation [Manuel] has provided.

We always love to hear about people being inspired by a project they saw on Hackaday, especially when we get to bring things full circle and feature their own take on the idea. Who knows, perhaps the next version of the e-paper video frame to grace these pages will be your own.

Continue reading “Incredibly Slow Films, Now Playing In Dazzling Color”