ESP32 Video Input Using I2S

Computer engineering student [sherwin-dc] had a rover project that required streaming video through an ESP32 so it could be accessed by a web server. He couldn’t find documentation for the ESP32’s standard camera interface, but even if he had, that approach used too many I/O pins. Instead, [sherwin-dc] decided to shoehorn video into an I2S stream. It helped that he had access to an Altera MAX 10 FPGA to process the video signal from the camera. He did succeed, but it took a lot of experimenting to work around the limited resources of the ESP32. Ultimately [sherwin-dc] settled on QVGA resolution of 320×240 pixels at 8 bits per pixel, which means each frame takes up just 77 KB of precious ESP32 RAM.

His design uses a 2.5 MHz SCK, which works out to about four frames per second. He notes that with SCK rates in the tens of MHz, the frame rate could in theory be significantly higher. In practice, with everything else it has to do, the ESP32 can’t even keep up with four FPS; in the end he was lucky to get 0.5 FPS of throughput, but that was adequate for controlling the rover (see the animated GIF below the break). That said, if you had a more powerful processor in your design, this technique might be of interest. [Sherwin-dc] notes that the standard camera drivers for the ESP32 use I2S under the hood, so the concept isn’t crazy.
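
As a quick sanity check on those numbers, here’s a minimal back-of-the-envelope sketch in Python, using only the resolution and clock figures quoted above:

```python
# Back-of-the-envelope numbers for QVGA video squeezed through I2S.
WIDTH, HEIGHT = 320, 240   # QVGA
BITS_PER_PIXEL = 8
SCK_HZ = 2.5e6             # I2S bit clock from the write-up

frame_bits = WIDTH * HEIGHT * BITS_PER_PIXEL
frame_bytes = frame_bits // 8

print(f"Frame size: {frame_bytes} bytes (~{frame_bytes / 1000:.0f} kB)")
print(f"Theoretical frame rate: {SCK_HZ / frame_bits:.2f} fps")  # ~4.07
```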

We’ve covered several articles about generating video over I2S before, including this piece from back in 2019. Have you ever commandeered a protocol for “off-label” use?

Continue reading “ESP32 Video Input Using I2S”

Incredibly Slow Films, Now Playing In Dazzling Color

Back in 2018 we covered a project that would break a video down into its individual frames and slowly cycle through them on an e-paper screen. With a new image pushed out every three minutes or so, it would take thousands of hours to “watch” a feature length film. Of course, that was never the point. The idea was to turn your favorite movie into an artistic conversation piece; a constantly evolving portrait you could hang on the wall.

[Manuel Tosone] was recently inspired to build his own version of this concept, and now, thanks to several years of e-paper development, he was even able to do it in color. Ever the perfectionist, he decided to drive the seven-color 5.65 inch Waveshare panel with a custom STM32 board that he estimates can wring nearly 300 days of runtime out of six standard AA batteries, and to wrap everything up in a very professional-looking 3D printed enclosure. The end result is a one-of-a-kind video frame that any hacker would be proud to display on their mantel.
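
Out of curiosity, that runtime claim survives a back-of-the-envelope check. This sketch assumes roughly 2000 mAh of usable capacity per alkaline AA and a two-banks-of-three cell arrangement; those are our assumptions, not figures from [Manuel]’s write-up:

```python
# Very rough average-current budget for ~300 days on six AA cells.
# ASSUMPTIONS: ~2000 mAh usable per alkaline AA, wired as two parallel
# banks of three in series (so ~4000 mAh at ~4.5 V nominal). The real
# power design is detailed in [Manuel]'s project logs.
CAPACITY_MAH = 2 * 2000    # two parallel banks
TARGET_DAYS = 300

avg_current_ma = CAPACITY_MAH / (TARGET_DAYS * 24)
print(f"Average draw budget: {avg_current_ma:.2f} mA")  # ~0.56 mA
```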

The Hackaday.IO page for this project contains a meticulously curated collection of information, covering everything from the ffmpeg commands used to process the video file into a directory full of cropped and enhanced images, to flash memory lifetime estimates and energy consumption analyses. If you’ve ever considered setting up an e-paper display that needs to run for long stretches of time, regardless of what’s actually being shown on the screen, there’s an excellent chance that you’ll find some useful nuggets in the fantastic documentation [Manuel] has provided.
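
The real ffmpeg commands live on the project page, but to give a flavor of the kind of pipeline involved, here’s a hypothetical invocation that pulls one frame every three minutes and scales/crops it for the panel’s 600×448 resolution. Every parameter below is illustrative, not [Manuel]’s:

```python
# Hypothetical ffmpeg pipeline: grab one frame every 180 seconds,
# scale to cover 600x448, then center-crop to the panel size.
# The real commands (and colour-enhancement steps) are on Hackaday.IO.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "movie.mp4",
    "-vf", "fps=1/180,"
           "scale=600:448:force_original_aspect_ratio=increase,"
           "crop=600:448",
    "frames/%05d.png",
], check=True)
```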

We always love to hear about people being inspired by a project they saw on Hackaday, especially when we get to bring things full circle and feature their own take on the idea. Who knows, perhaps the next version of the e-paper video frame to grace these pages will be your own.

Continue reading “Incredibly Slow Films, Now Playing In Dazzling Color”

Cablecam Is An Exercise In System Integration

Drones have become the standard for moving aerial camera platforms, but another option that sees use in the professional world is the cable camera. As an exercise in integrating mechanics, electronics, and software, [maxipalay] created his own Cablecam.

Cablecam is built around a pair of machined wood plates, with some pulleys and motor reduction gearing between them. A brushless hobby motor, driven by a drone ESC, moves the platform along the rope or cable. Since the ESC doesn’t have a reverse function, [maxipalay] used four relays controlled by an Arduino to swap two of the motor wires and reverse direction, as sketched below. The main onboard controller is a Raspberry Pi, connected to a camera module mounted on a two-axis gimbal for stabilization. A GPS module was also added for positioning information on long cables.
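
The actual relay juggling happens on the Arduino, but the swap logic is simple enough to sketch. Here’s a hypothetical Python version using RPi.GPIO, with made-up pin assignments; swapping any two of the three phase wires is what reverses a brushless motor:

```python
# Sketch of the relay trick: four relays wired so that energizing one
# pair swaps two of the motor phase wires, reversing a brushless motor
# behind an ESC that has no reverse mode. Pins are hypothetical.
import time
import RPi.GPIO as GPIO

FORWARD_RELAYS = (5, 6)     # relay pair for the normal wiring
REVERSE_RELAYS = (13, 19)   # relay pair that swaps two phases

GPIO.setmode(GPIO.BCM)
for pin in FORWARD_RELAYS + REVERSE_RELAYS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def set_direction(forward: bool):
    # Open everything first so the relays never switch under load
    # (and the two pairs never conduct at the same time).
    for pin in FORWARD_RELAYS + REVERSE_RELAYS:
        GPIO.output(pin, GPIO.LOW)
    time.sleep(0.1)  # let the ESC see zero throttle, contacts settle
    for pin in (FORWARD_RELAYS if forward else REVERSE_RELAYS):
        GPIO.output(pin, GPIO.HIGH)
```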

The base station is built around an Nvidia Jetson Nano connected to a 7″ screen mounted in a plastic case. Video, telemetry, and control signals are communicated using the open-source Wifibroadcast protocol. This uses off-the-shelf WiFi hardware in connectionless mode to broadcast UDP packets, avoiding the lengthy WiFi reconnection process every time a connection drops out. The motion of Cablecam can be controlled manually with a potentiometer on the control station, or automatically, using the machine vision capabilities of the Jetson to track and follow people.
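
Wifibroadcast itself does clever things with the WiFi hardware, but the connectionless, fire-and-forget idea is easy to demonstrate with plain UDP broadcast. A toy Python sender, with the port number and payload invented for illustration:

```python
# Toy illustration of connectionless one-way telemetry: fire-and-forget
# UDP broadcast, with no association or reconnection handshake to stall
# on. A receiver that misses packets just picks up at the next one.
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

seq = 0
while True:
    sock.sendto(f"telemetry {seq}".encode(), ("255.255.255.255", 5600))
    seq += 1
    time.sleep(0.02)  # ~50 packets per second
```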

We’ve seen several cable robots over the years, including a solar-powered sensor platform that resembles a sloth.

Avoid Awkward Video Conference Situations With PIR And Arduino

Working from home with regular video meetings has its challenges, especially if you add kids to the mix. To help avoid embarrassing situations, [Charitha Jayaweera] created Present!, a USB device that automatically turns off your camera and microphone if you suddenly need to leave your computer to maintain domestic order.

Present consists of just a PIR sensor and an Arduino in a 3D printed enclosure that snaps onto your monitor. When the PIR sensor no longer detects someone in range, it sends a notification over serial to a Python script running on the PC, which switches off the camera and microphone in Zoom (or another app). It can optionally turn these back on when you are seated again. The cheap HC-SR501 PIR module’s range can also be adjusted with a trimpot for your specific scenario. It should also be possible to shrink the device to the size of the PIR module with a small custom PCB or one of the many tiny Arduino-compatible dev boards.
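
The project’s own source is linked from its page; as a sketch of how the PC side of such a scheme can work, here’s a hypothetical Python script that watches the serial port and fires Zoom’s global hotkeys. The port name, baud rate, and message strings are all assumptions:

```python
# Hypothetical PC-side helper: watch the Arduino's serial messages and
# toggle Zoom's mic/camera via its global hotkeys (Alt+A / Alt+V on
# Windows/Linux -- check your Zoom keyboard-shortcut settings).
import serial       # pip install pyserial
import pyautogui    # pip install pyautogui

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port/baud

def toggle_av():
    pyautogui.hotkey("alt", "a")  # toggle microphone
    pyautogui.hotkey("alt", "v")  # toggle camera

present = True
while True:
    line = port.readline().decode(errors="ignore").strip()
    if line == "AWAY" and present:        # assumed message format
        toggle_av()                       # mute on the way out...
        present = False
    elif line == "PRESENT" and not present:
        toggle_av()                       # ...and unmute on return
        present = True
```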

For quick manual muting, check out the giant 3D printed mute button. Present was an entry into the Work from Home Challenge, part of the 2021 Hackaday Prize.

Finding Fractals In The 1930’s

The mesmerizing thing about fractals is that so much visual complexity arises from such simple equations. [CodeParade] set out to show just how simple by creating fractals with technology from the 1930s. The basic idea relies on projectors and cameras, which were both readily available and widely used in television by then (CRT projectors were in theaters by 1938, though they weren’t in color until the 1950s).

By projecting two overlapping images on the wall, pointing a camera at the result, and feeding that camera back into the projectors, you get some beautiful fractals. [CodeParade] doesn’t have a projector, much less two, so he did what any hacker might do and came up with a clever workaround: a simple app that “projects” onto his monitor, with an external webcam pointed at the screen. The resulting analog fractals are quite beautiful and tactile. Rather than tweaking a variable and recompiling, you simply add a finger or move the camera to introduce new noise that quickly becomes signal.
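
You can also simulate the optical feedback loop digitally in a few lines: repeatedly composite two shrunken, rotated copies of the frame back onto itself, just as two overlapping projectors pointed at their own camera output would. A minimal Pillow sketch, with the transform parameters picked arbitrarily:

```python
# Digital stand-in for the projector/camera loop: each iteration, the
# "camera" frame is replaced by two scaled/rotated copies of itself
# composited together -- an iterated function system in disguise.
from PIL import Image, ImageChops

SIZE = 512
frame = Image.new("L", (SIZE, SIZE), 255)  # seed: a fully lit "wall"

def projected(img, scale, angle, offset):
    # One "projector": shrink, rotate, and shift a copy of the frame.
    copy = img.resize((int(SIZE * scale),) * 2).rotate(angle)
    canvas = Image.new("L", (SIZE, SIZE), 0)
    canvas.paste(copy, offset)
    return canvas

for _ in range(30):
    a = projected(frame, 0.6, 15, (0, 100))     # projector #1
    b = projected(frame, 0.6, -15, (200, 100))  # projector #2
    frame = ImageChops.lighter(a, b)            # light adds up on the wall

frame.save("fractal.png")
```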

Better yet, there’s a web version that you can play around with right now. For more fractals implemented in hardware rather than software, there’s this FPGA with a VHDL Mandelbrot set we covered.

Continue reading “Finding Fractals In The 1930’s”

This Horrifying Robot Is Here To Teach You A Lesson

No, despite what it might look like, this isn’t some early Halloween project. The creepy creation before you is actually a tongue-in-cheek “robot” created by the prolific [Nick Bild], a topical statement about companies asking their remote workers to come back into the office now that COVID-19 restrictions are being lifted. Why commute every day when this ultra realistic avatar can sit in for you?

OK, so maybe it’s not the most impressive humanoid creation to ever grace the pages of Hackaday. But if you’re looking to spin up a simple telepresence system, you could do worse than browsing through the Python source code [Nick] has provided. Using a Raspberry Pi 4, a webcam, and a microphone, his client-server architecture combines everything the bot sees and hears into a simple page that can be remotely accessed with a web browser.
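
For a sense of how little code it takes to get a webcam feed into a browser, here’s a minimal Flask/OpenCV sketch of the classic MJPEG-over-HTTP trick. This shows the general technique, not [Nick]’s implementation:

```python
# Minimal MJPEG-over-HTTP camera feed, viewable in any browser.
# A sketch of the general technique, not [Nick]'s actual server.
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # first attached webcam

def frames():
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        _, jpeg = cv2.imencode(".jpg", frame)
        # multipart/x-mixed-replace: the browser keeps redrawing the image
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
               + jpeg.tobytes() + b"\r\n")

@app.route("/")
def stream():
    return Response(frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

app.run(host="0.0.0.0", port=8000)
```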

Naturally this work from home (WFH) bot wouldn’t be much good if it was just a one-way street, so [Nick] has also added a loudspeaker that replays whatever he says on the client side. To prevent a feedback loop, his software includes a function that toggles which direction the audio stream goes in by passing the appropriate commands to the bot over SSH; a neat trick to keep in mind for your own, less nightmarish, creations.
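
The half-duplex trick generalizes well: silence the far side’s speaker before opening the near side’s microphone. A hypothetical sketch of the client side, with the host name and remote commands invented for illustration:

```python
# Hypothetical half-duplex switch: before the client starts talking,
# stop the bot's playback over SSH so the bot's microphone can't pick
# up its own loudspeaker. Host and service names are made up.
import subprocess

BOT = "pi@wfh-bot.local"

def set_talking(client_is_talking: bool):
    # Only one direction is active at a time: speaker OR mic stream.
    remote_cmd = (
        "systemctl --user stop bot-mic && systemctl --user start bot-speaker"
        if client_is_talking else
        "systemctl --user stop bot-speaker && systemctl --user start bot-mic"
    )
    subprocess.run(["ssh", BOT, remote_cmd], check=True)
```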

If you’re looking for something a bit more capable and have some cardboard lying around, this DIY telepresence mount for your phone might be a good place to start.

Continue reading “This Horrifying Robot Is Here To Teach You A Lesson”

Video De-shaker Software Measures Linear Rail Quality

Here’s an interesting experiment that attempts to measure the quality of a linear rail using a form of visual odometry: mount a camera on the rail, record video as the camera travels its length, and analyze the footage with open-source software normally used to stabilize shaky video. No linear rail is perfect, and any imperfection should make the recorded image sway by a proportional amount, offering a way to characterize the rail’s quality.

To test this idea, [Saulius] attached a high-definition camera to a linear rail, pointed it at a high-contrast textured pattern (making the resulting video easier to analyze), and recorded footage while moving the camera along the rail at a fixed speed. The video then gets fed into the Deshaker plugin for VirtualDub; the important output is the deshaker.log file, which contains the X, Y, rotate, and zoom correction values required to stabilize the video. [Saulius] used these values to create a graph characterizing the linear rail’s quality.
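
Turning that log into a graph is a short script. Here’s a sketch that assumes whitespace-separated columns of frame number, X shift, Y shift, rotation, and zoom; check the header of your own deshaker.log, since the exact layout varies between Deshaker versions:

```python
# Sketch: accumulate Deshaker's per-frame corrections into a rail profile.
# ASSUMED columns: frame, dx, dy, rotation, zoom (verify against your log).
import matplotlib.pyplot as plt

xs, ys = [0.0], [0.0]
with open("deshaker.log") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 5:
            continue  # skip headers / short lines
        try:
            dx, dy = float(parts[1]), float(parts[2])
        except ValueError:
            continue  # skip non-numeric rows
        # Per-frame corrections are relative; integrate to get the path.
        xs.append(xs[-1] + dx)
        ys.append(ys[-1] + dy)

plt.plot(xs, label="horizontal drift (px)")
plt.plot(ys, label="vertical drift (px)")
plt.xlabel("frame")
plt.ylabel("cumulative correction (px)")
plt.legend()
plt.show()
```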

It’s a clever proof of concept, especially in how it uses no special tools and leverages a video stabilizing algorithm in an unusual way. However, the results aren’t exactly easy to turn into concrete, real-world measurements. Turning image results into micrometers is a matter of counting pixels, and for this task video stabilization is an imperfect tool, since the algorithm prioritizes visual results over absolute measurements. Still, it’s an interesting experiment, and perfectly capable of measuring rail quality in a relative sense. We can’t help but be a bit curious about how it would profile something like these cardboard CNC modules.