Bleep Remover Censors Those **** Bleeps

One of the more interesting cultural phenomena is the ‘bleep’ that replaces certain words in broadcasts, something primarily observed in the US. Although ostensibly applied to prevent susceptible minds from being exposed to the unspeakable horrors of naughty words, the applied 1 kHz censoring tone is decidedly loud and obnoxious enough that its entertainment level falls somewhere between ‘truck backing up’ and ‘loud klaxon in busy traffic’. There is thus a definite argument to be made to censor the censoring beep to preserve one’s sanity, which is the goal of [Oona Räisänen]’s Bleep-be-gone project on GitHub.

Using a Perl-based wrapper, the versatile ffmpeg framework is used to filter a provided video afflicted with bleepitus, before outputting a pristine version in which the infernal noise is replaced with blissful silence. Silence is incidentally becoming the more commonplace way to censor naughty words, displacing the ear-piercing beep, and a tool like Bleep-be-gone can help hasten the latter’s demise. Considering that the point of the 1 kHz back-up alarm beep is to draw a person’s attention to a piece of heavy equipment moving about, there is clearly no good reason why the replacement of a naughty word should warrant a similar drawing of attention.
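
If you already know where the bleeps fall, ffmpeg’s timeline editing can do the muting all by itself; Bleep-be-gone automates finding them, so treat the following as a minimal sketch under the assumption that the timestamps are known, with placeholder file names.

```python
# Minimal sketch: mute known bleep intervals using ffmpeg's timeline
# editing. Bleep-be-gone finds the intervals itself; here they are
# assumed to be known, and the file names are placeholders.
import subprocess

def mute_intervals(src, dst, intervals):
    """Silence the audio between each (start, end) pair of seconds."""
    expr = "+".join(f"between(t,{s},{e})" for s, e in intervals)
    subprocess.run([
        "ffmpeg", "-i", src,
        "-af", f"volume=enable='{expr}':volume=0",  # mute inside intervals
        "-c:v", "copy",                             # don't touch the video
        dst,
    ], check=True)

mute_intervals("bleeped.mp4", "clean.mp4", [(12.5, 13.4), (95.0, 96.1)])
```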

Explore FFmpeg From The Comfort Of Your Browser

If you’re looking to manipulate video, FFmpeg is one of the most powerful tools out there. But with this power comes a considerable degree of complexity, and a learning curve that looks suspiciously like a brick wall. To try and make this incredible tool a bit less obtuse, [Sam Lavigne] has developed a web interface that lets you play around with FFmpeg’s vast collection of audio and video filters.

To try out a filter, you just need to select one from the window on the left and it will pop up in the central workspace. Here, the input, output, and any enabled filters show up as boxes that can be virtually “wired” together. Selecting a filter will populate its options on the right-hand side, with sliders and input boxes that let you play around with its parameters. When you want to see the final result, just click “Render Preview” and wait a bit.

If there’s a downside, it’s that between whatever box the site is running on and the overhead of working in the browser, FFmpeg isn’t left with a lot of horsepower. Even with the relatively low resolution of the demo videos available, the console output at the top of the page shows FFmpeg sometimes flirting with processing speeds measured in single-digit frames per second. Still, for a filter playground, it gets the job done. Perhaps the best part of the whole tool is that you can then copy your properly formatted command right out of the browser window and into your terminal so you can put it to work on your local files.
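
Since the playground’s output is a standard filtergraph, recreating one locally is trivial. A hedged example, with the filter chain and file names invented for illustration rather than copied from the site:

```python
# Run a filtergraph of the sort the playground generates. The specific
# filters and file names are illustrative, not taken from the site.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    # hue=s=0 desaturates to grayscale; scale halves the width and
    # uses -1 to pick a height that preserves the aspect ratio
    "-vf", "hue=s=0,scale=iw/2:-1",
    "output.mp4",
], check=True)
```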

FFmpeg is one of those programs you should really be familiar with because it often proves useful in unexpected ways. The ability to manipulate audio and video with just a few keystrokes can really come in handy, and we’ve seen this open-source tool used for everything from compressing podcasts onto floppy disks to overlaying real-time environmental data onto a video stream.

Pushing The Limits Of A 16×2 LCD With Bad Apple!!

While low-contrast, blue-on-slightly-less-blue 16-character by 2-line LCDs are extremely popular, they really are made specifically for alphanumeric use. They do an admirable job of displaying a few characters, but they don’t exactly spring to mind as a display for non-character purposes. But displaying video on a 16×2 LCD is possible, as long as you’re willing to stretch the definition of “video” a bit and use some imagination while watching.

Normally, a 16×2 display can only display a single character in each spot, chosen from a fixed character set. But [arduinocelantano] was able to leverage the eight custom character slots the display allows to build up images from arbitrary 5×8 pixel bitmaps. After using ffmpeg to scale the original video to a viewport of eight characters, a Python program was used to turn every frame of the scaled video into code to generate the custom bitmaps for each chunk of the viewport. Even with the low refresh rate of the display and the shrunken frame size, the result is a recognizable video, helped no doubt by the choice of the shadow-puppet Bad Apple!! video. Check it out after the break to see how it looks.
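
The heart of the trick is repacking each 5×8 cell of the processed frame into the eight bytes the HD44780’s CGRAM expects for a custom character. A rough sketch of that per-cell conversion, assuming the frame is already a 2D array of 0/1 pixels ([arduinocelantano]’s actual code may differ):

```python
# Sketch: pack one 5x8 pixel cell of a frame into the eight CGRAM
# bytes that define an HD44780 custom character. `frame` is assumed
# to be a 2D list of 0/1 pixels; the project's real code may differ.

def cell_to_cgram(frame, row, col):
    """Return the 8 CGRAM bytes for the character cell at (row, col)."""
    out = []
    for y in range(8):
        b = 0
        for x in range(5):
            b = (b << 1) | frame[row * 8 + y][col * 5 + x]
        out.append(b)  # the LCD only uses the low 5 bits of each byte
    return out
```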

We saw a similar rendering of the same video on LCD a while back; that effort was amazing in that it was an EEPROM-only implementation, driving a somewhat bigger LCD with better contrast. That project served as inspiration for [arduinocelantano]’s build here, which in some ways we think looks a bit better — perhaps it’s the inverted pixels. Either way, hats off to both builders for pushing past the normal constraints and teaching us something interesting.

Incredibly Slow Films, Now Playing In Dazzling Color

Back in 2018 we covered a project that would break a video down into its individual frames and slowly cycle through them on an e-paper screen. With a new image pushed out every three minutes or so, it would take thousands of hours to “watch” a feature length film. Of course, that was never the point. The idea was to turn your favorite movie into an artistic conversation piece; a constantly evolving portrait you could hang on the wall.

[Manuel Tosone] was recently inspired to build his own version of this concept, and now, thanks to several years of e-paper development, he was even able to do it in color. Ever the perfectionist, he decided to drive the seven-color 5.65-inch Waveshare panel with a custom STM32 board that he estimates can wring nearly 300 days of runtime out of six standard AA batteries, and to wrap everything up in a very professional-looking 3D-printed enclosure. The end result is a one-of-a-kind video frame that any hacker would be proud to display on their mantle.

The Hackaday.IO page for this project contains a meticulously curated collection of information, covering everything from the ffmpeg commands used to process the video file into a directory full of cropped and enhanced images, to flash memory lifetime estimates and energy consumption analyses. If you’ve ever considered setting up an e-paper display that needs to run for long stretches of time, regardless of what’s actually being shown on the screen, there’s an excellent chance that you’ll find some useful nuggets in the fantastic documentation [Manuel] has provided.
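
For a flavor of what that pipeline can look like, here’s a hedged one-liner that pulls one frame per minute of source video and letterboxes it for the panel’s 600×448 resolution; the exact rate, filters, and file names in [Manuel]’s writeup may differ:

```python
# Sketch of a frame-extraction pass: the real commands live on the
# project page; rate, filters, and file names here are assumptions.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "movie.mkv",
    # keep one frame per 60 seconds of source video, then letterbox
    # it to the 600x448 resolution of the 5.65" panel
    "-vf", "fps=1/60,scale=600:448:force_original_aspect_ratio=decrease,"
           "pad=600:448:(ow-iw)/2:(oh-ih)/2",
    "frames/%06d.png",
], check=True)
```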

We always love to hear about people being inspired by a project they saw on Hackaday, especially when we get to bring things full circle and feature their own take on the idea. Who knows, perhaps the next version of the e-paper video frame to grace these pages will be your own.

Audio Fingerprinting Skips A Show’s Intro, Reliably

Lacking a DVD drive, [jg] was watching a TV series in the form of a bunch of .avi video files. Of course, when every episode contains a full intro, it is only a matter of time before that gets too annoying to sit through.

Chapter breaks are reliably inserted around the intro, even though it doesn’t always occur in the same place.

The usual method of skipping the intro on a plain video file is a simple one:

  1. Manually drag the playback forward past the intro.
  2. Oops, that’s too far, bring it back.
  3. Ugh, reversed it too much, nudge it forward.
  4. Okay, that’s good.

[jg] was certain there was a better way, and the solution was using audio fingerprinting to insert chapter breaks. The plain video files now have chapter breaks around the intro, allowing for easy skipping straight to the content. The reason behind selecting this method is simple: the show intro is always 52 seconds long, but it isn’t always in the same place. The intro plays somewhere within the first two to five minutes of an episode, so just skipping to a specific timestamp won’t do the trick.

The first job is to extract the audio of an intro sequence so that it can be used for fingerprinting. Exporting the first 15 minutes of audio with ffmpeg easily creates a WAV file that can be trimmed down with the audio editor of one’s choice. That clip gets fed into the open-source SoundFingerprinting library as a signature, then each video has its audio track exported and the signature gets identified within it. SoundFingerprinting thereby detects where (down to the second) the intro exists within each video file.
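
The extraction step itself is a stock ffmpeg job; something along these lines produces a mono WAV of the first fifteen minutes, though the exact options [jg] used may differ:

```python
# Export the first 15 minutes of an episode's audio as a mono WAV,
# ready for trimming down to just the intro. Options are typical,
# not necessarily [jg]'s exact invocation.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "episode01.avi",
    "-t", "900",      # first 15 minutes only
    "-vn",            # drop the video stream
    "-ac", "1",       # downmix to mono
    "-ar", "44100",   # fingerprinting libraries usually want a fixed rate
    "first15min.wav",
], check=True)
```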

Marking out chapter breaks using that information is conceptually simple, but ends up being a bit roundabout because .avi files don’t seem to have a simple way to encode chapters. However, .mkv files are another matter. To get around this, [jg] first converts each .avi to .mkv using ffmpeg, then splices in the chapter breaks with mkvmerge. One important element is that the reformatting from .avi to .mkv is done without re-encoding the video itself, so it’s a quick process. The result is a bunch of .mkv files with chapter breaks around the intro, wherever it may be!
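
A sketch of that remux-and-chapter step is below; the OGM-style chapter file is one of the formats mkvmerge accepts, and the timestamps here are placeholders for wherever the fingerprinter found the intro:

```python
# Remux an .avi to .mkv without re-encoding, then splice in chapter
# marks with mkvmerge. Timestamps here are placeholders for wherever
# the fingerprinter located the intro.
import subprocess

def add_intro_chapters(avi, intro_start, intro_len=52.0):
    def ts(sec):
        return f"{int(sec // 3600):02d}:{int(sec % 3600 // 60):02d}:{sec % 60:06.3f}"

    # Simple OGM-style chapter file, which mkvmerge understands.
    with open("chapters.txt", "w") as f:
        f.write(f"CHAPTER01={ts(0)}\nCHAPTER01NAME=Start\n")
        f.write(f"CHAPTER02={ts(intro_start)}\nCHAPTER02NAME=Intro\n")
        f.write(f"CHAPTER03={ts(intro_start + intro_len)}\nCHAPTER03NAME=Content\n")

    mkv = avi.rsplit(".", 1)[0] + ".mkv"
    # -c copy remuxes the streams without re-encoding them
    subprocess.run(["ffmpeg", "-i", avi, "-c", "copy", "remuxed.mkv"], check=True)
    subprocess.run(["mkvmerge", "-o", mkv, "--chapters", "chapters.txt",
                    "remuxed.mkv"], check=True)

add_intro_chapters("episode01.avi", intro_start=183.0)
```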

The script is available here for anyone to play with, and the project page is a good learning reference because [jg] kindly provides all the command-line options used for each tool. Interested in using audio fingerprinting in your own projects? Remember to also check out Olaf, the Overly Lightweight Acoustic Fingerprinting method that can be implemented in embedded systems and web browsers.

E-Paper Display Shows Movies Very, Very Slowly

How much would you enjoy a movie that took months to finish? We suppose it would very much depend on the film; the current batch of films from the Star Wars franchise are quite long enough as they are, thanks very much. But a film like Casablanca or 2001: A Space Odyssey might be a very different experience when played on this ultra-slow-motion e-paper movie player.

The idea of displaying a single frame of a movie up for hours rather than milliseconds has captivated [Tom Whitwell] since he saw [Bryan Boyer]’s take on the concept. The hardware [Tom] used is similar: a Raspberry Pi, an SD card hat with a 64 GB card for the movies, and a Waveshare e-paper display, all of which fits nicely in an IKEA picture frame.

[Tom]’s software is a bit different, though; a Python program uses FFmpeg to fetch and dither frames from a movie at a configurable rate, allowing the viewing experience to be customized a little more than the original. Showing one frame every two minutes and then skipping four frames ahead, it has taken him more than two months to watch Psycho. He reports that the shower scene was over in a day and a half — almost as much time as it took to film — while the scene showing [Marion Crane] driving through the desert took weeks to finish. We always wondered why [Hitch] spent so much time on that scene.
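
The fetch-and-dither loop is easy to sketch: ffmpeg grabs a single frame, and Pillow’s 1-bit conversion applies Floyd–Steinberg dithering by default, which suits e-paper well. The frame rate, resolution, and file names below are assumptions rather than [Tom]’s actual settings:

```python
# Sketch of the fetch-and-dither step: grab one frame with ffmpeg,
# then let Pillow's 1-bit conversion apply Floyd-Steinberg dithering.
# Frame rate, panel size, and file names are assumptions, and aspect
# ratio handling is omitted for brevity.
import subprocess
from PIL import Image

def grab_frame(movie, frame_number, out="frame.png", fps=24):
    subprocess.run([
        "ffmpeg", "-ss", str(frame_number / fps),  # seek to the frame
        "-i", movie, "-frames:v", "1", "-y", out,
    ], check=True)
    img = Image.open(out).convert("L").resize((800, 480))
    return img.convert("1")  # 1-bit with Floyd-Steinberg dithering

dithered = grab_frame("psycho.mkv", frame_number=1000)
dithered.save("display.png")
```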

With the proper films loaded, we can see this being an interesting way to really study the structure and flow of a good film. It’s also a good way to cut your teeth on e-paper displays, which we’ve seen pop up in everything from weather stations to Linux terminals.

Capture Device Firmware Hack Unlocks All The Pixels

According to [Mike Walters], the Elgato Cam Link 4K is a great choice if you’re looking for an HDMI capture device that works under Linux. But the bad news is, it wouldn’t work with any of the video conferencing software he tried to use it with, because they expect the video stream to be in a different pixel format. For most people, that would probably have been the end of the story. But you’re reading this on Hackaday, so obviously he didn’t give up without a fight.

Early on, [Mike] found there was a software workaround for this exact issue. The problem isn’t that the Elgato can’t generate the desired format, it’s that the video conferencing programs just don’t know how to ask it to switch modes. The software fix is to create a dummy Video4Linux device and use that to change the format in real-time using ffmpeg. It’s a clever trick if you’ve got a conference call coming up in a few minutes, but it does waste CPU resources and adds some unnecessary hoop jumping.
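
The workaround in question leans on the v4l2loopback kernel module: load it to create the dummy device, then have ffmpeg relay the capture stream into it while converting the pixel format. A sketch of the idea, with device paths and the target format assumed:

```python
# Sketch of the loopback workaround: relay the capture device through
# a v4l2loopback dummy device while converting the pixel format.
# Device paths and formats here are assumptions for illustration.
# Requires the module to be loaded first: sudo modprobe v4l2loopback
import subprocess

subprocess.run([
    "ffmpeg",
    "-f", "v4l2", "-i", "/dev/video0",  # the Cam Link's real device
    "-pix_fmt", "yuv420p",              # format the conferencing app expects
    "-f", "v4l2", "/dev/video2",        # the dummy loopback device
], check=True)
```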

Putting the device into bootloader mode.

Inspired by the software fix, [Mike] wondered if there was a way he could simply force the Elgato to output video in the desired format by default. He found a firmware dump for the device online, and located where the pixel formats were referenced by searching for their names in ASCII with hexdump. Looking through the source of the Linux USB Video Class (UVC) driver, he was then able to determine what the full 16-byte sequence for each video mode should be, so he could zero out the unwanted ones. Then it was just a matter of flashing his modified firmware back to the hardware.
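
Those 16-byte sequences are UVC format GUIDs, which for uncompressed formats consist of the FourCC followed by a fixed 12-byte tail; that makes them easy to hunt down in a dump. A sketch of the general technique (not [Mike]’s exact tooling):

```python
# Sketch: locate UVC format GUIDs in a firmware dump. Uncompressed
# UVC formats use a GUID made of the FourCC plus a fixed 12-byte tail.
# This mirrors the general technique, not [Mike]'s exact tooling.

GUID_TAIL = bytes.fromhex("00000010008000AA00389B71")

def find_format_guids(path, fourccs=(b"YUY2", b"NV12")):
    data = open(path, "rb").read()
    for fourcc in fourccs:
        guid = fourcc + GUID_TAIL
        offset = data.find(guid)
        while offset != -1:
            print(f"{fourcc.decode()} GUID at offset {offset:#x}")
            offset = data.find(guid, offset + 1)

find_format_guids("camlink_firmware.bin")
```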

But there was a problem: with the modified firmware installed, the device stopped working. After investigating the obvious culprits, [Mike] broke out the oscilloscope and hooked it up to the Elgato’s flash chip. It turns out that due to a bug in the program he was using, the SPI erase commands weren’t getting sent during the flash. This led to corrupted firmware which kept the Elgato from booting. After making a pull request with his fixes, the firmware flashed without incident and the capture device now does double duty as a webcam when necessary.

We could certainly think of easier and quicker ways to roll your own webcam, but we’re glad that [Mike] took the time to modify his Elgato Cam Link 4K and document it. It’s a fantastic example of practical firmware hacking, even if you’re not in the market for a new high-definition video conferencing rig.