Make Your Bookshelf Clickable

We’ll confess that we have a fondness for real books and plenty of them. So does [James], and he decided he needed a way to take a picture of his bookshelves and make each book clickable to find more information. This is one of those things that sounds fairly simple until you decide to do it. You can try an example of the results and then go back and read about the journey it took to get there.

There are several subtasks involved. First, you want to identify each book’s envelope. It wouldn’t do to click on the Joy of Cooking and get information about Remembrance of Things Past.

The next challenge is reading the title of the book. This can be tricky. Fonts differ. The book could be upside down. Some titles go across the spine, but most go vertically. The remainder of the task is fairly easy. If you know the region and the title, you can easily find a link (for Google Books, in this case) and build an SVG overlay that maps the areas for each book to the right link.
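To make the idea concrete, here’s a minimal sketch (not [James]’s actual code) of how you might generate such an overlay in Python once you have each book’s region and link in hand. The coordinates and Google Books URLs below are made-up placeholders.

```python
# Minimal sketch, not [James]'s actual code. Given detected book regions and
# their matched Google Books links (all made-up placeholders here), emit an
# SVG overlay with one clickable, invisible rectangle per book spine.

books = [
    {"title": "The Joy of Cooking", "x": 12, "y": 40, "w": 38, "h": 310,
     "url": "https://books.google.com/books?q=joy+of+cooking"},
    {"title": "Remembrance of Things Past", "x": 55, "y": 35, "w": 42, "h": 315,
     "url": "https://books.google.com/books?q=remembrance+of+things+past"},
]

def make_overlay(books, width=800, height=400):
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    for b in books:
        # Each book becomes an invisible rectangle wrapped in a link; the SVG
        # sits layered over the bookshelf photo in the surrounding page.
        parts.append(
            f'  <a href="{b["url"]}">'
            f'<rect x="{b["x"]}" y="{b["y"]}" width="{b["w"]}" height="{b["h"]}" '
            f'fill="#000" fill-opacity="0" pointer-events="all">'
            f'<title>{b["title"]}</title></rect></a>'
        )
    parts.append("</svg>")
    return "\n".join(parts)

print(make_overlay(books))
```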

Continue reading “Make Your Bookshelf Clickable”

Cracking The Spotify Code

If you’ve used Spotify, you might have noticed a handy little code that it can generate that looks like a series of bars of different heights. If you’re like [Peter Boone], such an encoding will pique your curiosity, and you might set out to figure out how they work.

Spotify offers a little picture that, when scanned, opens almost anything searchable with Spotify. Several lines are centered on the Spotify logo with eight different heights, storing information in octal. Many visual encoding schemes encode some URI (Uniform Resource Identifier) that provides a unique identifier for that specific song, album, or artist when decoded. Since many URIs on Spotify are pretty long (one example being spotify:show:3NRV0mhZa8xeRT0EyLPaIp, which clocks in at 218 bits), some mechanism is needed to compress the URIs down to something more manageable. Enter the media reference, a short sequence encoding a specific URI, generally under 40 bits. The reference is just a lookup in a database that Spotify maintains, so it requires a network connection to resolve. The actual encoding scheme from media reference to the values in the bars is quite complex, involving CRC, convolution, and puncturing. The CRC allows the program to check for correct decoding, and the convolution lets the program tolerate a small number of read errors while still producing an accurate result. Puncturing is just removing bits to reduce the number of bits encoded, relying on the convolution to fill in the holes.
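To give a flavor of the general shape of the decode without claiming Spotify’s actual parameters, here’s a small Python sketch: bar heights read as octal digits (three bits each), and a generic CRC-8 stands in for the real integrity check. The polynomial, the example heights, and the missing convolution and puncturing steps are all placeholders; see [Peter]’s package for the real scheme.

```python
# A rough sketch of the general shape of the decode, NOT Spotify's actual
# parameters: each bar height 0-7 reads as one octal digit (three bits), and a
# CRC confirms the bits were recovered correctly. The polynomial and example
# heights are placeholders, and the convolutional decoding and de-puncturing
# steps are omitted entirely.

def bars_to_bits(heights):
    """Each bar height 0-7 contributes one octal digit, i.e. three bits."""
    return "".join(format(h, "03b") for h in heights)

def crc8(bits, poly=0x07):
    """Plain serial CRC-8 over a bit string (placeholder polynomial)."""
    reg = 0
    for b in bits:
        feedback = int(b) ^ (reg >> 7)   # XOR incoming bit with register MSB
        reg = (reg << 1) & 0xFF
        if feedback:
            reg ^= poly
    return reg

heights = [0, 5, 3, 7, 1, 4, 2, 6, 0, 3, 5, 1]   # a made-up example scan
bits = bars_to_bits(heights)
print(bits, hex(crc8(bits)))
```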

[Peter] explains it all helpfully and understandably in his write-up. The creator of the Spotify codes stopped by in the comments to offer some valuable pointers, including pointing out there is a second mode where the lines aren’t centered, allowing it to store double the bits. [Peter] has a Python package on GitHub with all the code you need to start decoding. Maybe you can incorporate a Spotify code scanner into your custom Spotify-playing mini computer.

What Exactly Is A Gaussian Blur?

Blurring is a commonly used visual effect when digitally editing photos and videos. One of the most common blurs used in these fields is the Gaussian blur. You may have used this tool thousands of times without ever giving it greater thought. After all, it does a nice job and does indeed make things blurrier.

Of course, we often like to dig deeper here at Hackaday, so here’s our crash course on what’s going on when you run a Gaussian blur operation. Continue reading “What Exactly Is A Gaussian Blur?”
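For a taste of what’s under the hood, here’s a minimal Python sketch of the idea: build a normalized 2D Gaussian kernel and convolve it with the image so every output pixel becomes a distance-weighted average of its neighbors. The kernel size and sigma are arbitrary example values.

```python
# Minimal sketch of a Gaussian blur: build a 2D Gaussian kernel, normalize it,
# and convolve it with the image so each output pixel is a distance-weighted
# average of its neighborhood. Kernel size and sigma are example values.
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()          # weights must sum to 1

def gaussian_blur(gray_image, size=5, sigma=1.0):
    # 'same' keeps the output shape; 'symm' mirrors the image at the borders
    return convolve2d(gray_image, gaussian_kernel(size, sigma),
                      mode="same", boundary="symm")

image = np.random.rand(64, 64)            # stand-in for a grayscale photo
blurred = gaussian_blur(image, size=7, sigma=2.0)
```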

Putting Perseverance Rover’s View Into Satellite View Context

It’s always fun to look over aerial and satellite maps of places we know, seeing a perspective different from our usual ground-level view. We lose that context when it’s a place we don’t know by heart. Such as, say, Mars. So [Matthew Earl] sought to give Perseverance rover’s landing video some context by projecting it onto orbital imagery from ESA’s Mars Express. The resulting video (embedded below the break) is a fun watch alongside the technical writeup Reprojecting the Perseverance landing footage onto satellite imagery.

Some telemetry of rover position and orientation was transmitted live during the landing process, with the rest recorded and downloaded later. Surprisingly, none of that information was used for this project, which was based entirely on video pixels. This makes the results even more impressive and the techniques more widely applicable to other projects. The foundational piece is SIFT (Scale Invariant Feature Transform), which is one of many tools in the OpenCV toolbox. SIFT found correlations between Perseverance’s video frames and the Mars Express orbital imagery, feeding into a processing pipeline written in Python for results rendered in Blender.
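For the curious, here’s a hedged OpenCV sketch of the kind of SIFT matching involved (not [Matthew Earl]’s actual pipeline): detect keypoints in a landing frame and in the orbital image, keep the distinctive matches, and fit a homography that warps one onto the other. The file names are placeholders, and the real pipeline adds plenty of filtering and Blender rendering on top.

```python
# Hedged sketch of SIFT matching between a video frame and orbital imagery,
# not the project's actual code. File names are placeholders.
import cv2
import numpy as np

frame = cv2.imread("perseverance_frame.png", cv2.IMREAD_GRAYSCALE)
orbital = cv2.imread("mars_express_orbital.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(frame, None)
kp2, des2 = sift.detectAndCompute(orbital, None)

# Lowe's ratio test keeps only distinctive correspondences
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC discards mismatches; H maps frame pixels into orbital-image pixels
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(frame, H, (orbital.shape[1], orbital.shape[0]))
```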

While many elements of this project sound enticing for applications in robot vision, there are a few challenges touched upon in the “Final Touches” section of the writeup. The falling heatshield interfered with automated tracking, implying this process will need help to properly understand dynamically changing environments. Furthermore, it does not seem to run fast enough for a robot’s real-time needs. But at first glance, these problems are not fundamental. They merely await some motivated people to tackle them in the future.

This process bears some superficial similarities to projection mapping, which is a category of projects we’ve featured on these pages. Except everything is reversed (camera instead of video projector, etc.), making the math an entirely different can of worms. But if projection mapping sounds more to your interest, here is a starting point.

[via Dr. Tanya Harrison @TanyaOfMars]

Continue reading “Putting Perseverance Rover’s View Into Satellite View Context”

[Image: sample of automatically generated comics]

Read Your Movies As Automatically Generated Comic Books

A research paper from Dalian University of Technology in China and City University of Hong Kong (direct PDF link) outlines a system that automatically generates comic books from videos. But how can an algorithm boil down video scenes to appropriately reflect the gravity of the scene in a still image? This impressive feat is accomplished by saving two still images per second, then segmenting the frames into scenes through region-of-interest analysis and importance ranking.
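That first sampling step is simple enough to sketch. Here’s a rough Python example (not the paper’s code) that keeps roughly two frames per second from a video using OpenCV; the file name is a placeholder, and the scene segmentation and importance ranking happen downstream of this.

```python
# Rough sketch of the first step only: sample about two stills per second.
# The file name is a placeholder; segmentation and ranking come later.
import cv2

cap = cv2.VideoCapture("movie.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
step = max(int(round(fps / 2)), 1)        # keep ~2 frames per second

frames, index = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        frames.append(frame)
    index += 1
cap.release()
print(f"kept {len(frames)} candidate keyframes")
```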

[Image: movie to comic book pipeline diagram]

For its next trick, speech for each scene is processed by combining subtitle information with the audio track of the video. The audio is analyzed for emotion to determine the appropriate speech bubble type and size of the subtitle text. Frames are even analyzed to establish which person is speaking for proper placement of the bubbles. It can then create layouts of the keyframes, determining panel sizes for each page based on the region-of-interest analysis.

The process is completed by stylizing the keyframes with flat color through quantization, for that classic cel shading look, and then populating the layouts with each frame and word balloon.
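As a rough illustration of that quantization step (not the paper’s implementation), here’s a Python sketch that uses k-means in color space to collapse a keyframe down to a handful of flat colors; the cluster count and file names are arbitrary placeholders.

```python
# Hedged sketch of flat-color quantization for a cel-shaded look: k-means in
# color space collapses a keyframe to k colors. Cluster count is arbitrary.
import cv2
import numpy as np

def quantize(frame_bgr, k=8):
    pixels = frame_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    flat = centers[labels.flatten()].astype(np.uint8)   # snap each pixel to its cluster color
    return flat.reshape(frame_bgr.shape)

keyframe = cv2.imread("keyframe.png")      # placeholder keyframe
comic_panel = quantize(keyframe, k=8)
cv2.imwrite("panel.png", comic_panel)
```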

The team conducted a study with 40 users, pitting their results against previous techniques which require more human intervention and still besting them in every measure. Like any great superhero, the team still sees room for improvement. In the future, they would like to improve the accuracy of keyframe selection and propose using a neural network to do so.

Thanks to [Qes] for the tip!

Colorizing Images With The Help Of AI

The world was never black and white – we simply lacked the technology to capture it in full color. Many have experimented with techniques to colorize black and white images. [Adrian Rosebrock] decided to put an AI on the job, with impressive results.

The method involves training a Convolutional Neural Network (CNN) on a large batch of photos, which have been converted to the Lab colorspace. In this colorspace, images are made up of 3 channels – lightness, a (red-green), and b (blue-yellow). This colorspace is used as it better corresponds to the nature of the human visual system than RGB. The model is then trained such that when given a lightness channel as an input, it can predict the likely a & b channels. These can then be recombined into a colorized image, and converted back to RGB for human consumption.
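Here’s a minimal Python sketch of that Lab-space plumbing, assuming a hypothetical trained model with a predict_ab() method (a placeholder, not [Adrian Rosebrock]’s actual network): split off the lightness channel, predict a and b, and stitch everything back together.

```python
# Sketch of the Lab-space plumbing described above. The model and its
# predict_ab() method are hypothetical placeholders, not the actual network.
import cv2
import numpy as np

def colorize(gray_bgr, model):
    """gray_bgr: a black-and-white photo loaded as a 3-channel image."""
    lab = cv2.cvtColor(gray_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L = lab[:, :, 0]                  # lightness channel: the network's input
    ab = model.predict_ab(L)          # hypothetical: returns an HxWx2 a/b prediction
    lab_out = np.clip(np.dstack([L, ab]), 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab_out, cv2.COLOR_LAB2BGR)   # back to RGB for viewing
```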

It’s a technique capable of doing a decent job on a wide variety of material. Things such as grass, countryside, and ocean are dealt with particularly well; however, more complicated scenes can suffer from some aberration. Regardless, it’s a useful technique, and far less tedious than manual methods.

CNNs are doing other great things too, from naming tomatoes to helping out with home automation. Video after the break.

Continue reading “Colorizing Images With The Help Of AI”

Take Pictures Around A Corner

One of the core lessons any physics student will come to realize is that the more you know about physics, the less intuitive it seems. Take the nature of light, for example. Is it a wave? A particle? Both? Neither? Whatever the answer to the question, scientists are at least able to exploit some of its characteristics, like its ability to bend and bounce off of obstacles. This camera, for example, is able to image a room without a direct line of sight as a result.

The process works by pointing a camera through an opening in the room and then strobing a laser at the exposed wall. The laser light bounces off of the wall, into the room, off of the objects on the hidden side of the room, and then back to the camera. This concept isn’t new, but the interesting thing that this group has done is lift the curtain on the image processing underpinnings. Before, the process required a research team and often the backing of a university, but this project shows off the technique using just a few lines of code.

This project’s page documents everything extensively, including all of the algorithms used for reconstructing an image of the room. And by the way, it’s not a simple 2D image, but a 3D model that the camera can capture. So there should be some good information for anyone working in the 3D modeling world as well.
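For a sense of how such a reconstruction can work in just a few lines, here’s a heavily simplified backprojection sketch in Python (not this project’s actual algorithm): for each candidate point in the hidden volume, sum up the laser returns whose time of flight matches the wall-spot-to-point round trip. The array shapes, bin width, and geometry are all made-up placeholders.

```python
# Heavily simplified backprojection sketch for this kind of reconstruction,
# not the project's code. Shapes, bin width, and geometry are placeholders.
import numpy as np

C = 3e8                                    # speed of light, m/s
BIN = 16e-12 * C                           # light-travel distance per time bin (placeholder)

def backproject(transients, spots, voxels):
    """transients: (n_spots, n_bins) photon counts per laser spot on the wall.
    spots: (n_spots, 3) laser spot positions; voxels: (n_voxels, 3) points to test."""
    volume = np.zeros(len(voxels))
    for counts, spot in zip(transients, spots):
        dist = 2.0 * np.linalg.norm(voxels - spot, axis=1)    # out and back
        bins = np.clip((dist / BIN).astype(int), 0, counts.size - 1)
        volume += counts[bins]    # points consistent with many returns brighten
    return volume
```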

Thanks to [Chris] for the tip!