Focus Your Ears with The Visual Microphone

A group of researchers from MIT, Microsoft, and Adobe has managed to reproduce sound using video alone. The sounds we make bounce off every object in the room, causing microscopic vibrations. The Visual Microphone uses a high-speed video camera and some clever signal processing to extract an audio signal from those vibrations. Using video of everyday objects such as snack bags, plants, Styrofoam cups, and water, the team was able to reproduce tones, music, and speech. Capturing audio from light isn’t exactly new; laser microphones have been around for years. The difference here is that the Visual Microphone is a completely passive device: no laser or special illumination is required.

The secret is in the signal processing, which the team explains in their SIGGRAPH paper (PDF link). They used a complex steerable pyramid along with wavelet filters to obtain local pixel motion values, then averaged those local values into a single global motion value. From this global motion value the team is able to measure movement down to 1/1000 of a pixel, plenty of resolution to decode audio data.
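
The chain is worth sketching out. In broad strokes: filter each frame with a complex (quadrature) filter, read local motion out of the per-pixel phase changes, and collapse everything into one amplitude-weighted global value per frame. The toy Python below follows that recipe with a single complex Gabor filter rather than the paper’s full multi-scale, multi-orientation steerable pyramid, so treat it as an illustration of the idea, not a reimplementation.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=15, wavelength=4.0, theta=0.0):
    # Complex Gabor filter: a Gaussian window times a complex sinusoid.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 6.0) ** 2))
    return envelope * np.exp(1j * 2 * np.pi * rot / wavelength)

def global_motion_signal(frames):
    # frames: (T, H, W) float array of grayscale video.
    kern = gabor_kernel()
    ref = fftconvolve(frames[0], kern, mode="same")
    signal = []
    for frame in frames:
        resp = fftconvolve(frame, kern, mode="same")
        dphase = np.angle(resp * np.conj(ref))  # local phase shift ~ local motion
        weight = np.abs(resp) ** 2              # trust textured, high-amplitude pixels
        signal.append((dphase * weight).sum() / weight.sum())
    return np.array(signal)  # one audio sample per video frame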

Most of the research was performed with high-speed video cameras, which are well outside the budget of the average hacker. Don’t despair, though: the team proved that the same magic can be performed with consumer cameras, albeit with lower-quality results. They took advantage of the rolling shutter found in most of today’s CMOS-sensor consumer cameras. A rolling-shutter sensor captures the image one row at a time, so each row can be processed in much the same fashion as a frame from the high-speed camera, though there are inter-frame gaps when the camera isn’t recording anything. Even with the reduced resolution, it’s easy to pick out “Mary Had a Little Lamb” in the video below.
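
The arithmetic behind the rolling-shutter trick: a 60 fps camera with a 720-row sensor exposes on the order of 43,000 rows per second, minus whatever the inter-frame gap eats, which is why audio survives at all. Here is a rough sketch of the idea (illustrative only, not the team’s code): estimate each row’s tiny horizontal shift against a reference frame and string the rows together in capture order.

import numpy as np

def row_shift(row, ref_row):
    # Integer horizontal shift between two rows via cross-correlation.
    corr = np.correlate(row - row.mean(), ref_row - ref_row.mean(), mode="same")
    return int(np.argmax(corr)) - len(row) // 2

def rolling_shutter_signal(frames):
    # frames: (T, H, W) grayscale video; yields roughly T*H samples,
    # one per sensor row, ordered by exposure time.
    ref = frames[0]
    samples = []
    for frame in frames:                 # coarse time: the frame
        for r in range(frame.shape[0]):  # fine time: the row within it
            samples.append(row_shift(frame[r], ref[r]))
    return np.array(samples)             # effective sample rate ~ fps * rows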

We’re blown away by this research, and we’re sure certain organizations will be looking into it for their own use. Don’t pull out your tinfoil hats just yet, though: foil containers proved to be among the best sound reflectors.

3-Sweep: Turning 2D images into 3D models

As 3D printing continues to grow, people are developing more and more ways to get 3D models. From hardware-based scanners like the Microsoft Kinect to software like 123D Catch, there are a lot of ways to create a 3D model from a series of images. But what if you could make a 3D model out of a single image? Sound crazy? Maybe not. A team of researchers has created 3-Sweep, an interactive technique for turning objects in 2D images into 3D models that can be manipulated.

To be clear, recognizing 3D components within a single image remains out of reach for computer algorithms alone. But by combining the cognitive abilities of a person with the computational accuracy of a computer, the team has created a very simple tool for extracting 3D models. The user outlines the shape much as they would while modeling in a CAD package; once the outline is complete, the algorithm takes over and creates a model.
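
Under the hood, 3-Sweep fits generalized cylinders: the user’s strokes pin down a cross-section profile and an axis, and the model is produced by sweeping that profile along the axis. A minimal sketch of that geometric core (straight vertical axis, circular cross-sections, made-up profile values):

import numpy as np

def sweep_surface(radii, heights, sides=32):
    # Stack circular cross-sections of the given radii along a straight
    # axis, producing an (n_slices, sides, 3) grid of mesh vertices.
    angles = np.linspace(0, 2 * np.pi, sides, endpoint=False)
    rings = [np.column_stack([r * np.cos(angles),
                              r * np.sin(angles),
                              np.full(sides, z)])
             for r, z in zip(radii, heights)]
    return np.stack(rings)

# A vase-like profile such as one traced from an image outline:
profile = [1.0, 0.8, 0.6, 0.7, 1.1]
mesh = sweep_surface(profile, np.linspace(0.0, 2.0, len(profile)))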

The software debuted at SIGGRAPH Asia 2013 and has caused quite a stir on the internet. Watch the fascinating video demonstrating the process after the break!

24-hour hackathon project adds object-based automation to hackerspace

[Jeremy Blum], [Jason Wright], and [Sam Sinensky] combined forces for twenty-four hours to automate the entertainment and lighting at their hackerspace. They commandeered the whiteboard and used an already-present webcam as part of their project. You can see the black tokens, which can be moved around the blue tape outlines to actuate the controls.

MATLAB is fed an image from the webcam that monitors the space. Frames are received once every second and parsed for changes in the tokens. The small black squares either skip to the next music track or toggle play/pause: simply move one off its designated spot and the image processing does the rest. The same goes for the volume slider. We think the huge token for the lights is there to ensure the camera can sense a change even in a darkened room.
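
The team worked in MATLAB, but the detection step translates directly to other toolkits. Here is a hypothetical OpenCV/Python sketch of the same idea, with invented region coordinates and threshold: watch the mean brightness of each token’s home spot and fire the matching action when the dark token leaves it.

import cv2

REGIONS = {                   # (x, y, w, h) of each token's home spot (made up)
    "next_track": (100, 50, 40, 40),
    "play_pause": (160, 50, 40, 40),
}
TOKEN_THRESHOLD = 60          # a dark token drags the region's mean brightness down

cap = cv2.VideoCapture(0)
was_present = {name: True for name in REGIONS}
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for name, (x, y, w, h) in REGIONS.items():
        present = gray[y:y + h, x:x + w].mean() < TOKEN_THRESHOLD
        if was_present[name] and not present:
            print("trigger:", name)  # swap in the real media-player call here
        was_present[name] = present
    cv2.waitKey(1000)                # roughly one frame per second, as in the build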

If image processing isn’t your thing you can still control your audio entertainment with a frickin’ laser.

Massively parallel CPU processes 256 shades of gray

The 1980s were a heyday for strange computer architectures; instead of the von Neumann architecture you’d find in one of today’s desktop computers or the Harvard architecture of a microcontroller, a lot of companies experimented with exotic parallel designs. While not used much today, these were some of the most powerful computers of their era, and they served as the main research tools of the AI renaissance of the 1980s.

Over at the Norwegian University of Science and Technology a huge group of students (13 members!) designed a modern take on the massively parallel computer. It’s called 256 Shades of Gray, and it processes 320×240 pixel 8-bit grayscale graphics like no microcontroller could.

The idea for the project was to create an array-based parallel image processor with an architecture similar to the Goodyear MPP formerly used by NASA, or the Connection Machine found in the control room of Jurassic Park. Unlike those earlier computers, the team implemented their array processor in an FPGA, giving rise to their Lena processor, which is in turn controlled by a 32-bit AVR microcontroller with a custom-built VGA output.

The entire machine can process 320×240 grayscale video at 10 frames per second. There’s a presentation video available (in Norwegian), but the highlight might be their demo of Conway’s Game of Life rendered in real time on their computer. An awesome build, and a very cool experience for all the members of the class.
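
Life is a natural demo for an array processor because every cell’s next state depends only on its 3×3 neighborhood, exactly the shape of an image convolution, so all 320×240 cells can update in lockstep. A compact numpy/scipy rendering of one generation:

import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(grid):
    # grid: 2-D array of 0/1 cells; returns the next generation.
    neighbors = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    # A cell lives next step if it has 3 neighbors, or 2 and is alive now.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

grid = (np.random.rand(240, 320) < 0.2).astype(np.uint8)  # random 320x240 start
for _ in range(10):
    grid = life_step(grid)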

Quantifying Cloudiness with OpenCV

The Shard is the tallest building in Western Europe and has a great view of London. The condos in the building are very expensive, and a tourist ride to the top costs £24.95.

Since the value of the view is so high, [Willem] wanted to quantify the quality of the view at any given time. His solution is the Shard Rain Cam. This device combines a Logitech webcam with a Raspberry Pi to capture a time-lapse set of images. These images are fed to a Python script that uses OpenCV to quantify the cloudiness.
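
[Willem]’s full script is linked below, but as a flavor of the approach, one simple cloudiness metric is the fraction of pixels that look gray or white (low saturation, high brightness) rather than sky blue. The thresholds here are guesses for illustration, not his values.

import cv2
import numpy as np

def cloudiness(image_path):
    # Fraction of the frame that looks like cloud, from 0.0 to 1.0.
    bgr = cv2.imread(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sat, val = hsv[..., 1], hsv[..., 2]
    cloud = (sat < 40) & (val > 150)  # desaturated and bright = cloud
    return float(np.count_nonzero(cloud)) / cloud.size

print(cloudiness("shard_view.jpg"))   # hypothetical time-lapse frame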

[Willem] also had to build a weatherproof enclosure with a transparent window for the camera and RPi. ‘Clingfilm’, which is British for saran wrap, and mineral oil is used to improve the waterproofing of an IP54 rated enclosure.

The resulting data is displayed on www.whatcaniseefromtheshard.com, which provides an indication of whether or not the view is worth £24.95. All of the Python code is available, and is a good starting point for learning about image processing with OpenCV.

DIY book scanner processes 600 pages/hour

Like any learned individual, [Justin] has a whole mess of books. Not being tied to the dead-tree format of bound paper, and with e-readers popping up everywhere, he decided to build a low-cost book scanner so an entire library can be carried in his pocket. If that’s not enough, there’s also a complementary book image processor to assemble the individual pictures into a paginated tome.

The build is pretty simple: just a bit of black craft board for the camera mount and an adjustable book cradle. [Justin] ended up using the CHDK software for his Canon PowerShot camera to hack in a remote trigger. The scanner manages to photograph 600 pages an hour, a rate that would roughly double if he ever moves up to a two-camera setup.
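
The companion image processor’s job is to crop and square up each photo into a clean page. Here is a hedged OpenCV sketch of one common approach (a generic technique, not necessarily [Justin]’s pipeline): threshold the photo, find the page’s outline, and warp it to a flat rectangle.

import cv2
import numpy as np

def order_corners(pts):
    # Return corners as top-left, top-right, bottom-right, bottom-left.
    s, d = pts.sum(axis=1), np.diff(pts, axis=1).ravel()
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

def extract_page(image_path, out_w=1200, out_h=1600):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    page = max(contours, key=cv2.contourArea)  # biggest bright blob = the page
    quad = cv2.approxPolyDP(page, 0.02 * cv2.arcLength(page, True), True)
    if len(quad) != 4:
        return img                             # no clean quadrilateral found
    src = order_corners(quad.reshape(4, 2).astype(np.float32))
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    return cv2.warpPerspective(img, cv2.getPerspectiveTransform(src, dst),
                               (out_w, out_h))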

LEGO Mindstorms Sudoku Solver

Swedish hacker [Hans Andersson] is no stranger to puzzle-solving robots. His prior work, a Rubik’s-cube-solving robot called Tilted Twister, made waves through the internet last year. [Hans’] latest project only has to work in two dimensions, but is no less clever. This new robot, built around the LEGO Mindstorms NXT system, “reads” a printed sudoku page, solves the puzzle, then fills out the solution right on the same page, confidently and in ink. It’s a well-rounded project that brings together an unexpected image scanner, image-processing algorithms, and precise motor control, all using standard NXT elements.
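
The scanning and pen control are the showpieces, but the solving step in the middle is classic backtracking search. For flavor, here is the textbook algorithm in Python (the robot’s own solver is [Hans’] code, not this):

def valid(grid, r, c, v):
    # True if digit v can legally go at row r, column c.
    if v in grid[r] or any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    # grid: 9x9 list of lists, 0 marking an empty cell; solved in place.
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False   # dead end: undo and backtrack
    return True                # no empty cells left, puzzle solved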

The building instructions have not yet been posted, but if the video above and the directions for his prior ’bot are any indication, then we’re in for a treat; he simply has a knack for explaining things concisely and with visual clarity. The source code and the detailed PDF diagrams for Tilted Twister are as gorgeous as his new robot’s penmanship.

[thanks Eric]
