Interactive Dynamic Video

If a picture is worth a thousand words, a video must be worth millions. However, computers still aren't very good at analyzing video. Machine vision software like OpenCV can handle certain tasks, like facial recognition, quite well. But current software isn't good at determining the physical nature of the objects being filmed. [Abe Davis, Justin G. Chen, and Fredo Durand] are members of the MIT Computer Science and Artificial Intelligence Laboratory. They're working toward a method of determining the structure of an object based on its motion in a video.

The technique relies on vibrations which can be captured by a typical 30 or 60 frames-per-second (fps) camera. Here's how it works: a locked-down camera is used to image an object. The object is moved by wind, by someone banging on it, or by any other mechanical means. This movement is captured on video. The team's software then analyzes the video to see exactly where the object moved, and how much it moved. Complex objects can have many vibration modes. The wire frame figure used in the video is a great example. The hands of the figure will vibrate more than the figure's feet. The software uses this information to construct a rudimentary model of the object being filmed. It then allows the user to interact with the object by clicking and dragging with a mouse. Dragging the hands will produce more movement than dragging the feet.
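The interaction step described above can be sketched as modal synthesis: each vibration mode recovered from the video has a shape (how much each point moves), a frequency, and a damping ratio; a user's drag is projected onto the modes, and each mode then rings down like a damped oscillator. The following is a minimal NumPy sketch of that idea, not the team's actual implementation; the function name, array layout, and toy numbers are all assumptions made for illustration.

```python
import numpy as np

def modal_response(mode_shapes, freqs, damping, drag_point, drag_vec, t):
    """Displacement of every point at time t after a drag is released.

    mode_shapes: (n_modes, n_points, 2) per-point 2D displacement of each mode
    freqs:       (n_modes,) natural frequency of each mode in Hz
    damping:     (n_modes,) damping ratio of each mode
    drag_point:  index of the point the user dragged
    drag_vec:    (2,) drag displacement vector applied by the user
    """
    disp = np.zeros((mode_shapes.shape[1], 2))
    for shape, f, z in zip(mode_shapes, freqs, damping):
        # Project the drag onto this mode: a mode is excited more strongly
        # when the dragged point's mode shape aligns with the drag direction.
        q0 = shape[drag_point] @ drag_vec
        w = 2.0 * np.pi * f
        # Free response of the modal coordinate: a decaying cosine.
        q = q0 * np.exp(-z * w * t) * np.cos(w * np.sqrt(1.0 - z**2) * t)
        # Each mode contributes its shape scaled by its modal coordinate.
        disp += q * shape
    return disp

# Toy example mirroring the wire frame figure: one mode where the "hand"
# (point 0) moves five times more than the "foot" (point 1).
shapes = np.array([[[1.0, 0.0], [0.2, 0.0]]])
d = modal_response(shapes, np.array([2.0]), np.array([0.05]),
                   drag_point=0, drag_vec=np.array([1.0, 0.0]), t=0.0)
```

With these toy numbers, dragging the hand displaces the hand more than the foot, matching the behavior described above.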

The results aren’t perfect – they remind us of computer animated objects from just a few years ago. However, this is very promising. These aren’t textured wire frames created in 3D modeling software. The models and skeletons were created automatically using software analysis. The team’s research paper (PDF link) contains all the details of their research. Check it out, and check out the video after the break.

12 thoughts on “Interactive Dynamic Video”

  1. Why does this come across as “Video analysis solved…. we say screw it and use a form of sonar” ? :-D

    Though good point that understanding an object comes through interacting with it, and babies begin to do it by stuffing things in their mouths… maybe need machines with mouths :-D

  2. Now this, I could actually see this having a lot of applications in making animations in video games more realistic. ESPECIALLY in making physics in games and bringing it up to a reasonable level. I could completely see taking a leaf off a plant, doing this to it and then making a 3d model of a plant with the same weights and whatnot to create more realistic animations.

  3. They should team up with the guys that were using FPGA based systems to extract real-time geometry from stereo camera feeds. I wonder if they can use a third central camera running at low resolution and 120 fps to extract the dynamics that can then be used to distort the high resolution 3D video?
