Turning Video Game Sprites Into 3D Objects

Anyone who has played Minecraft for a good amount of time should have a solid grasp of making 3D objects by placing voxels block by block. A giant voxel art dragon behind your base is cool, but what about the math behind your block-based artwork? [mikolalysenko] put together a tutorial for making 3D objects out of video game sprites and covers a lot of the math involved in turning pixels into voxels.

The process of modeling a 3D object from a series of 2D images is a very well-studied computer vision problem called multiview stereo reconstruction. This process has been used to build 3D models of random objects with devices such as the Stanford spherical gantry. Unfortunately the math for this algorithm is a mess, but there is another way: using photo hulls (PDF warning) to find the largest possible object from a series of images showing the top, bottom, left, right, front, and back views.

[mikolalysenko] put together an algorithm to produce 3D models from a series of images and even went so far as to build a web-based shape carving editor. With this web app, it’s possible to make 3D objects simply by drawing colored pixels onto six 2D grids.
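The carving step itself is simple enough to sketch. Below is a minimal, illustrative Python version assuming just three axis-aligned silhouettes stored as NumPy arrays; the full photo hull also uses the opposite three views and checks color consistency between them, so treat the array layout and the `carve` function as placeholders rather than [mikolalysenko]'s actual code.

```python
import numpy as np

def carve(front, side, top):
    """Keep a voxel only if every view shows a filled pixel at its projection.

    front[y, x], side[y, z], top[z, x] are 2D arrays where 0 means "empty".
    Returns a boolean occupancy grid indexed as occupied[x, y, z].
    """
    H, W = front.shape
    D = side.shape[1]
    occupied = np.zeros((W, H, D), dtype=bool)
    for x in range(W):
        for y in range(H):
            for z in range(D):
                occupied[x, y, z] = (front[y, x] != 0 and
                                     side[y, z] != 0 and
                                     top[z, x] != 0)
    return occupied

# A 2x2x2 block where the top view erases one column of voxels
front = np.ones((2, 2), dtype=int)
side = np.ones((2, 2), dtype=int)
top = np.array([[1, 1], [1, 0]])
print(carve(front, side, top).sum())  # 6 voxels survive the carve
```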

Once the models were complete, [mikolalysenko] sent some of the 3D models off to Shapeways for 3D printing. He’s completed Meat Boy, Mario, and Link 3D sprites, all available for sale.

Now the only thing left to do is build a script to turn these objects into Minecraft object schematics.

Help Computer Vision Researchers, Get A 3D Model Of Your Living Room

Robots can easily make their way across a factory floor; with painted lines on the floor, a factory makes for an ideal environment for a robot to navigate. A much more difficult test of computer vision lies in your living room. Finding a way around a coffee table and not knocking over a lamp present a huge challenge for any autonomous robot. Researchers at the Royal Institute of Technology in Sweden are working on this problem, but they need your help.

[Alper Aydemir], [Rasmus Göransson] and Prof. [Patric Jensfelt] at the Centre for Autonomous Systems in Stockholm created Kinect@Home. The idea is simple: by modeling hundreds of living rooms in 3D, the computer vision and robotics researchers will have a fantastic library to train their algorithms.

To help out the Kinect@Home team, all that is needed is a Kinect, just like the one lying disused in your cupboard. After signing up on the Kinect@Home site, you’re able to create a 3D model of your living room, den, or office right in your browser. This 3D model is then added to the Kinect@Home library for CV researchers around the world.

Adding New Features And Controlling A Kinect From A Couch

Upon the release of the Kinect, Microsoft showed off its golden child as the beginnings of a revolution in user interface technology. The skeleton and motion detection promised a futuristic, hand-waving “Minority Report-style” interface where your entire body controls a computer. Reality hasn’t exactly lived up to those expectations, but [Steve], along with his coworkers at Amulet Devices, has vastly improved the Kinect’s skeleton recognition so people can use a Kinect sitting down.

One huge drawback to using the Kinect for a Minority Report-style UI in a home theater is the fact that Microsoft’s skeleton recognition doesn’t work well when you’re sitting down. Instead of relying on the built-in skeleton recognition that comes with the Kinect, [Steve] rolled his own skeleton detection using Haar classifiers.

Detecting Haar-like features has been used in many applications of computer vision technology; it’s a great, not-very-computationally-intensive way to detect faces and body positions with a simple camera. The software requires training, and [Steve]’s app spent several days training itself. The results were worth it, though: the Kinect now recognizes [Steve] waving his arm while he is lying down on the couch.
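For reference, this is roughly what off-the-shelf Haar-cascade detection looks like with OpenCV. [Steve]'s classifier was trained on his own data, so the pre-trained upper-body cascade and the detection parameters here are only stand-ins to show the general shape of the approach.

```python
import cv2

# Ships with OpenCV; a custom-trained cascade file would be loaded the same way.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_upperbody.xml")

cap = cv2.VideoCapture(0)  # any webcam (or the Kinect's RGB stream)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Slide the cascade over an image pyramid and return bounding boxes
    bodies = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in bodies:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```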

Not content to stop there, [Steve] also threw voice recognition into his Kinect home theater controller; a fitting addition, as his employer makes a voice recognition remote control. The recognition software seems to work very well, even with the wistful Scottish accent [Steve] has honed over a lifetime.

[Steve]’s employer is giving away their improved Kinect software that works for both the Xbox and Windows Kinects. If you’re ever going to do something with a Kinect that isn’t provided with the SDKs and APIs we covered earlier today, this will surely be an invaluable resource.

You can check out [Steve]’s demo of the new Kinect software after the break.

Continue reading “Adding New Features And Controlling A Kinect From A Couch”

Birdwatching Meets A Computer-Controlled Water Cannon, Awesomeness Ensues

Sure, squirrels may bother the average homeowner, but few have attempted as creative a way to control them as this automated water turret. Check out the video after the break to see how this was accomplished, but if you’d rather just see how the squirrels reacted to getting squirted, fast forward to around 16:00. [Kurt] was sure this would be his solution; his conclusion, however, was that “squirrels don’t care.”

As for the presentation, it’s more about how to use OpenCV, the Open Source Computer Vision library. It’s quite a powerful piece of software, especially considering that something like this would cost thousands of dollars in a normal market. An Arduino is used to interface the computer’s outputs to the real world and control a squirt gun; a rough sketch of that glue code follows. If you’d rather not program something like this yourself, you could always simply use a garden hose, as someone suggests just after the video. Continue reading “Birdwatching Meets A Computer-Controlled Water Cannon, Awesomeness Ensues”
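This isn't [Kurt]'s code, but the OpenCV-to-Arduino glue can be as simple as the sketch below: flag frames with enough foreground motion and send a one-byte "fire" command over serial. The serial port, baud rate, and single-character protocol are assumptions for illustration.

```python
import cv2
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed port and baud
subtractor = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)        # foreground pixels = something moving
    if cv2.countNonZero(mask) > 5000:     # enough motion to plausibly be a squirrel
        arduino.write(b"F")               # the Arduino sketch opens the valve on 'F'
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
arduino.close()
```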

Easy Camera Tracking With A Quadrocopter

[DJ Sures] has been pulling all-nighters lately to get his Parrot AR Drone build off the ground. Now that it’s up and flying around, he managed to get it to follow objects around the room using the on-board cameras.

For the build, [DJ Sures] used the AR Drone ‘flying video game’ quadrocopter. This toy has two on-board cameras that can be viewed over WiFi. All that’s needed is some interesting software to make things fun. The camera tracking of the EZ-Builder software was brought into the mix, so the AR Drone can be controlled via object or speech recognition, Wiimotes, tablets, or terminals.
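EZ-Builder's internals aren't published, but camera following generally boils down to thresholding a color, finding the blob's center, and turning its offset from the image center into steering corrections. The sketch below shows that idea in Python with OpenCV; the HSV range, gains, and the stand-in video source are invented for the example, and the printed corrections would be sent to the drone as flight commands.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # stand-in for the drone's WiFi video stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Threshold a blue-ish target; the range is an arbitrary example
    mask = cv2.inRange(hsv, np.array([100, 120, 70]), np.array([130, 255, 255]))
    m = cv2.moments(mask)
    if m["m00"] > 1e3:                        # target is in view
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        h, w = mask.shape
        yaw = (cx - w / 2) / (w / 2)          # -1..1, turn toward the target
        climb = (h / 2 - cy) / (h / 2)        # -1..1, climb or descend
        print(f"yaw={yaw:+.2f} climb={climb:+.2f}")  # would become drone commands
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```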

[DJ Sures] has come up with some slightly terrifying but awesome builds like a Bluetooth Teddy Ruxpin, a realistic Wall-E, and an Omnibot 2000 refurb. This is his first flying hack, and the first to fully exploit the camera tracking of the EZ-Builder software. Check out [Sures]’ copter following him around a room after the break.

Continue reading “Easy Camera Tracking With A Quadrocopter”

Modded Wall-E Becomes A Real Robot

[DJ Sures] got his hands on a plastic Wall-E toy and decided to build a robot that includes a camera, voice recognition, and object tracking. The result is adorable so we’re putting this video before the break:

http://www.youtube.com/watch?v=OJiMUzJHYFk

Continue reading “Modded Wall-E Becomes A Real Robot”