Putting yourself inside a display dome

Here’s an interesting build that combines light, sound, and gesture recognition to make a 360-degree environment of light and sound. It’s called The Bit Dome, and while the pictures and video are very cool, we’re sure it’s more impressive in real life.

The dome is constructed of over a hundred triangles made of foam insulation sheet, resulting in a structure that is 10 feet in diameter and seven and a half feet tall. Every corner of these panels has an RGB LED driven by a Rainbowduino, which is in turn controlled by a computer hooked up to a Kinect.
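
The write-up doesn’t go into how the computer drives the LED boards, but a serial link from the host PC is the usual approach. Here’s a minimal sketch under that assumption; the packet format (start byte, LED index, RGB) is made up for illustration and is not the actual Rainbowduino protocol.

```python
# Hypothetical sketch: push per-vertex RGB values from the host PC to the
# LED controller over serial. The framing (0xFF, LED index, R, G, B) is an
# assumption for illustration, not the actual Rainbowduino protocol.
import serial  # pyserial

NUM_VERTICES = 120  # placeholder count: one LED per panel corner

def send_frame(port, colors):
    """colors: list of (r, g, b) tuples, one per LED vertex."""
    for index, (r, g, b) in enumerate(colors):
        port.write(bytes([0xFF, index, r, g, b]))  # assumed packet format

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        # Smoke test: fade every vertex to a dim blue.
        send_frame(port, [(0, 0, 64)] * NUM_VERTICES)
```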

Interacting with the dome begins by stepping inside and running the calibration routine. By having the user point their arms at different points inside the dome, the computer can reliably tell where the user is pointing, and respond when the user cycles through the dome’s functions.
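
There’s no code in the post, but the pointing estimate presumably falls out of the Kinect’s skeleton data. A minimal sketch of the idea, where the joint coordinates, target directions, and function names are all assumptions: take the shoulder-to-hand vector and match it against the directions recorded during calibration.

```python
# Hypothetical sketch: decide which calibrated target the user is pointing
# at from two skeleton joints. Joints arrive as (x, y, z) tuples in meters
# from whatever Kinect skeletal tracker is being used.
import math

def unit(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def pointing_ray(shoulder, hand):
    """Normalized direction from the shoulder joint to the hand joint."""
    return unit(tuple(h - s for h, s in zip(hand, shoulder)))

def closest_target(ray, targets):
    """targets maps a name to the unit direction recorded during calibration;
    returns the name whose direction best matches the pointing ray."""
    return max(targets, key=lambda name: sum(a * b for a, b in zip(ray, targets[name])))

# Example: the user points roughly toward the (made-up) 'snake' zone.
targets = {"lightshow": (1.0, 0.0, 0.0), "snake": (0.0, 0.0, 1.0)}
ray = pointing_ray(shoulder=(0.0, 1.4, 0.0), hand=(0.1, 1.4, 0.6))
print(closest_target(ray, targets))  # -> snake
```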

There are a bunch of things this dome can do, such as allowing the user to conduct an audio-visual light show, run a meditation program, or even play Snake and Pac-Man. You can check out these games and more in the videos after the break.

Robot stroller lets baby steer without mowing down other toddlers

We’ve seen strollers and car seats that have a steering wheel for the baby to play with (like in the opening of The Simpsons). But what we hadn’t seen is a stroller that allows baby to actually steer. You might think that putting a motorized vehicle in the hands of someone so young is an accident waiting to happen. But [Xandon Frogget] thought of that and used familiar hardware to add some safety features.

The stroller seen above is a tricycle setup, making it quite easy to add motors to the two rear wheels. These are controlled by a tablet which you can see nestled on the canopy of the stroller (look for the light reflected on the glass). This interfaces with two Kinect sensors, one pointing forward and the other pointing back. They continually scan the environment, looking for obstacles in the stroller’s path. You can see [Xandon's] little girl holding a Wii Wheel, which connects with the tablet to facilitate steering. A test run at the playground is embedded after the break.
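
No firmware is published, but the safety logic described boils down to letting the child steer while the depth cameras get a veto. A hedged sketch of that mixing, with the helper names and the stop distance purely illustrative:

```python
# Hypothetical sketch of the safety logic: the child steers, but the drive
# motors are cut whenever either Kinect sees an obstacle too close. Helper
# names and the 0.6 m stop distance are assumptions for illustration.
STOP_DISTANCE_M = 0.6

def wheel_speeds(steering, throttle):
    """Differential-drive mix: steering in [-1, 1], throttle in [0, 1]."""
    left = max(min(throttle * (1.0 + steering), 1.0), -1.0)
    right = max(min(throttle * (1.0 - steering), 1.0), -1.0)
    return left, right

def safe_speeds(steering, throttle, nearest_front_m, nearest_rear_m):
    """Zero the throttle if either depth camera reports a close obstacle."""
    if min(nearest_front_m, nearest_rear_m) < STOP_DISTANCE_M:
        throttle = 0.0
    return wheel_speeds(steering, throttle)

print(safe_speeds(0.3, 0.5, nearest_front_m=2.1, nearest_rear_m=1.8))  # rolls on
print(safe_speeds(0.3, 0.5, nearest_front_m=0.4, nearest_rear_m=1.8))  # stops
```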

A portable, WiFi-enabled Kinect

The builds using a Kinect as a 3D scanner just keep getting better and better. A team of researchers from the University of Bristol has portablized the Kinect by adding a battery, a single-board Linux computer, and a WiFi adapter. With their Mobile Kinect project, it’s now a snap to automatically map an environment without lugging a laptop around, or to give your next mobile robot an awesome vision system.
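
The team’s own software isn’t detailed here, but a portable Kinect plus WiFi naturally suggests streaming depth frames off the single-board computer to a beefier machine. A rough sketch of that idea using the open-source libfreenect Python bindings; the length-prefixed framing and the receiver’s address are assumptions:

```python
# Hypothetical sketch: capture depth frames on the single-board computer and
# stream them over WiFi to a workstation. Assumes the open-source libfreenect
# Python bindings ('freenect'); the length-prefixed framing is our own.
import socket
import struct

import freenect

def stream_depth(host, port=5005):
    with socket.create_connection((host, port)) as sock:
        while True:
            depth, _timestamp = freenect.sync_get_depth()  # 640x480 uint16 array
            payload = depth.tobytes()
            sock.sendall(struct.pack("!I", len(payload)) + payload)

if __name__ == "__main__":
    stream_depth("192.168.1.50")  # placeholder address of the receiving PC
```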

By making the Kinect portable, [Mike] et al. have made Microsoft’s 3D imaging device capable of much more than its present task of computing the volumetric space of the inside of a cabinet. The Reconstructme project allows the Kinect to be used as a hand-held 3D scanner, and Kintinuous can be used to create 3D models of entire houses, buildings, or caves.

There’s a lot that can be done with a portablized, WiFi’d Kinect, and hopefully a few builds replicating the team’s work (except for replacing the Gumstix board with a Raspi) will be showing up on HaD shortly.

Video after the break.

Help computer vision researchers, get a 3D model of your living room

Robots can easily make their way across a factory floor; with painted lines on the floor, a factory makes for an ideal environment for a robot to navigate. A much more difficult test of computer vision lies in your living room. Finding a way around a coffee table and not knocking over a lamp present a huge challenge for any autonomous robot. Researchers at the Royal Institute of Technology in Sweden are working on this problem, but they need your help.

[Alper Aydemir], [Rasmus Göransson] and Prof. [Patric Jensfelt] at the Centre for Autonomous Systems in Stockholm created Kinect@Home. The idea is simple: by modeling hundreds of living rooms in 3D, the computer vision and robotics researchers will have a fantastic library to train their algorithms.

To help out the Kinect@Home team, all that is needed is a Kinect, just like the one lying disused in your cupboard. After signing up on the Kinect@Home site, you’re able to create a 3D model of your living room, den, or office right in your browser. This 3D model is then added to the Kinect@Home library for CV researchers around the world.

Control your house by moving your arms like you’re directing traffic

This home automation project lets you flap your arms to turn things on and off. [Toon] and [Jiang] have been working on the concept as part of their Master’s thesis at university. It uses a 3D camera with some custom software to pick up your gestures. What we really like is the laser pointer which provides feedback. You can see a red dot on the wall which follows wherever he points. Each controllable device has a special area to which the dot will snap when the user is pointing close to it. Raising his other arm then turns the selected object on or off.
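
The thesis code isn’t linked, but the snapping behaviour can be pictured as intersecting the pointing ray with the wall and checking whether the hit lands near a device’s zone. A minimal sketch with made-up wall geometry, zone positions, and snap radius:

```python
# Hypothetical sketch of the snapping behaviour: project the pointing ray
# onto the wall plane, then snap to a device zone if the hit point lands
# close enough. Wall geometry, zone centers, and snap radius are made up.
import math

SNAP_RADIUS_M = 0.25
ZONES = {"lamp": (1.2, 1.5), "stereo": (2.4, 1.0)}  # (x, y) on the wall plane

def wall_hit(origin, direction, wall_z=3.0):
    """Intersect a pointing ray with the wall plane z = wall_z
    (assumes the user is pointing toward the wall, direction[2] > 0)."""
    t = (wall_z - origin[2]) / direction[2]
    return (origin[0] + t * direction[0], origin[1] + t * direction[1])

def snap(point):
    """Return the nearest zone name if the dot lands inside its snap radius."""
    name, center = min(ZONES.items(), key=lambda kv: math.dist(point, kv[1]))
    return name if math.dist(point, center) <= SNAP_RADIUS_M else None

hit = wall_hit(origin=(0.0, 1.4, 0.0), direction=(0.35, 0.05, 0.9))
print(hit, snap(hit))  # lands near the 'lamp' zone
```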

Take a look at the two videos after the break to get a good overview of the concept. We’d love to see some type of laser projector used instead of just a single dot. This way you could have a pop-up menu system. Imagine getting a virtual remote control on the wall for skipping to the next audio track, adjusting the volume, or changing the TV channel.

Building a better Kinect with a… pager motor?

Fresh from Microsoft Research is an ingenious way to reduce interference and decrease the error in a Kinect. Bonus: the technique only requires a motor with an offset weight, or just an oversized version of the vibration motor found in a pager.

The Kinect was the first commodity 3D depth sensor of its kind, and its tracking really isn’t that good. In every Kinect demo we’ve ever seen, there are always errors in the 3D tracking or missing data in the point cloud. The Shake ‘n’ Sense, as Microsoft Research calls it, does away with these problems simply by vibrating the IR projector and camera with a single motor.
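
One way to see why this matters is to measure how much of a depth frame comes back empty. The sketch below is not Microsoft’s code, just a quick way to quantify the missing data; the ‘no reading’ sentinel value depends on the depth format you capture in (2047 in libfreenect’s 11-bit mode, for example):

```python
# Hypothetical sketch: quantify how much of a depth frame is unusable by
# counting 'no reading' sentinel pixels. The sentinel depends on the depth
# format (2047 in libfreenect's 11-bit mode); the frame below is synthetic.
import numpy as np

def invalid_fraction(depth_frame, sentinel=2047):
    """Fraction of pixels for which the sensor returned no depth estimate."""
    return np.count_nonzero(depth_frame == sentinel) / depth_frame.size

# Synthetic 640x480 frame, just to exercise the function; compare real frames
# captured with the vibration motor off and on to see the improvement.
frame = np.random.default_rng(0).integers(0, 2048, size=(480, 640), dtype=np.uint16)
print(f"invalid pixels: {invalid_fraction(frame):.2%}")
```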

In addition to getting high quality point clouds from a Kinect, this technique also allows for multiple Kinects to be used in the same room. In the video (and title pic for this post), you can see a guy walking around a room filled with beach balls in 3D, captured from an array of four Kinects.

This opens the door to a whole lot of builds that were impossible with the current iteration of the Kinect, but we’re thinking this is far too easy and too clever not to have been thought of before. We’d love to see some independent verification of this technique, so if you’ve got a Kinect project sitting around, strap a motor onto it, make a video, and send it in.

Robot trash can catches anything you throw near it

This guy is about to toss the blue ball halfway between the bookshelf and the waste basket. By the time it gets there, the waste basket will have moved into position to catch the ball perfectly. It’ll do the same for just about anything you throw.

We’re unable to read the captions, but it looks like this may have been made as part of a commercial, which is shown in the first few seconds of the video after the break. From there we see the development of a locomotive mechanism that will fit into the bottom of the bin. It starts as a single swivel wheel, but gets more complicated quite quickly. Once the low-profile three-wheeler is milled and assembled, it’s time to start writing the code to translate input from a Kinect 3D camera and extrapolate the position for catching the trash. The final result seems to do this perfectly.
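
The catch prediction isn’t spelled out in the video, but the usual approach is to fit a ballistic trajectory to the first few tracked positions and extrapolate where it crosses the rim height. A rough sketch of that idea, with the tracking input, units, and rim height all assumed:

```python
# Hypothetical sketch of the catch prediction: fit a ballistic path to the
# first few tracked 3D positions and extrapolate where it crosses the rim
# height. Tracking input, units, and the rim height are all assumptions.
import numpy as np

G = 9.81          # m/s^2
RIM_HEIGHT = 0.3  # assumed height of the basket opening, in meters

def predict_landing(times, positions):
    """times: (n,) seconds; positions: (n, 3) meters as [x, y, z], z up.
    Returns the (x, y) where the fitted trajectory reaches RIM_HEIGHT."""
    t = np.asarray(times)
    p = np.asarray(positions)
    vx, x0 = np.polyfit(t, p[:, 0], 1)         # x(t) is linear
    vy, y0 = np.polyfit(t, p[:, 1], 1)         # y(t) is linear
    a, b, c = np.polyfit(t, p[:, 2], 2)        # z(t) ~ a*t^2 + b*t + c
    roots = np.roots([a, b, c - RIM_HEIGHT])   # when does z(t) hit the rim?
    t_hit = max(r.real for r in roots if abs(r.imag) < 1e-6)
    return x0 + vx * t_hit, y0 + vy * t_hit

# Synthetic throw starting at (0, 0, 1.5 m) with velocity (2, 1, 3) m/s.
ts = np.linspace(0.0, 0.25, 6)
ps = np.column_stack([2 * ts, 1 * ts, 1.5 + 3 * ts - 0.5 * G * ts ** 2])
print(predict_landing(ts, ps))
```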
