We’ve seen strollers and car seats that have a steering wheel for the baby to play with (like in the opening of The Simpsons). But what we hadn’t seen is a stroller that allows baby to actually steer. You might think that putting a motorized vehicle in the hands of someone so young is an accident waiting to happen. But [Xandon Frogget] thought of that and used familiar hardware to add some safety features.
The stroller seen above is a tricycle setup, making it quite easy to add motors to the two rear wheels. These are controlled by a tablet which you can see nestled on the canopy of the stroller (look for the light reflected on the glass). This interfaces with two Kinect sensors, one pointing forward and the other pointing back. They continually scan the environment, looking for obstacles in the stroller’s path. You can see [Xandon’s] little girl holding a Wii Wheel, which connects with the tablet to facilitate steering. A test run at the playground is embedded after the break.
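There isn’t much detail on how the collision avoidance works, but the basic idea is easy to sketch: watch the forward depth frame and kill the motors when anything gets too close. Here’s a minimal sketch, assuming libfreenect’s Python bindings for the Kinect; the thresholds, the registered-depth format, and the drive() motor interface are our assumptions, not [Xandon]’s actual code.

```python
import numpy as np
import freenect  # libfreenect Python bindings

STOP_DISTANCE_MM = 800   # assumed threshold: ~0.8 m counts as "too close"
MIN_BLOB_PIXELS = 500    # ignore small patches of sensor noise

def obstacle_ahead():
    # Registered depth gives per-pixel distance in millimeters (640x480)
    depth, _ = freenect.sync_get_depth(format=freenect.DEPTH_REGISTERED)
    band = depth[160:320, 160:480]        # center band: the stroller's path
    too_close = (band > 0) & (band < STOP_DISTANCE_MM)
    return np.count_nonzero(too_close) > MIN_BLOB_PIXELS

def drive(left, right):
    """Hypothetical motor interface -- stand-in for the real controller."""

while True:
    if obstacle_ahead():
        drive(0, 0)   # something in the way: stop both rear wheels
    # ...otherwise pass the Wii Wheel steering through to the motors
```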
Continue reading “Robot stroller lets baby steer without mowing down other toddlers”
The builds using a Kinect as a 3D scanner just keep getting better and better. A team of researchers from the University of Bristol has portablized the Kinect by adding a battery, a single-board Linux computer, and a WiFi adapter. With their Mobile Kinect project, it’s now a snap to automatically map an environment without lugging a laptop around, or to give your next mobile robot an awesome vision system.
By making the Kinect portable, [Mike] et al made Microsoft’s 3D imaging device capable of much more than its usual task of computing the volumetric space inside a cabinet. The Reconstructme project allows the Kinect to be used as a hand-held 3D scanner, and Kintinuous can be used to create a 3D model of entire houses, buildings, or caves.
There’s a lot that can be done with a portablized, WiFi’d Kinect, and hopefully a few builds replicating the team’s work (swapping the Gumstix board for a Raspi) will be showing up on HaD shortly.
Video after the break.
Continue reading “A portable, WiFi-enabled Kinect”
Robots can easily make their way across a factory floor; with painted lines to follow, a factory is an ideal environment for a robot to navigate. A much more difficult test of computer vision lies in your living room. Finding a way around a coffee table without knocking over a lamp presents a huge challenge for any autonomous robot. Researchers at the Royal Institute of Technology in Sweden are working on this problem, but they need your help.
[Alper Aydemir], [Rasmus Göransson] and Prof. [Patric Jensfelt] at the Centre for Autonomous Systems in Stockholm created Kinect@Home. The idea is simple: by modeling hundreds of living rooms in 3D, the computer vision and robotics researchers will have a fantastic library to train their algorithms.
To help out the Kinect@Home team, all that is needed is a Kinect, just like the one lying disused in your cupboard. After signing up on the Kinect@Home site, you’re able to create a 3D model of your living room, den, or office right in your browser. This 3D model is then added to the Kinect@Home library for CV researchers around the world.
This home automation project lets you flap your arms to turn things on and off. [Toon] and [Jiang] have been working on the concept as part of their Master’s thesis at university. It uses a 3D camera with some custom software to pick up your gestures. What we really like is the laser pointer, which provides feedback: you can see a red dot on the wall that follows wherever the user points. Each controllable device has a special area to which the dot will snap when the user is pointing close to it. By raising his other arm, the user can turn the selected object on or off.
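Here’s a rough sketch of that snap-to-target logic, assuming each device’s hotspot is a known 2D position on the wall; the device names, positions, and snap radius below are all invented for illustration.

```python
import numpy as np

DEVICES = {                       # hotspot centers on the wall, in meters
    "lamp":   np.array([1.2, 1.5]),
    "stereo": np.array([3.0, 1.0]),
}
SNAP_RADIUS = 0.4                 # how close the dot must get to lock on

def snap(aim_point):
    """Return (selected_device, dot_position) for the laser to draw."""
    name, center = min(DEVICES.items(),
                       key=lambda kv: np.linalg.norm(aim_point - kv[1]))
    if np.linalg.norm(aim_point - center) < SNAP_RADIUS:
        return name, center       # close enough: dot jumps to the hotspot
    return None, aim_point        # nothing selected: dot follows the arm

device, dot = snap(np.array([1.3, 1.4]))
if device is not None:
    print(f"selected {device}; raise the other arm to toggle it")
```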
Take a look at the two videos after the break to get a good overview of the concept. We’d love to see some type of laser projector used instead of just a single dot. This way you could have a pop-up menu system. Imagine getting a virtual remote control on the wall for skipping to the next audio track, adjusting the volume, or changing the TV channel.
Continue reading “Control your house by moving your arms like you’re directing traffic”
Fresh from Microsoft Research is an ingenious way to reduce interference and decrease the error in a Kinect. Bonus: the technique only requires a motor with an offset weight, or just an oversized version of the vibration motor found in a pager.
As the first of this new breed of commodity 3D depth sensors, the Kinect really doesn’t track all that well. In every Kinect demo we’ve ever seen, there are always errors in the 3D tracking or missing data in the point cloud. The Shake ‘n’ Sense, as Microsoft Research calls it, does away with these problems simply by vibrating the IR projector and camera with a single motor. Because the projector and camera shake together, each unit’s own dot pattern stays sharp in its own view, while the patterns from any other Kinects in the room are blurred out by the motion.
In addition to getting high quality point clouds from a Kinect, this technique also allows for multiple Kinects to be used in the same room. In the video (and title pic for this post), you can see a guy walking around a room filled with beach balls in 3D, captured from an array of four Kinects.
This opens up the doors to a whole lot of builds that were impossible with the current iteration of the Kinect, but we’re thinking this is far too easy and too clever not to have been thought of before. We’d love to see some independent verification of this technique, so if you’ve got a Kinect project sitting around, strap a motor onto it, make a video, and send it in.
Continue reading “Building a better Kinect with a… pager motor?”
This guy is about to toss the blue ball halfway between the bookshelf and the waste basket. By the time it gets there, the waste basket will have moved into position to catch the ball perfectly. It’ll do the same for just about anything you throw.
We’re unable to read the captions, but it looks like this may have been made as part of a commercial, which is shown in the first few seconds of the video after the break. From there we see the development of a locomotive mechanism which will fit into the bottom of the bin. It starts as a single swivel wheel but gets more complicated quite quickly. Once the low-profile three-wheeler is milled and assembled, it’s time to start writing the code to translate input from a Kinect 3D camera and extrapolate the position for catching the trash. The final result seems to do this perfectly.
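The trash-catching math boils down to fitting a ballistic arc to a few tracked ball positions and solving for where it crosses the rim height. A minimal sketch of that extrapolation, with invented sample data (the build’s actual code isn’t public as far as we know):

```python
import numpy as np

def predict_landing(times, positions, rim_height=0.3):
    """times: (N,); positions: (N, 3) as x, y, z in meters, z up."""
    t = np.asarray(times, float)
    p = np.asarray(positions, float)
    cx = np.polyfit(t, p[:, 0], 1)   # x and y are roughly linear in time
    cy = np.polyfit(t, p[:, 1], 1)
    cz = np.polyfit(t, p[:, 2], 2)   # z is quadratic under gravity
    # Solve z(t) = rim_height and take the later (descending) crossing
    roots = np.roots(cz - [0, 0, rim_height])
    t_hit = max(r.real for r in roots if abs(r.imag) < 1e-9)
    return np.polyval(cx, t_hit), np.polyval(cy, t_hit)

# A few Kinect samples early in flight are enough to start the bin moving
x, y = predict_landing(
    [0.00, 0.05, 0.10, 0.15],
    [[0.0, 0.0, 1.50], [0.1, 0.2, 1.70], [0.2, 0.4, 1.80], [0.3, 0.6, 1.85]])
print(f"drive the bin to ({x:.2f}, {y:.2f})")
```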
Continue reading “Robot trash can catches anything you throw near it”
The Kinect is awesome, but if you want to do anything at a higher resolution than detecting a person’s limbs, you’re out of luck. [Chris McCormick] over at CogniMem has a great solution to this problem: use a neural network on a chip to recognize fingers, with hardware already connected to your XBox.
The build uses the very cool CogniMem CM1K neural network on a chip trained to tell the difference between counting from one to four on a single hand, as well as an ‘a-okay’ sign, Vulcan greeting (shown above), and rocking out at a [Dio] concert. As [Chris] shows us in the video, these finger gestures can be used to draw on a screen and move objects using only an open palm and closed fist; not too far off from the Minority Report and Iron Man UIs.
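The CM1K works roughly like a bank of radial-basis-function neurons: each one stores a prototype vector and claims any input that falls inside its influence field. Here’s a toy software stand-in for that scheme, with made-up hand-shape feature vectors; the chip’s actual feature extraction from the Kinect data isn’t shown here.

```python
import numpy as np

class RBFClassifier:
    """Toy stand-in for a CM1K-style nearest-prototype classifier."""
    def __init__(self, influence_field=50.0):
        self.prototypes, self.labels = [], []
        self.field = influence_field

    def learn(self, vector, label):
        self.prototypes.append(np.asarray(vector, float))
        self.labels.append(label)

    def classify(self, vector):
        # L1 (Manhattan) distance, the CM1K's default norm
        dists = [np.abs(p - vector).sum() for p in self.prototypes]
        best = int(np.argmin(dists))
        if dists[best] < self.field:   # inside the influence field
            return self.labels[best]
        return None                     # no neuron fired: "unknown"

net = RBFClassifier()
net.learn([5, 40, 12], "one finger")    # invented feature vectors
net.learn([9, 80, 30], "open palm")
print(net.classify([8, 75, 28]))        # -> "open palm"
```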
If you’d like to duplicate this build, we found the CM1K neural network chip available here for a bit more than we’d be willing to pay. A neural net on a chip is an exceedingly cool device, but it looks like this build will have to wait for the Kinect 2 to make it down to the consumer and hobbyist arena.
You can check out the videos of Kinect finger recognition in action after the break with World of Goo and Google Maps.
Continue reading “Finger recognition on the Kinect”