3D Scanning Entire Rooms With A Kinect

Almost by definition, the coolest technology and bleeding-edge research is locked away in universities. While this is great for post-docs and their grant-writing abilities, it’s not the best system for people who want to use this technology. A few years ago, and many times since then, we’ve seen a bit of research that turned a Kinect into a 3D mapping camera for extremely large areas. This is the future of VR, but a proper distribution has been held up by licenses and a general IP rights rigamarole. Now, the source for this technology, Kintinuous and ElasticFusion, is available on GitHub, free for everyone to (non-commercially) use.

We’ve seen Kintinuous a few times before – first in 2012 where the possibilities for mapping large areas with a Kinect were shown off, then an improvement that mapped a 300-meter-long path through a building. With the introduction of the Oculus Rift, inhabiting these virtual scanned spaces became even cooler. If there’s a future in virtual reality, we’ll need a way to capture real life and make it digital. So far, this is the only software stack that does it on a large scale.

If you’re thinking about using a Raspberry Pi to take Kintinuous on the road, you might want to look at the hardware requirements. A very fast Nvidia GPU and a fast CPU are required for good results. You also won’t be able to use it with robots running ROS; these bits of software simply don’t work together. Still, we now have the source for Kintinuous and ElasticFusion, and I’m sure more than a few people are interested in improving the code and bringing it to other systems.

You can check out a few videos of ElasticFusion and Kintinuous below.

Continue reading “3D Scanning Entire Rooms With A Kinect”

3D Popup Cards From 3D Photos

The world of 3D printing is growing rapidly. Some might say it’s growing layer by layer. But there was one aspect [Ken] wanted to improve upon: 3D photos. Specifically, printing a 3D pop-up-style photograph that collapses flat to save space so you can easily carry it around.

It’s been possible to take 3D scans of objects and render a 3D print for a while now, but [Ken] wanted something a little more portable. His 3D pop-up photographs are similar to pop-up books for children, in that when the page is unfolded a three-dimensional shape distances itself from the background.

The process works by taking a normal 3D photo. With the help of some software, sets of points that are roughly equidistant from the camera are grouped into layers. From there, the layers can be printed in the old two-dimensional fashion and then connected to achieve the 3D effect. Using a Kinect or similar depth camera would allow for any number of layers and plenty of variations on the method. So we’re throwing down the gauntlet: we want to see an arms race of pop-up photographs. Who will be the one to have the most layers, and who will find a photograph subject that makes the most sense in this medium? Remember how cool those vector-cut topographical maps were? There must be a similarly impressive application for this!
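If you want to experiment with the idea, the layer-slicing step only takes a few lines of NumPy and OpenCV. Here’s a rough sketch of the general approach, not [Ken]’s actual code; the depth image filename and the number of layers are our own placeholders:

# Sketch: slice a depth image into printable pop-up layers.
# Assumes a 16-bit depth image (e.g. exported from a Kinect) on disk.
import cv2
import numpy as np

depth = cv2.imread("scene_depth.png", cv2.IMREAD_UNCHANGED)
num_layers = 5                                  # how many card layers to cut

valid = depth > 0                               # 0 means "no depth reading"
near, far = depth[valid].min(), depth[valid].max()
edges = np.linspace(near, far, num_layers + 1)
edges[-1] += 1                                  # make the farthest bin inclusive

for i in range(num_layers):
    # Everything between two depth planes lands on the same card layer.
    mask = valid & (depth >= edges[i]) & (depth < edges[i + 1])
    layer = np.where(mask, 255, 0).astype(np.uint8)
    cv2.imwrite("layer_%d.png" % i, layer)      # print, cut out, and stack these

Each output image is the silhouette of one depth band; print them, cut around the white regions, and hinge them to the backing card at increasing offsets.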

[Ken] isn’t a stranger around these parts. He was previously featured for his unique weather display and his semi-real-life Mario Kart, so be sure to check those out as well.

Head Gesture Tracking Helps Limited Mobility Students

There is a lot of helpful technology for people with mobility issues. Even something most of us wouldn’t think twice about, like turning on a lamp or controlling a computer, can make a world of difference to someone who can’t move around as easily. Luckily, [Matt] has been working on using webcams and depth cameras to allow someone to do just that.

[Matt] found that webcams tend to be less obtrusive than depth cameras (like the Kinect), but they are limited in their ability to distinguish individual users and, of course, don’t have the same 3D capability. With either technology, the software implementation is similar: the camera detects head motion and controls software by emulating keystrokes. The depth cameras are a little more user-friendly, though, and allow users to move in whichever way feels comfortable for them.
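The webcam version boils down to finding the head, watching which way it moves, and faking a key press. Here’s a bare-bones sketch using OpenCV’s stock face detector and pyautogui; it’s our illustration of the general approach, not [Matt]’s code, and the pixel thresholds and key mappings are made up:

# Sketch: map horizontal head movement from a webcam to arrow-key presses.
import cv2
import pyautogui

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)
prev_x = None

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        cx = x + w // 2                      # horizontal center of the face
        if prev_x is not None:
            if cx - prev_x > 20:             # head moved ~20 px one way
                pyautogui.press("right")
            elif prev_x - cx > 20:           # ...or the other
                pyautogui.press("left")
        prev_x = cx
    cv2.imshow("head tracking", frame)
    if cv2.waitKey(30) == 27:                # Esc quits
        break

cam.release()

Swap pyautogui.press() for whatever your target application expects, and the same loop handles nodding up and down by watching the y coordinate instead.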

This isn’t the first time something like a Kinect has been used to track motion, but for [Matt] and his work at Beaumont College it has been an important area of ongoing research. It’s especially helpful since the campus has many devices (like lamps) on networked switches, so this software can be used to help people interact much more easily with the physical world. This project could be very useful to anyone curious about tracking motion, even if they’re not using it for mobility reasons.

Continue reading “Head Gesture Tracking Helps Limited Mobility Students”

Augmented Reality Sandbox Using A Kinect

Want to make all your 5-year-old son’s friends jealous? What if he told them he could make REAL volcanoes in his sandbox? Will this be the future of sandboxes, digitally enhanced with augmented reality?

It’s not actually that hard to set up! The system consists of a good computer running Linux, a Kinect, a projector, a sandbox, and sand. And that’s it! The University of California, Davis (UC Davis) has set up a few of these systems now to teach children about geography, which is a really cool demonstration of both 3D scanning and projection mapping. As you can see in the animated gif above, the Kinect tracks the topography of the sand, and the projector then maps its “reality” onto it. In this case, a mini volcano.
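Under the hood, the core trick is to read the Kinect’s depth map, turn height into a color ramp, and throw that straight back out through the projector. Here’s a minimal sketch using the libfreenect Python bindings and OpenCV; it only illustrates the idea, while the real UC Davis software adds calibration, smoothing, and even water-flow simulation on top:

# Sketch: turn a Kinect depth map into colored "topography" for a projector.
import cv2
import freenect                  # libfreenect Python bindings
import numpy as np

cv2.namedWindow("topo", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("topo", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    depth, _ = freenect.sync_get_depth()       # one raw 11-bit depth frame
    # Normalize to 0-255; with the inversion below, sand that sits closer to
    # the overhead Kinect (taller piles) gets the "hotter" colors.
    norm = cv2.normalize(depth.astype(np.float32), None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    topo = cv2.applyColorMap(255 - norm, cv2.COLORMAP_JET)
    cv2.imshow("topo", topo)                   # aim this window at the projector
    if cv2.waitKey(1) == 27:                   # Esc quits
        break

The only fiddly part left is lining up what the projector draws with what the Kinect sees, which is where the real project’s calibration step comes in.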

Continue reading “Augmented Reality Sandbox Using A Kinect”

Virtual Physical Rehab With Kinect

Web sites have figured out that “gamifying” things increases participation. For example, you’ve probably boosted your postings on a forum just to get a senior contributor badge (that isn’t even really a badge, but a picture of one). Now [Yash Soni] has brought the same idea to physical therapy.

[Yash]’s father had to go through boring physical therapy to treat a slipped disk, which prompted [Yash] to develop KinectoTherapy, a project that aims to make therapy more like a video game. They claim it can be used to help many types of patients, ranging from stroke victims to those with cerebral palsy.

Patients can see their onscreen avatar duplicate their motions, and the system provides audio and visual feedback when a move is made correctly or incorrectly. Statistical data is also available to the patient’s health care professionals.
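The scoring side of something like this can be surprisingly simple once a skeleton tracker hands you joint coordinates. Here’s a toy sketch of the idea, comparing one elbow angle against a target range; the joint positions, target range, and feedback messages are all our own made-up examples, not anything from KinectoTherapy:

# Sketch: score one rehab exercise by checking an elbow angle against a target.
# Joint coordinates are assumed to come from a skeleton tracker (Kinect SDK,
# OpenNI/NiTE, etc.); the numbers below are placeholders for illustration.
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, each an (x, y, z)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(p * q for p, q in zip(v1, v2))
    mag = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(dot / mag))

def check_arm_raise(shoulder, elbow, wrist, target=(150, 180)):
    angle = joint_angle(shoulder, elbow, wrist)
    return target[0] <= angle <= target[1], angle

# Fake joint positions in meters, as a skeleton tracker might report them:
ok, angle = check_arm_raise((0.0, 1.4, 2.0), (0.3, 1.4, 2.0), (0.6, 1.45, 2.0))
print("Nice, arm fully extended!" if ok else "Straighten the elbow (%.0f deg)" % angle)

Log the per-repetition angles over time and you have exactly the kind of statistics a therapist can review between sessions.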

Continue reading “Virtual Physical Rehab With Kinect”

Portabilizing The Kinect

Way back when the Kinect was first released, there was a realization that this device would be the future of everything 3D. It was augmented reality, it was a new computer interface, it was a cool sensor for robotics applications, and it was a 3D scanner. When the first open source driver for the Kinect was released, we were assured that this is how we would get 3D data from real objects into a computer.

Since then, not much has happened. We’re not using the Kinect for a UI, potato gamers were horrified they would be forced to buy the Kinect 2 with the new Xbox, and you’d be hard pressed to find a Kinect in a robot. 3D scanning is the only field where the Kinect hasn’t been overhyped, and even there it’s still a relatively complex setup.

This doesn’t mean a Kinect 3D scanner isn’t an object of desire for some people, or that it’s impossible to build a portabilized version. [Mario]’s girlfriend works as an archaeologist, and having a tool to scan objects and places in 3D would be great for her. Because of this, [Mario] is building a handheld 3D scanner with a Raspberry Pi 2 and a Kinect.

This isn’t the first time we’ve seen a portabilized Kinect. Way back in 2012, the Kinect was made handheld with the help of a Gumstix board. Since then, a million tiny ARM single board computers have popped up, and battery packs are readily available. It was only a matter of time until someone stepped up to the plate, and [Mario] was the guy.

The problem facing [Mario] isn’t hardware. Anyone can pick up a Kinect at GameStop, the Raspberry Pi 2 should be more than capable of reading the Kinect’s depth sensor, and everything can be tied together with a few 3D printed parts. The real problem is the software, and so far [Mario] has libfreenect compiling cleanly on the Pi 2. The project still requires a lot of additional libraries, including some OpenCV components, but [Mario] has everything working so far.
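Once libfreenect builds, pulling frames is the easy part. A first hello-world on the Pi might look something like this sketch using the libfreenect Python wrapper; it’s our example, not [Mario]’s code:

# Sketch: grab one depth frame from the Kinect via libfreenect and save it.
import cv2
import freenect
import numpy as np

depth, _ = freenect.sync_get_depth()          # 480x640 array of raw 11-bit values
print("raw depth at center pixel:", depth[240, 320])

# Stretch 11 bits down to 8 so it can be viewed as an ordinary grayscale image.
viewable = (depth.astype(np.float32) / 2047 * 255).astype(np.uint8)
cv2.imwrite("depth.png", viewable)

From there, the heavy lifting is stitching many such frames into a single model, which is exactly the part that needs those extra libraries.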

You can check out his video of the proof of concept below.

Continue reading “Portabilizing The Kinect”

Interactive Fur Mirror Follows Your Every Move

We think artist [Daniel Rozin] spent a bit too much time wondering if he could make an interactive fur mirror, without wondering if he should. The result is… strange — to say the least.

It’s called the PomPom Mirror, and it’s one of many interactive installations in the Descent With Modification exhibition at Bitforms; there’s even a super cute flock of penguins that spin around to create the same effect.

The mirror is 4 by 4 feet and 18 inches deep. It has 928 faux-fur pom-poms controlled by 464 motors, each motor effectively having an “on” and an “off” state. A Microsoft Kinect tracks movement and creates a black-and-white binary image of what it sees. The artist also programmed in a few animation sequences that make the mirror come alive, like some weird furry alien / plant thing…
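The image-processing side of that pipeline is pretty small: threshold the depth image into a viewer/background silhouette, then shrink it down to one pixel per motor. Here’s a rough sketch of that step; the 29x16 grid (29 x 16 = 464) and the depth threshold are our assumptions, not details from [Daniel]’s build:

# Sketch: Kinect depth -> binary silhouette -> one on/off bit per motor.
import cv2
import freenect
import numpy as np

GRID_W, GRID_H = 29, 16            # assumed motor layout; 29 * 16 = 464

depth, _ = freenect.sync_get_depth()
# Raw 11-bit depth: smaller values are closer, 2047 means "no reading".
# Anything nearer than the threshold counts as the viewer; tune it for the room.
person = (depth < 700).astype(np.uint8) * 255

# Downsample the silhouette so each cell drives one motor (one pom-pom pair).
grid = cv2.resize(person, (GRID_W, GRID_H), interpolation=cv2.INTER_AREA)
motor_states = grid > 127          # True = push that pom-pom forward

for row in motor_states:
    print("".join("#" if on else "." for on in row))

In the real installation those 464 bits would head out to motor drivers instead of the terminal, plus whatever smoothing and animation sequences [Daniel] layered on top.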

Continue reading “Interactive Fur Mirror Follows Your Every Move”