Using RealSense Cameras With OS X And Linux

The original Microsoft Kinect was a revolution in computer vision. For less than one hundred dollars, the Kinect gave everyone a webcam with a depth sensor. If you’re doing anything with robots, 3D scanning, or anything else where a computer needs to know where it is in 3D space, it’s awesome. These depth-mapping cameras have improved over the years, with the latest and most capable hardware being Intel’s RealSense 3D camera.

Despite being a very capable depth camera, the RealSense has no official support for Linux or OS X. Researchers, roboticists, and IoT developers are slightly miffed about this, and it seems like Intel doesn’t care about people using its hardware on platforms that aren’t Windows.

Now, finally, that’s changed. A few developers have taken it upon themselves to build a cross-platform library for the F200, SR300, and R200 Intel RealSense depth cameras.

The librealsense library features proper RealSense camera support for Linux, OS X, and Windows and provides all the functionality of the official Intel SDK. This functionality includes native depth, color, and infrared streams, synthetic streams for rectified images, calibration information, and the most interesting feature: multi-camera capture.
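To give a sense of what the cross-platform API looks like, here’s a minimal sketch of pulling depth frames through librealsense’s C++ interface. This is our illustration based on the library’s rs.hpp interface, not an official sample, and the exact calls may shift between releases:

```cpp
// Minimal depth-streaming sketch against librealsense's C++ API.
// Build (Linux/OS X), roughly: g++ depth.cpp -lrealsense -o depth
#include <librealsense/rs.hpp>
#include <cstdint>
#include <cstdio>

int main() try {
    rs::context ctx;
    if (ctx.get_device_count() == 0) {
        std::printf("No RealSense camera detected\n");
        return 1;
    }
    rs::device * dev = ctx.get_device(0);

    // Ask for a native depth stream; the library picks a supported mode
    dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
    dev->start();

    rs::intrinsics intrin = dev->get_stream_intrinsics(rs::stream::depth);
    float scale = dev->get_depth_scale(); // device units -> meters

    for (int i = 0; i < 100; ++i) {
        dev->wait_for_frames();
        const uint16_t * depth =
            (const uint16_t *)dev->get_frame_data(rs::stream::depth);
        // Report the range to whatever is in front of the center pixel
        uint16_t d = depth[(intrin.height / 2) * intrin.width + intrin.width / 2];
        std::printf("center pixel: %.3f m\n", d * scale);
    }
    return 0;
} catch (const rs::error & e) {
    std::printf("librealsense error: %s\n", e.what());
    return 1;
}
```

The same context object exposes the color and infrared streams, and enumerating more than one device from it is how the multi-camera capture works.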

The hardware required to use the RealSense camera is somewhat lightweight – any recent laptop should be able to capture depth images with a RealSense camera. The camera itself requires USB 3, though, so you won’t be building a 3D scanner with a RealSense camera and a Raspberry Pi quite yet. Still, it’s the latest advancement for giving robots 3D vision and building cheap, portable 3D scanners.

Polarizing 3D Scanner Gives Amazing Results

What if you could take a cheap 3D sensor like a Kinect and increase its effectiveness by three orders of magnitude? The Kinect is great, of course, but it does have a limited resolution. To get around that limit, MIT researchers are using polarized light measurements to deduce 3D forms.

The Fresnel equations describe how an object’s shape changes the polarization of the light it reflects, and the researchers use the received polarization to infer that shape. The polarizing sensor is nothing more than a DSLR camera with a polarizing filter, and scanning resolution is down to 300 microns.

The problem with the Fresnel equations is that they’re ambiguous: a single measurement of polarization doesn’t uniquely identify the shape. The novel work here is to use coarse depth information from sensors like the Kinect to select from the alternatives.
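For the curious, one common form of this relationship from the shape-from-polarization literature (this is the standard diffuse-reflection model for a dielectric with refractive index n, not necessarily the exact equations the MIT team used) ties the measured degree of polarization ρ to the surface’s zenith angle θ:

```latex
\rho(\theta) =
  \frac{\left(n - \tfrac{1}{n}\right)^{2}\sin^{2}\theta}
       {2 + 2n^{2} - \left(n + \tfrac{1}{n}\right)^{2}\sin^{2}\theta
          + 4\cos\theta\,\sqrt{n^{2} - \sin^{2}\theta}}
```

Inverting ρ recovers the zenith angle, but the azimuth recovered from the polarizer phase is only determined up to a 180° flip, so every pixel has at least two candidate surface normals. That is exactly the ambiguity the coarse Kinect depth map is used to break.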


3D Scanning Entire Rooms With A Kinect

Almost by definition, the coolest technology and bleeding-edge research is locked away in universities. While this is great for post-docs and their grant-writing abilities, it’s not the best system for people who want to use this technology. A few years ago, and many times since then, we’ve seen a bit of research that turned a Kinect into a 3D mapping camera for extremely large areas. This is the future of VR, but a proper distribution has been held up by licenses and a general IP rights rigamarole. Now the source code for this technology, Kintinuous and ElasticFusion, is available on Github, free for everyone to (non-commercially) use.

We’ve seen Kintinuous a few times before – first in 2012, when the possibilities for mapping large areas with a Kinect were shown off, then an improvement that mapped a 300 meter long path through a building. With the introduction of the Oculus Rift, inhabiting these virtual scanned spaces became even cooler. If there’s a future in virtual reality, we’ll need a way to capture real life and make it digital. So far, this is the only software stack that does it on a large scale.

If you’re thinking about using a Raspberry Pi to take Kintinuous on the road, you might want to look at the hardware requirements. A very fast Nvidia GPU and a fast CPU are required for good results. You also won’t be able to use it with robots running ROS; these bits of software simply don’t work together. Still, we now have the source for Kintinuous and ElasticFusion, and I’m sure more than a few people are interested in improving the code and bringing it to other systems.

You can check out a few videos of ElasticFusion and Kintinuous below.


3D Popup Cards From 3D Photos

The world of 3D printing is growing rapidly. Some might say it’s growing layer by layer. But one aspect [Ken] wanted to improve upon was the 3D photo. Specifically, printing a 3D pop-up-style photograph that collapses flat to save space so you can easily carry it around.

It’s been possible to take 3D scans of objects and render a 3D print for a while now, but [Ken] wanted something a little more portable. His 3D pop-up photographs are similar to pop-up books for children, in that when the page is unfolded, a three-dimensional shape rises away from the background.

The process works by taking a normal 3D photo. With the help of some software, sets of points that are equidistant from the camera are grouped into layers. From there, they can be printed in the old 2-dimensional fashion and then connected to achieve the 3D effect (we sketch that layering step below). Using a Kinect or similar device would allow for any number of layers and ways of using this method. So we’re throwing down the gauntlet: we want to see an arms race of pop-up photographs. Who will be the one to have the most layers, and who will find a photograph subject that makes the most sense in this medium? Remember how cool those vector-cut topographical maps were? There must be a similarly impressive application for this!
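As a back-of-the-napkin illustration of that layering step (our sketch, not [Ken]’s actual software), here’s how you might bin a depth image into a handful of printable layers with OpenCV. The input file name and layer count are made up:

```cpp
// Quantize a depth map into a few printable pop-up layers (illustrative only).
// Build, roughly: g++ layers.cpp `pkg-config --cflags --libs opencv4` -o layers
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // Hypothetical 16-bit depth image exported from a Kinect-style capture
    cv::Mat depth = cv::imread("photo_depth.png", cv::IMREAD_UNCHANGED);
    if (depth.empty()) { std::printf("no depth image\n"); return 1; }

    double dmin, dmax;
    cv::minMaxLoc(depth, &dmin, &dmax);

    const int layers = 5; // arbitrary; more layers means a deeper pop-up
    double step = (dmax - dmin) / layers;

    for (int i = 0; i < layers; ++i) {
        // Everything inside this depth band becomes one cut-out layer
        // (the very farthest pixel lands just outside the last band; a real
        // tool would make that band inclusive)
        cv::Mat band = (depth >= dmin + i * step) & (depth < dmin + (i + 1) * step);
        cv::imwrite(cv::format("layer_%d.png", i), band);
    }
    return 0;
}
```

Each output mask is a silhouette you could print, cut out, and mount at its own standoff distance from the backing page.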

[Ken] isn’t a stranger around these parts. He was previously featured for his unique weather display and his semi-real-life Mario Kart, so be sure to check those out as well.

Head Gesture Tracking Helps Limited Mobility Students

There is a lot of helpful technology for people with mobility issues. Even help with something most of us wouldn’t think twice about, like turning on a lamp or controlling a computer, can make a world of difference to someone who can’t move around as easily. Luckily, [Matt] has been working on using webcams and depth cameras to allow someone to do just that.

[Matt] found that webcams tend to be less obtrusive than depth cameras (like the Kinect), but they’re limited in their ability to distinguish individual users and, of course, don’t have the same 3D capability. With either technology, the software implementation is similar: the camera detects head motion and controls software accordingly by emulating keystrokes, as sketched below. The depth cameras are a little more user-friendly, though, and allow users to move in whichever way feels comfortable for them.
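Here’s a rough sketch of the webcam approach (our illustration, not [Matt]’s code): face detection plus a couple of position thresholds is enough to turn head movement into key events. The “keystroke” is just printed here; a real assistive tool would inject it with something like XTest or uinput:

```cpp
// Webcam head-tracking sketch: detect a face, map left/right head movement
// to key events (printed here rather than actually injected).
// Build, roughly: g++ head.cpp `pkg-config --cflags --libs opencv4` -o head
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::VideoCapture cam(0);
    cv::CascadeClassifier face;
    // This classifier file ships with OpenCV; its install path varies
    if (!cam.isOpened() || !face.load("haarcascade_frontalface_default.xml")) {
        std::printf("camera or cascade file missing\n");
        return 1;
    }

    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray, faces, 1.2, 4);
        if (faces.empty()) continue;

        // Compare the face center against thirds of the frame width
        int cx = faces[0].x + faces[0].width / 2;
        if (cx < frame.cols / 3)          std::printf("emit LEFT key\n");
        else if (cx > 2 * frame.cols / 3) std::printf("emit RIGHT key\n");
    }
    return 0;
}
```

A depth-camera version would track the head in 3D instead, which is what lets users pick whatever motion is comfortable rather than being tied to left/right screen position.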

This isn’t the first time something like a Kinect has been used to track motion, but for [Matt] and his work at Beaumont College it has been an important area of ongoing research. It’s especially helpful since the campus has many things on networked switches (like lamps), so this software can be used to help people interact much more easily with the physical world. This project could be very useful to anyone curious about tracking motion, even if they’re not using it for mobility reasons.

Augmented Reality Sandbox Using A Kinect

Want to make all your 5-year-old son’s friends jealous? What if he told them he could make REAL volcanoes in his sandbox? Will this be the future of sandboxes, digitally enhanced with augmented reality?

It’s not actually that hard to set up! The system consists of a good computer running Linux, a Kinect, a projector, a sandbox, and sand. And that’s it! The University of California, Davis has set up a few of these systems now to teach children about geography, which is a really cool demonstration of both 3D scanning and projection mapping. As you can see in the animated gif above, the Kinect tracks the topography of the sand and then projects its “reality” back onto it. In this case, a mini volcano.
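To give a feel for the pipeline, the core loop is just “read Kinect depth, color it by height, throw it at the projector.” Here’s a bare-bones sketch (ours, not the UC Davis software, which adds proper projector calibration and water simulation) using libfreenect’s sync wrapper and OpenCV:

```cpp
// Bare-bones sandbox loop: Kinect depth in, height-colored image out to a
// fullscreen window on the projector. A real setup also needs a calibrated
// projector-to-sand mapping.
// Build, roughly: g++ sandbox.cpp `pkg-config --cflags --libs opencv4` -lfreenect_sync
#include <libfreenect/libfreenect_sync.h>
#include <opencv2/opencv.hpp>
#include <cstdint>

int main() {
    cv::namedWindow("sandbox", cv::WINDOW_NORMAL);
    cv::setWindowProperty("sandbox", cv::WND_PROP_FULLSCREEN, cv::WINDOW_FULLSCREEN);

    while (true) {
        void *data;
        uint32_t timestamp;
        if (freenect_sync_get_depth(&data, &timestamp, 0, FREENECT_DEPTH_11BIT) < 0)
            break; // no Kinect attached

        // Wrap the raw 11-bit depth, squash to 8 bits, color-map by height
        cv::Mat depth(480, 640, CV_16UC1, data), depth8, colored;
        depth.convertTo(depth8, CV_8UC1, 255.0 / 2047.0);
        cv::applyColorMap(depth8, colored, cv::COLORMAP_JET);

        cv::imshow("sandbox", colored);
        if (cv::waitKey(1) == 27) break; // Esc quits
    }
    freenect_sync_stop();
    return 0;
}
```

Pile the sand higher and the colors shift up the map; carve a crater and you’ve got the makings of that mini volcano.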


Virtual Physical Rehab With Kinect

Web sites have figured out that “gamifying” things increases participation. For example, you’ve probably boosted your postings on a forum just to get a senior contributor badge (that isn’t even really a badge, but a picture of one). Now [Yash Soni] has brought the same idea to physical therapy.

[Yash]’s father had to go through boring physical therapy to treat a slipped disk, and it prompted [Yash] to develop KinectoTherapy, which aims to make therapy more like a video game. They claim it can be used to help many types of patients, ranging from stroke victims to those with cerebral palsy.

Patients can see their onscreen avatar duplicate their motions, and the system provides audio and visual feedback when the player makes a move correctly or incorrectly. Statistical data is also available to the patient’s health care professionals.
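The basic trick behind that right-or-wrong feedback is simple: compare a joint angle from the Kinect skeleton against a target range for the exercise. Here’s a toy sketch of the check (our illustration with made-up joint coordinates and thresholds; KinectoTherapy’s internals aren’t published here):

```cpp
// Toy rep-checking logic: compute the elbow angle from three skeleton
// joints and report whether the move is inside the prescribed range.
#include <cmath>
#include <cstdio>

struct Joint { float x, y, z; }; // one tracked skeleton point

// Angle at joint b (degrees) formed by the segments b->a and b->c
float jointAngle(Joint a, Joint b, Joint c) {
    float v1[3] = { a.x - b.x, a.y - b.y, a.z - b.z };
    float v2[3] = { c.x - b.x, c.y - b.y, c.z - b.z };
    float dot = v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2];
    float n1  = std::sqrt(v1[0]*v1[0] + v1[1]*v1[1] + v1[2]*v1[2]);
    float n2  = std::sqrt(v2[0]*v2[0] + v2[1]*v2[1] + v2[2]*v2[2]);
    return std::acos(dot / (n1 * n2)) * 180.0f / 3.14159265f;
}

int main() {
    // Made-up shoulder/elbow/wrist positions, as a Kinect SDK would report
    Joint shoulder{0.0f, 1.4f, 2.0f}, elbow{0.0f, 1.1f, 2.0f}, wrist{0.3f, 1.1f, 2.0f};

    float angle = jointAngle(shoulder, elbow, wrist);
    bool ok = angle > 80.0f && angle < 100.0f; // target range for this exercise

    std::printf("elbow at %.0f degrees: %s\n", angle,
                ok ? "good rep!" : "try again");
    return 0;
}
```

Run per frame against a live skeleton, the same check drives the avatar’s feedback cues and feeds the per-session statistics.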
