Using RealSense Cameras With OS X and Linux

The original Microsoft Kinect was a revolution in computer vision. For less than one hundred dollars, the Kinect gave everyone a webcam with a depth sensor. If you’re working with robots, 3D scanning, or anything else where a computer needs to know where objects are in 3D space, it’s awesome. These depth-mapping cameras have improved over the years, and the latest and most capable hardware is Intel’s RealSense 3D camera.

Despite the RealSense being a very capable depth camera, official support for Linux and OS X doesn’t exist. Researchers, roboticists, and IoT developers are slightly miffed about this, and it seems like Intel doesn’t care about people using its hardware on platforms that aren’t Windows.

Now, finally, that’s changed. A few developers have taken it upon themselves to build a cross-platform library for the F200, SR300, and R200 Intel RealSense depth cameras.

The librealsense library brings proper RealSense camera support to Linux, OS X, and Windows, providing all the functionality of the official Intel SDK. That includes native depth, color, and infrared streams; synthetic streams for rectified images; calibration information; and the most interesting feature: multi-camera capture.
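To give a sense of how simple the library is to use, here’s a minimal sketch of grabbing a single depth frame, written against the legacy librealsense 1.x C++ interface (the one targeting the F200, SR300, and R200). Exact names may have shifted between releases, so treat this as a sketch rather than gospel:

```cpp
#include <librealsense/rs.hpp>
#include <cstdint>
#include <cstdio>

int main() try
{
    rs::context ctx;
    if (ctx.get_device_count() == 0) {
        std::printf("No RealSense device detected.\n");
        return 1;
    }

    // Open the first camera and enable its native depth stream.
    rs::device * dev = ctx.get_device(0);
    dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
    dev->start();

    // Block until a coherent frame set arrives, then grab the depth image.
    dev->wait_for_frames();
    const std::uint16_t * depth = reinterpret_cast<const std::uint16_t *>(
        dev->get_frame_data(rs::stream::depth));

    // Depth comes back in device units; get_depth_scale() converts to meters.
    const float scale = dev->get_depth_scale();
    const int w = dev->get_stream_width(rs::stream::depth);
    const int h = dev->get_stream_height(rs::stream::depth);
    std::printf("Distance at center pixel: %.3f m\n",
                depth[(h / 2) * w + (w / 2)] * scale);
    return 0;
}
catch (const rs::error & e)
{
    std::printf("librealsense error: %s\n", e.what());
    return 1;
}
```

Multi-camera capture works the same way: loop over get_device_count(), enable and start a stream on each device, and poll them all for frames.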

The hardware requirements are modest – any recent laptop should be able to capture depth images from a RealSense camera. The camera itself requires USB 3.0, though, so you won’t be building a 3D scanner out of a RealSense camera and a Raspberry Pi quite yet. Still, it’s the latest advancement in giving robots 3D vision and building cheap, portable 3D scanners.

Teardown of Intel RealSense Gesture Camera Reveals Projector Details

[Chipworks] has just released the details of their latest teardown: an Intel RealSense gesture camera built into a Lenovo laptop. Teardowns are always interesting (and we suspect that [Chipworks] can’t eat breakfast without tearing it down), but this one reveals some fascinating details on how you build a projector into a module that fits into a laptop bezel. While most structured light projectors use a single, static pattern projected through a mask, this one uses a real projection system to send out a series of different patterns that help the device detect gestures faster, all in a mechanism thinner than a poker chip.

It does this with an impressive miniaturized projector made of three tiny components: an IR laser, a line lens, and a resonant micromirror. The line lens takes the point of light from the IR laser and spreads it into a flat horizontal line. This line is then bounced off the resonant micromirror, which is manufactured in a single piece and twisted back and forth by an electrostatic torsional drive. The system is described in more detail in this PDF of a presentation by the makers, ST Micro.

This combination of lens and rapidly moving mirror sweeps a pattern of light across the scene, and the reflection is detected by the IR camera on the other side of the module, which is used to create a 3D model that can detect gestures, faces, and other objects. It’s a neat insight into how you can miniaturize things by approaching them in a different way.
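As a back-of-the-envelope illustration of the scan geometry (the tilt angle, resonant frequency, and distance below are made-up numbers, not ST’s actual figures): by the law of reflection, tilting a mirror by an angle θ deflects the reflected beam by 2θ, so a sinusoidally resonating mirror sweeps the projected line up and down across the scene:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Illustrative numbers only -- not ST Micro's actual specs.
    const double PI         = 3.14159265358979;
    const double maxTiltRad = 6.0 * PI / 180.0;  // assumed peak mechanical tilt
    const double resonantHz = 500.0;             // assumed resonant frequency
    const double distanceM  = 1.0;               // mirror-to-scene distance

    // The torsional drive runs at resonance, so the mirror tilt is sinusoidal
    // in time. A mechanical tilt of theta deflects the reflected line by
    // 2 * theta, sweeping it vertically across the scene.
    for (int i = 0; i <= 8; ++i) {
        double t     = i / (8.0 * resonantHz);   // sample one full scan period
        double theta = maxTiltRad * std::sin(2.0 * PI * resonantHz * t);
        double y     = distanceM * std::tan(2.0 * theta); // line position
        std::printf("t = %.4f s  tilt = %+5.2f deg  line at %+6.3f m\n",
                    t, theta * 180.0 / PI, y);
    }
    return 0;
}
```

Running the sweep at the drive’s resonant frequency is the trick that keeps power consumption down: the mirror only needs a small electrostatic nudge each cycle to keep oscillating.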