New Part Day: Mapping With RealSense Cameras For $200

Robot cars, DIY or otherwise, are hot right now. To do this right, you’re going to need cameras, LIDAR, or some other way of sensing the world. Intel is again getting into the fray with a RealSense tracking camera for simultaneous localization and mapping, aimed at robotics, drone, and augmented reality applications.

The tech specs for the Intel RealSense T265 are impressive for small robotics uses. It includes 6DoF tracking gathered by two cameras, each with a 170° FoV, and connects to a computer over USB 2.0 or 3.0. If you want an idea of how seriously Intel is taking the ‘robotics, and other power- and weight-limited platforms’ market, here’s a sample of what is on the one-page spec sheet: the T265 uses only 1.5 watts, weighs 55 grams, and measures 108 x 25 x 13 mm. There are also two M3 taps spaced 50 mm apart on the back, which is an astonishing spec to publish on a product landing page. The fact that the location and dimensions of the mounting holes are so prominent tells you how seriously Intel is taking robotics and prototyping applications.
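To get a feel for what that 6DoF tracking looks like from the software side, here’s a minimal sketch using Intel’s pyrealsense2 Python bindings. It simply prints the camera’s estimated position as you wave it around; exact stream options and behavior depend on your librealsense version, so treat it as a starting point rather than gospel.

```python
# Minimal T265 pose readout sketch using Intel's pyrealsense2 bindings.
# Assumes a librealsense 2.x install with T265 support; details may vary
# slightly between releases.
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)   # the T265's 6DoF pose stream

pipe.start(cfg)
try:
    for _ in range(200):
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            # Translation is in meters, relative to wherever the camera
            # started up; tracker_confidence ranges from 0 (lost) to 3 (high).
            print("xyz: (%.3f, %.3f, %.3f)  confidence: %d" % (
                data.translation.x, data.translation.y, data.translation.z,
                data.tracker_confidence))
finally:
    pipe.stop()
```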

This new SLAM camera complements Intel’s other tracking camera offerings, including those we’ve seen at Maker Faires past, and it’s a competitor to the new crop of solid-state LIDAR modules we’ve seen pop up recently. It’s not a Kinect, but we’re years past using a first-gen Kinect for robotics applications. Now everything is custom chips and onboard SLAM processing, and the RealSense T265 is the smallest platform yet to do it.

Hackaday Prize Entry: HaptiVision Creates A Net Of Vibration Motors

HaptiVision is a haptic feedback system for the blind that builds on a wide array of vibration belts and haptic vests. It’s a smart concept, giving the wearer a warning when an obstruction comes into sensor view.

The earliest research into haptic feedback wearables used ultrasonic sensors, and more recent developments used a Kinect. The project team for HaptiVision chose the Intel RealSense camera because of its svelte form factor. Part of the goal was to make the HaptiVision as discreet as possible, so fitting the whole rig under a shirt was part of the plan.

In addition to a RealSense camera, the team used an Intel Up board for the brains, mostly because it natively supports the RealSense camera. The system takes a 640×480 IR snapshot and selectively triggers the 128 vibration motors to tell you what’s close. The motors are driven by eight PCA9685-based PWM expander boards.
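The mapping itself is conceptually simple: collapse the depth frame onto a coarse grid, one cell per motor, and buzz each motor harder the closer the nearest obstacle in its cell is. Here’s a rough sketch of that idea using the legacy Adafruit_PCA9685 Python library; the grid layout, I2C addresses, and distance scaling are illustrative assumptions, not values taken from the HaptiVision build.

```python
# Sketch: collapse a 640x480 depth frame onto a 16x8 grid of vibration
# motors driven by eight PCA9685 boards (16 channels each = 128 motors).
# Board addresses, grid size, and scaling are assumptions for illustration.
import numpy as np
import Adafruit_PCA9685

ROWS, COLS = 8, 16                      # 128 motors total
NEAR_MM, FAR_MM = 300, 3000             # full buzz at 0.3 m, off beyond 3 m

# One PCA9685 per row of motors, at consecutive I2C addresses.
boards = [Adafruit_PCA9685.PCA9685(address=0x40 + i) for i in range(ROWS)]
for b in boards:
    b.set_pwm_freq(60)

def update_motors(depth_mm):
    """depth_mm: 480x640 array of distances in millimetres (0 = no data)."""
    h, w = depth_mm.shape
    for r in range(ROWS):
        for c in range(COLS):
            cell = depth_mm[r*h//ROWS:(r+1)*h//ROWS, c*w//COLS:(c+1)*w//COLS]
            valid = cell[cell > 0]
            nearest = valid.min() if valid.size else FAR_MM
            # Closer obstacle -> stronger vibration (0..4095 PWM counts).
            strength = np.clip((FAR_MM - nearest) / (FAR_MM - NEAR_MM), 0, 1)
            boards[r].set_pwm(c, 0, int(strength * 4095))
```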

The project is based on David Antón Sánchez’s OpenVNAVI project, which also featured a 128-motor array. HaptiVision aims to create an easy-to-replicate haptic system. Everything is open source, and all of the wiring clips and motor mounts are 3D-printable.

Hackaday Prize Entry: SNAP Is Almost Geordi La Forge’s Visor

Echolocation projects typically rely on inexpensive distance sensors and the human brain to do most of the processing. The team behind SNAP: Augmented Echolocation is using far more computational power to translate robotic vision into a 3D soundscape.

The SNAP team starts with an Intel RealSense R200. The first part of the processing happens on the camera itself: it outputs a depth map, which takes the heavy lifting out of robotic vision. From there, an AAEON Up board packaged with the RealSense takes the depth map and associates sound with the objects in the field of view.
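As a rough sketch of what that association might look like (none of this is the team’s actual code), each depth frame can be reduced to a short list of ‘nearest obstacle’ readings, one per horizontal slice of the view, each tagged with an azimuth angle derived from the camera’s field of view. The field of view and sector count below are assumptions, not SNAP’s real parameters.

```python
# Sketch: reduce a depth frame to (azimuth, distance) pairs, one per
# horizontal sector. FoV and sector count are illustrative assumptions.
import numpy as np

H_FOV_DEG = 60.0        # assumed horizontal field of view of the depth camera
SECTORS = 9             # how many azimuth slices to sonify

def depth_to_objects(depth_m):
    """depth_m: HxW array of distances in metres (0 or NaN = no reading)."""
    h, w = depth_m.shape
    objects = []
    for s in range(SECTORS):
        col_lo, col_hi = s * w // SECTORS, (s + 1) * w // SECTORS
        sector = depth_m[:, col_lo:col_hi]
        valid = sector[np.isfinite(sector) & (sector > 0)]
        if valid.size == 0:
            continue
        # Azimuth of the sector centre, -FOV/2 (left) .. +FOV/2 (right).
        azimuth = ((s + 0.5) / SECTORS - 0.5) * H_FOV_DEG
        objects.append((azimuth, float(valid.min())))
    return objects
```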

Binaural sound generation is a feat in itself. It works on the principle that our brains compare the sound arriving at each ear to work out where it originates. Our eyes do the same thing; we are bilateral creatures, so using two ears or two eyes to understand our environment is already part of the human operating system.
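A bare-bones way to exploit that in software is to give each ear a slightly different level and arrival time for the same tone, scaled by the object’s azimuth, and encode distance as pitch. The sketch below does exactly that for one (azimuth, distance) reading like those produced above and writes a stereo WAV file; the constants are assumptions for illustration, not what SNAP actually ships.

```python
# Sketch: render one (azimuth, distance) reading as a short stereo tone
# using interaural level and time differences. Constants are illustrative.
import numpy as np
import wave

RATE = 44100
MAX_ITD_S = 0.00066     # ~0.66 ms max interaural time difference for humans

def render_tone(azimuth_deg, distance_m, duration=0.15):
    t = np.arange(int(RATE * duration)) / RATE
    freq = 1200.0 / max(distance_m, 0.3)          # closer object -> higher pitch
    tone = 0.3 * np.sin(2 * np.pi * freq * t)

    pan = np.clip(azimuth_deg / 30.0, -1.0, 1.0)  # -1 = hard left, +1 = hard right
    left_gain, right_gain = (1 - pan) / 2, (1 + pan) / 2

    delay = int(abs(pan) * MAX_ITD_S * RATE)      # the far ear hears it later
    delayed = np.concatenate([np.zeros(delay), tone])[:tone.size]

    left = left_gain * (tone if pan <= 0 else delayed)
    right = right_gain * (tone if pan >= 0 else delayed)
    return np.stack([left, right], axis=1)

def write_wav(path, stereo):
    data = (np.clip(stereo, -1, 1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(data.tobytes())

# Example: an obstacle 1.2 m away, 20 degrees to the wearer's right.
write_wav("ping.wav", render_tone(20.0, 1.2))
```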

In the video after the break, we see a demonstration where the wearer doesn’t need to move his head to understand what is happening in front of him. Instead of a single distance reading that forces him to systematically scan the area, he simply has to face the right way.

Another Assistive Technology entry used the traditional ultrasonic distance sensor instead of robotic vision. There is even a version for augmented humans with magnet implants, called Bottlenose, covered in Cyberpunk Yourself.


Intel’s Vision For Single Board Computers Is To Have Better Vision

At the Bay Area Maker Faire last weekend, Intel was showing off a couple of sexy newcomers in the Single Board Computer (SBC) market. It’s easy to get trapped into thinking that SBCs are all about simple boards with a double-digit price tag like the Raspberry Pi. How can you compete with a $35 computer that has a huge market share and a gigantic community? You compete by appealing to a crowd not satisfied with entry-level boards, and for that Intel appears to be targeting a much higher-end audience that needs computer vision along with the speed and horsepower to do something meaningful with it.

I caught up with Intel’s “Maker Czar”, Jay Melican, at Maker Faire Bay Area last weekend. A year ago, it was a Nintendo Power Glove-controlled quadcopter that caught my eye. This year I only had eyes for the two new computing modules on offer, the Joule and the Euclid. Both focus on connecting powerful processors to high-resolution cameras and using a full-blown Linux operating system for the image processing. But it feels like the Joule is meant more for your average hardware hacker, and the Euclid for software engineers who are pointing their skills at robots but don’t want to get bogged down in the first principles of hardware. Before you rage about this in the comments, let me explain.


Using RealSense Cameras With OS X And Linux

The original Microsoft Kinect was a revolution in computer vision. For less than one hundred dollars, the Kinect gave everyone a webcam with a depth sensor. If you’re doing anything with robots, 3D scanning, or anything else where a computer needs to know where it is in 3D space, it’s awesome. These depth-mapping cameras have improved over the years, with the latest and most capable hardware being Intel’s RealSense 3D camera.

Despite being a very capable depth camera, the RealSense has no official support for Linux or OS X. Researchers, roboticists, and IoT developers are slightly miffed about this, and it seems like Intel doesn’t care about people using its hardware on platforms that aren’t Windows.

Now, finally, that’s changed. A few developers have taken it upon themselves to build a cross-platform library for the F200, SR300, and R200 Intel RealSense depth cameras.

The librealsense library brings proper RealSense camera support to Linux, OS X, and Windows and provides all the functionality of the official Intel SDK: native depth, color, and infrared streams, synthetic streams for rectified images, calibration information, and the most interesting feature, multi-camera capture.
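For a flavor of what grabbing frames looks like, here’s a minimal sketch using the later official Python bindings (pyrealsense2), which expose the same depth and color streams described here; the library covered in this post is a C++ affair, and exact calls vary by version, so consider this an approximation rather than the project’s own API.

```python
# Sketch: pull synchronized depth and color frames through librealsense's
# Python bindings (pyrealsense2). Stream resolutions are just examples.
import numpy as np
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipe.start(cfg)

try:
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    # Depth comes back as 16-bit device units; get_distance() converts a
    # single pixel to metres using the camera's depth scale.
    depth_image = np.asanyarray(depth.get_data())
    print("centre pixel distance: %.2f m" % depth.get_distance(320, 240))
finally:
    pipe.stop()
```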

The hardware required to use the RealSense camera is somewhat lightweight – any recent laptop should be able to capture depth images with a RealSense camera. The camera itself requires USB 3, though, so you won’t be building a 3D scanner with a RealSense camera and a Raspberry Pi quite yet. Still, it’s the latest advancement for giving robots 3D vision and building cheap, portable 3D scanners.