MIT Breaks Autonomous Drone Speed Limits By Not Sweating Obstacles

How does one go about programming a drone to fly itself through the real world to a location without crashing into something? This is a tough problem, made even tougher if you’re pushing speeds higher and higher. But any article with “MIT” in it implies the problems being tackled are not trivial.

The folks over at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have put their considerable skill set to work in tackling this problem. And what they’ve come up with is (not surprisingly) quite clever: they’re embracing uncertainty.

Why Is Autonomous Navigation So Hard?

Suppose we task ourselves with building a robot that can insert a key into the ignition switch of a motor vehicle and start the engine, and can do so in roughly the same time frame as a human, say 10 seconds. It may not be an easy robot to create, but we can all agree that it is very doable. With the position of the vehicle’s ignition switch known in advance relative to our robotic arm, we can place the key in the switch with 100% accuracy. But what if we wanted our robot to succeed in any car with a standard ignition switch?

Now the location of the ignition switch will vary slightly (and not so slightly) for each model of car. That means we’re going to have to deal with this in real time and develop our coordinate system on the fly. This would not be too much of an issue if we could slow down a little, but keeping the process limited to 10 seconds is extremely difficult, perhaps impossible. At some point, the amount of environmental information and computation grows so large that the task becomes computationally unwieldy.

This problem is analogous to autonomous navigation. The environment is always changing, so we need sensors to constantly monitor the state of the drone and its immediate surroundings. As the obstacles pile up, a second problem emerges: there is simply too much information to process in the time available. Until now, the only practical solution has been to slow the drone down. NanoMap is a new modeling method that breaks the artificial speed limit normally imposed by on-the-fly environment mapping.
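
The details are in MIT’s paper, but as a rough illustration of the “embrace uncertainty” idea, here is a minimal Python sketch (the class, function, and parameter names are ours, not MIT’s): rather than fusing every measurement into one global map, keep a short history of raw depth frames and answer “is this point safe?” against the most recent frame that saw it, padding the check by that frame’s accumulated pose uncertainty.

```python
import numpy as np

# Hypothetical sketch of querying recent raw depth frames instead of a fused map.
class DepthFrame:
    def __init__(self, depth, pose, pose_sigma):
        self.depth = depth            # HxW depth image in meters
        self.pose = pose              # 4x4 transform, world -> camera
        self.pose_sigma = pose_sigma  # positional uncertainty (m) accrued since capture

def point_is_clear(point_w, frames, fx, fy, cx, cy, margin=0.5):
    """Return True if world point `point_w` appears free in some recent frame,
    with the safety margin inflated by that frame's pose uncertainty."""
    for f in reversed(frames):                      # newest frame first
        p_cam = f.pose @ np.append(point_w, 1.0)    # into camera coordinates
        x, y, z = p_cam[:3]
        if z <= 0:
            continue                                # behind the camera, not observed
        u, v = int(fx * x / z + cx), int(fy * y / z + cy)
        if not (0 <= v < f.depth.shape[0] and 0 <= u < f.depth.shape[1]):
            continue                                # outside this frame's field of view
        # Clear only if the measured depth exceeds the query depth by a margin
        # that grows with how uncertain this frame's pose has become.
        return f.depth[v, u] > z + margin + f.pose_sigma
    return False                                    # never observed: assume unsafe
```

The point of the trick is that no expensive map fusion happens per frame; the uncertainty is simply folded into the margin at query time, which is what lets the planner keep up at higher speeds.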

Continue reading “MIT Breaks Autonomous Drone Speed Limits By Not Sweating Obstacles”

Kinect And Raspberry Pi Add Focus Pulling To DSLR

Prosumer DSLRs have been a boon to the democratization of digital media. Gear that once commanded professional prices is now available to those on more modest budgets. Not only has this unleashed a torrent of online content, it has also started a wave of camera hacks and accessories, like this automatic focus puller based on a Kinect and a Raspberry Pi.

For [Tom Piessens], the Canon EOS 5D has been a solid platform but suffers from a problem. The narrow depth of field possible with DSLRs makes it difficult to maintain focus on subjects that are moving relative to the camera, making follow-focus scenes like this classic hard to reproduce. Aiming for a better system than the stock autofocus, [Tom] grafted a Kinect sensor and a stepper motor actuator to a Raspberry Pi, and used the Kinect’s depth map to drive the focus ring. Parts are laser-cut, including a nice enclosure for the Pi and display that makes the whole thing reasonably portable. The video below shows the focus remaining locked on a selected region of interest. It seems like movement along only one axis is allowed; we’d love to see this system expanded to follow a designated object no matter where it moves in the frame.
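
[Tom]’s own code and calibration will differ, but the core mapping, pick a representative depth for the region of interest and translate it into a focus-ring position for the stepper, might look something like this minimal sketch (the calibration numbers and function names here are invented for illustration):

```python
import numpy as np

# Hypothetical lens calibration table: subject distance (mm) vs. stepper position.
# A real table would be measured for the specific lens and gearing.
CAL_DIST_MM = np.array([500, 1000, 2000, 4000, 8000])
CAL_STEPS   = np.array([  0,  400,  700,  900, 1000])

def focus_steps_for_roi(depth_mm, roi):
    """Given a Kinect depth frame (mm) and a region of interest (x, y, w, h),
    return the stepper position that should bring that region into focus."""
    x, y, w, h = roi
    patch = depth_mm[y:y + h, x:x + w].astype(float)
    patch = patch[patch > 0]                  # the Kinect reports 0 for "no reading"
    if patch.size == 0:
        return None                           # nothing usable in the ROI
    subject_mm = np.median(patch)             # median is robust to depth noise
    return int(np.interp(subject_mm, CAL_DIST_MM, CAL_STEPS))

# In the real rig this would run in a loop: grab a frame (e.g. via libfreenect's
# Python bindings), compute the target, and step the motor toward it.
frame = np.full((480, 640), 1500, dtype=np.uint16)     # fake frame, subject at 1.5 m
print(focus_steps_for_roi(frame, (300, 200, 40, 40)))  # -> 550
```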

If you’re in need of a follow-focus rig but don’t have a geared lens, check out these 3D-printed lens gears. They’d be a great complement to this backwoods focus-puller.

Continue reading “Kinect And Raspberry Pi Add Focus Pulling To DSLR”

3D Scanning By Calculating The Focus Of Each Pixel

We understand the concept [Jean] used to create a 3D scan of his face, but the particulars are a bit beyond our own experience. He is not using a dark room and laser line to capture slices which can be reassembled later. Nope, this approach uses pictures taken with several different focal lengths.

The idea is to process the photos using luminance. It looks at a pixel and its neighbors, subtracting their luminance values and summing the absolute differences to estimate how well that pixel is in focus. Apparently if you do this with the entire image, and with a set of other images taken from the same vantage point at different focal lengths, you end up with a per-pixel depth map.

What we find most interesting about this is that the resulting pixels retain their original color values. So after removing the cruft you get a 3D scan that is still in full color.
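
We haven’t seen [Jean]’s code, but a minimal NumPy sketch of the scheme as described, sum of absolute luminance differences to neighboring pixels as a focus score, then pick the sharpest image in the stack for each pixel, might look like this (function names are ours):

```python
import numpy as np

def sharpness(gray):
    """Per-pixel focus measure: sum of |difference| to the 4 nearest neighbors."""
    g = gray.astype(float)
    score = np.zeros_like(g)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        score += np.abs(g - np.roll(np.roll(g, dy, axis=0), dx, axis=1))
    return score

def depth_from_focus(stack):
    """`stack` is a list of HxWx3 images, each focused at a different distance.
    Returns (depth_index_map, color_image), keeping each pixel's original color."""
    grays = [img.mean(axis=2) for img in stack]        # rough luminance
    scores = np.stack([sharpness(g) for g in grays])   # N x H x W
    depth = np.argmax(scores, axis=0)                  # index of the sharpest image
    rows, cols = np.indices(depth.shape)
    color = np.stack(stack)[depth, rows, cols]         # pull each pixel's color from that image
    return depth, color
```

Because the color of every pixel is simply copied from whichever exposure was sharpest there, the depth map and the full-color render come out of the same pass.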

If you want to learn more about laser-based 3D scanning, check out this project.

[Thanks Luca]