How does one go about programming a drone to fly itself through the real world to a location without crashing into something? This is a tough problem, made even tougher if you’re pushing speeds higher and higher. But as you’d expect from anything bearing the “MIT” name, the problem being engineered here is not trivial.
The folks over at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have put their considerable skill set to work in tackling this problem. And what they’ve come up with is (not surprisingly) quite clever: they’re embracing uncertainty.
Why Is Autonomous Navigation So Hard?
Suppose we task ourselves with building a robot that can insert a key into the ignition switch of a motor vehicle and start the engine, and do so in roughly the same time-frame a human could: let’s say 10 seconds. It may not be an easy robot to create, but we can all agree that it is very doable. Knowing the coordinates of the vehicle’s ignition switch relative to our robotic arm, we can place the key in the switch with 100% accuracy. But what if we wanted our robot to succeed in any car with a standard ignition switch?
Now the location of the ignition switch will vary slightly (and not so slightly) from one model of car to the next. That means we’re going to have to deal with this in real time and develop our coordinate system on the fly. This would not be much of an issue if we could slow down a little, but keeping the process under 10 seconds is extremely difficult, perhaps impossible. At some point, the amount of environmental information and computation becomes so large that the task becomes computationally unwieldy.
This problem is analogous to autonomous navigation. The environment is always changing, so we need sensors to constantly monitor the state of the drone and its immediate surroundings. If the obstacles become too numerous, we run into another problem, this one computational… there is just too much information to process. The only solution is to slow the drone down. NanoMap is a new modeling method that breaks the artificial speed limit normally imposed by on-the-fly environment mapping.
NanoMap
All autonomous drones have a speed limit, which depends on the number of obstacles they must avoid. If the forward speed is too great or the obstacles too numerous, it becomes impossible for the drone to keep track of them all. Once this line is breached, a probability of collision is incurred. Traditionally, if you can’t reduce the obstacles, you must slow the drone down so the calculations can keep up.
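That speed limit can be made concrete with a back-of-the-envelope stopping-distance bound (our own illustration, not NanoMap’s actual analysis): the drone must be able to react and brake to a stop within its sensing horizon. A minimal Python sketch, with made-up numbers for sensor range, planning latency, and braking power:

```python
import math

def max_safe_speed(sensor_range_m, reaction_time_s, max_decel_ms2):
    """Largest forward speed at which the drone can still stop inside its
    sensing horizon: v * t_react + v^2 / (2 * a) <= range.
    Solving the quadratic for v gives the positive root below."""
    a = max_decel_ms2
    t = reaction_time_s
    # v^2/(2a) + v*t - range = 0  ->  v = a * (-t + sqrt(t^2 + 2*range/a))
    return a * (-t + math.sqrt(t * t + 2.0 * sensor_range_m / a))

# Hypothetical numbers: 5 m depth-sensing horizon, 0.2 s of planning
# latency, 6 m/s^2 of braking deceleration
print(round(max_safe_speed(5.0, 0.2, 6.0), 2))  # -> 6.64 (m/s)
```

Note how the bound depends on sensing range and on how long the planner takes to react; a slower mapping pipeline directly lowers the safe speed, which is exactly the limit NanoMap is attacking.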
MIT CSAIL’s idea is to stop trying to keep track of every single obstacle. Instead, the drone accepts the fact that it cannot know exactly where it is… that there is a fundamental uncertainty in its position in space over a period of time. NanoMap accounts for this uncertainty and attempts to keep it as low as possible. This allows a drone to operate at a much higher speed in an obstacle-rich environment while keeping the probability of a collision relatively low.
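To see why keeping uncertainty low matters, consider a simplified one-dimensional model (our illustration, not the paper’s math): if the drone’s position error is Gaussian, the chance of straying past a clearance margin grows very quickly with the error’s standard deviation:

```python
import math

def collision_probability(clearance_m, sigma_m):
    """Probability that a drone whose position error is Gaussian with
    standard deviation sigma strays beyond a 1-D clearance margin
    (two-sided tail of the normal distribution)."""
    z = clearance_m / sigma_m
    return 1.0 - math.erf(z / math.sqrt(2.0))

# The same 1 m gap is effectively safe when uncertainty is low...
print(collision_probability(1.0, 0.2))  # -> ~5.7e-07
# ...and genuinely risky once uncertainty has grown
print(collision_probability(1.0, 0.5))  # -> ~0.0455
```

In this toy model, letting the position error grow from 0.2 m to 0.5 m turns a one-in-a-million squeeze into a 1-in-22 gamble, which is why NanoMap works so hard to keep the uncertainty bounded.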
Understanding Uncertainty Is The Key
NanoMap uses forward-looking depth sensors to build up an idea of the drone’s immediate environment as a local 3D data structure, then uses an algorithm to search that structure. It searches back in time to find a view from its past that resembles its current view. Basically, it gathers just enough information to know that it’s in a “certain area”, and then plans its flight path accordingly. Unlike other models, it doesn’t attempt to calculate its exact location and orientation; it only gets the data it needs to avoid running into something. They’re calling this idea “pose uncertainty”.
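The idea can be sketched in miniature. The toy 2-D code below (hypothetical types and numbers, not the team’s actual implementation) keeps a rolling window of depth frames, re-expresses a query point in each older frame’s coordinates, and inflates the required clearance as odometry uncertainty compounds with every step back in time:

```python
import collections
import math

# Each frame stores the depth points seen at that instant (in that frame's
# own coordinates) plus the odometry step taken after it was captured,
# whose noise makes the relative pose less certain the further back we look.
Frame = collections.namedtuple("Frame", "points odom_dx odom_dy odom_sigma")

def query_free(history, qx, qy, clearance=0.5):
    """Walk backwards through recent frames; express the query point in each
    older frame's coordinates by replaying the motion made since that frame,
    growing the safety margin as odometry noise accumulates, and report
    whether every frame saw that region as free."""
    x, y, var = qx, qy, 0.0
    for frame in reversed(history):
        x += frame.odom_dx                 # undo motion back to this frame
        y += frame.odom_dy
        var += frame.odom_sigma ** 2       # uncertainty compounds per step
        margin = clearance + 3.0 * math.sqrt(var)  # 3-sigma inflation
        for (px, py) in frame.points:
            if math.hypot(px - x, py - y) < margin:
                return False               # too close to a seen obstacle
    return True

history = collections.deque(maxlen=50)     # rolling window of recent frames
# An obstacle seen two frames ago at 5 m; the newest frame saw nothing
history.append(Frame(points=[(5.0, 0.0)], odom_dx=1.0, odom_dy=0.0, odom_sigma=0.05))
history.append(Frame(points=[], odom_dx=1.0, odom_dy=0.0, odom_sigma=0.05))
print(query_free(history, 1.0, 0.0))  # -> True  (well clear of the obstacle)
print(query_free(history, 2.5, 0.0))  # -> False (inside the inflated margin)
```

The key design choice, per the article, is that no global map is ever built: the drone only asks whether a candidate point was free in some remembered view, with the answer hedged by how uncertain the pose chain back to that view has become.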
Be sure to check out the white paper for full details, but we suggest blocking out some spare time: it’s a lot to wrap your brain around. Determined hackers and makers can give this a try themselves, as the research team’s incredible work is open-source. Let us know in the comments if you plan to use this new and exciting technology in your next autonomous project!
“Basically, it gathers just enough information to know that it’s in a “certain area”, and then plans its flight path accordingly. It doesn’t attempt to calculate its exact location and orientation.”
That’s life really, isn’t it?
Well, yes, but with about a trillion times more computing power, plus some fancy quantum effects for decision making.
That and the fact that, other than a few animals, life chooses not to move fast enough to obliterate itself if it doesn’t notice an obstacle.
Birds.
Motorcyclists.
Love the effort, but I can’t help thinking this is over-engineering! Insects do this much more simply: use the sun for direction, the horizon for attitude and height, and contrast and pattern size to determine if they are about to fly into something. They only have QVGA non-stereoscopic vision, and they manage because their flight speed is low enough not to hurt them when they hit objects. Kinect sensors and FPGAs on a drone are overkill!
More maneuverable.
Quantum effects? Care to elaborate on that?
They can’t, as everything I’ve seen on the “quantum brain” hypothesis is completely theoretical, with no physical proof nor real-world study to back it up. It honestly comes off more as a last-ditch effort to save the over-inflated idea of “free will” by throwing buzzwords at it instead of accepting that we’re more deterministic than we’d like to believe.
As in the end, there’s no reason to believe a quantum brain would be any less deterministic, other than the “magic” most people feel surrounds the word.
I think the phrase “Quantum effects” is just the new stylish expression for what us old school guys would call a “random number generator” pseudo or otherwise. None of that spooky action-at-a-distance quantum stuff. :-)
BTW, I’d say the jury is still out on the concept of “free will”. What we call quantum effects may be related as a component of the mind-body relationship but we really don’t know what that’s all about and I’d be suspicious of anyone going around saying they did. Can a soul do something that is outside his own nature? Some say yes, others no. I’m not sure yet. :-)
“Building a robot that can press a button on the app that turns on your electric car” ;P
a robot would just need to think about starting a car and it could start
and by package delivery they obviously mean bomb delivery. bring on the big fat military bucks!
Pretty sure the ones carrying bombs are trying NOT to avoid things? ;)
Nah, the avoid list is just smaller, just the one entry for the guy that turns it on.
It has to avoid many things to crash into the one thing it mustn’t avoid.
Get the “follow me” from DJI, fit a taser then retire all the policemen to sit back and wait for Skynet proper.
” Let us know in the comments if you plan to use this new and exciting technology in your next autonomous project!”
I think there’s a little matter of creating a drone with the needed hardware.
I think it is a rather small matter actually. Intel sells those realsense cameras that have depth cameras in them, and an entire linux SBC. They were showing off drones with the cameras strapped to them at Maker Faire Bay Area last year. I covered that in this roundup.
looks like it was the intel euclid/joule, which is discontinued? any other suggestions? (seriously; actually interested, not being snarky.)
orbbec has some options that might work for you,
even reasonably priced. thanks!
“All autonomous drones have a speed limit, which is dependent on the amount of obstacles it must avoid.”
Is that necessarily true? If the depth camera has a fixed resolution you have a fixed number of obstacles, one for each pixel.
“amount of obstacles are too man it becomes impossible”
“too many”?
“…only crashes 2% of the time.”
I could be wrong, but 2% seems awful high when it comes to the crash rate of a moving vehicle. If I had crashed my car 2% of the time, I would have lost my license 30 years ago (provided I had even survived those crashes).
It’s better than crashing 98% of the time because the algorithm hangs up on overload.
agreed, 2% is total crap. Basically means it is guaranteed to crash now and then.
You do realize that with the work being done at MIT, it means that the technology is at a TRL3 or 4 level. That means it is experimental, hasn’t been productized and isn’t out on the market. 2% failure rate at this point in development is pretty damn good. The algorithm is unrefined and is a proof of concept.
in case anyone is interested, the original URL for the research paper (which is not a white paper) is http://groups.csail.mit.edu/robotics-center/public_papers/Florence17.pdf
2% failure? hot damn i’m up to 0.495% fail rate (old motorcyclist)