Autonomous Quadcopter Fits In The Palm Of Your Hand

[Horiken Engineering], a team of engineering students from the aerospace department at the University of Tokyo, has developed a tiny autonomous quadcopter that requires no external control. Using two cameras and a sonar sensor, the quadcopter can fly entirely on its own by processing the data from its on-board sensors. To handle that processing fast enough to stay airborne, it uses a Cortex-M4 MCU, a Spartan-6 FPGA, and 64 MB of DDR SDRAM, alongside the usual quadcopter hardware plus gyros, a 3-axis compass, and a 3D-printed frame. The following video demonstrates the quadcopter’s tracking ability above a static image (or waypoint). The real-time data you see is only the flight log; the quadcopter receives no signal and can only transmit data.
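For a sense of what “processing the data from the on-board sensors” means in practice, the simplest piece is holding altitude from the sonar reading. Here is a minimal, purely illustrative PID sketch in Python; it is not the team’s firmware, and the read_sonar_cm() and set_throttle() helpers are hypothetical stand-ins for whatever sensor and motor-mixer interfaces the craft actually has.

```python
# Illustrative altitude-hold PID loop driven by a downward-facing sonar.
# read_sonar_cm() and set_throttle() are hypothetical stand-ins, not the
# quadcopter's real interfaces.
import time

KP, KI, KD = 0.8, 0.1, 0.3      # gains would need tuning on real hardware
TARGET_CM = 100.0               # hold roughly one metre above the pattern

def altitude_hold(read_sonar_cm, set_throttle, dt=0.01):
    integral, prev_error = 0.0, 0.0
    while True:                  # a real loop would also check a kill switch
        error = TARGET_CM - read_sonar_cm()
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        correction = KP * error + KI * integral + KD * derivative
        # Clamp the command into a sane 0..1 throttle range around hover.
        set_throttle(max(0.0, min(1.0, 0.5 + correction / 200.0)))
        time.sleep(dt)
```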

Is this the first step towards Amazon’s fleet of package-delivering drones? It’s certainly going to be interesting when quadcopters are a common occurrence in public…

24 thoughts on “Autonomous Quadcopter Fits In The Palm Of Your Hand”

    1. I’d bet good money that’s what the FPGA is for, just processing the video. As for 6DOF, it’s affine transforms of a known marker. The scale of the marker is roughly distance, rotation is roughly heading, etc. It’s actually a bit fiddlier than that, but nothing that hasn’t been done before. Still, the build as a whole is impressive. Well done.

        1. You don’t have to.
          Anything can be a marker. The problems start when you have a dynamic background: all of a sudden the markers start to move independently and you get lost.

          http://en.wikipedia.org/wiki/Feature_extraction

          The simplest, and possibly the most popular, approach seems to be corner detection; there are plenty of corners around, since we meatbags do love geometry (see the sketch below).

          http://www.robots.ox.ac.uk/~gk/PTAM/
          http://robotics.dei.unipd.it/~pretto/index.php?mod=01_Research
          http://www.robots.ox.ac.uk/ActiveVision/Research/Projects/summaryindex.html
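For anyone who wants to poke at the corner-detection idea mentioned above, here is a minimal sketch assuming OpenCV’s Python bindings; the frame file name is a placeholder, and this has nothing to do with the quadcopter’s actual firmware.

```python
# Minimal corner detection with OpenCV, as a concrete example of the
# feature extraction linked above. 'frame.png' is just a placeholder
# file name standing in for a greyscale image from a downward camera.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi "good features to track": returns up to 100 strong corners.
corners = cv2.goodFeaturesToTrack(frame, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)

# Mark each detected corner so the result can be inspected visually.
for c in corners.reshape(-1, 2):
    cv2.circle(frame, tuple(int(v) for v in c), 3, 255, -1)

cv2.imwrite("corners.png", frame)
```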

    1. The algo only works with a stationary scene below. SIFT and SURF are the kind of thing used for consistent feature detection. There is an open-source, patent-unencumbered feature detector inside the Hugin project, which is pretty neat all on its own…
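As a concrete, purely illustrative example of patent-unencumbered feature detection and matching, here is a short ORB sketch with OpenCV. It is not the detector Hugin ships, just the same general idea, and the frame file names are placeholders.

```python
# Detect and match ORB features between two frames with OpenCV.
# ORB avoids the SIFT/SURF patent issues mentioned above.
# 'prev.png' and 'curr.png' are placeholder file names.
import cv2

img1 = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("curr.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} matches; best distance {matches[0].distance}")
```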

    1. My guess would be something along the lines of using the range finder to estimate the size of the “bar code” on the ground, then converting it to a vector image, searching that vector image for black squares in the known pattern, and then checking the orientation of the “bar code”. That would give relative position using some basic geometry.

      But that is an unqualified guess.
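That guess maps fairly directly onto a standard pose-from-known-marker computation. Below is a rough sketch using OpenCV’s solvePnP, with a made-up marker size, made-up pixel coordinates, and a made-up camera matrix; it only illustrates the geometry, not what this quadcopter actually runs.

```python
# Recover the camera pose relative to a square marker of known size
# using OpenCV's solvePnP. The image corners and camera intrinsics
# below are placeholder values, not data from this quadcopter.
import cv2
import numpy as np

MARKER_SIZE = 0.10  # marker edge length in metres (assumed)

# 3D corners of the marker in its own coordinate frame.
object_pts = np.array([[0, 0, 0], [MARKER_SIZE, 0, 0],
                       [MARKER_SIZE, MARKER_SIZE, 0], [0, MARKER_SIZE, 0]],
                      dtype=np.float32)

# Where those corners were detected in the image (placeholder pixels).
image_pts = np.array([[310, 220], [410, 225], [405, 330], [305, 325]],
                     dtype=np.float32)

# Pinhole camera matrix: focal length ~600 px, principal point at centre.
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    # tvec is the marker's position in the camera frame; its z component
    # is roughly the height above the marker, and rvec encodes heading.
    print("distance to marker ~ %.2f m" % float(tvec[2][0]))
```

The translation vector gives position relative to the marker and the rotation vector gives orientation, which is essentially the “basic geometry” the comment alludes to.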

  1. It’s able to see how it’s moving relative to the B&W pattern. If it loses its way as it moves (after the buffer fills up with many more samples), then it will be globally lost, but still able to orient itself locally as long as there is contrast in the visual pattern it can see below.

    If you are seriously interested in this, here’s a Python-based course (including code) and video set. It’s the full SLAM with bundle adjustment etc., and pretty much state of the art. Implement it on your embedded CPU of choice: http://www.youtube.com/playlist?list=PLpUPoM7Rgzi_7YWn14Va2FODh7LzADBSm
    Thanks, Claus Brenner.

    I’d suggest the polyhelo or OpenPilot Revolution platforms, both of which use Cortex-M4 ARM CPUs with an FPU. They have multi-DOF sensors on them.

    When MicroPython is up and running (February, maybe?) you might be able to use an STM32F429 (25 USD from STM) and code it all in Python directly. (FYI, MicroPython can compile down to native code for speed.)
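To make the “orient itself locally from the contrast below” idea concrete, here is a minimal frame-to-frame drift estimate using pyramidal Lucas-Kanade optical flow in OpenCV. The frame file names are placeholders, and this is only a sketch of the general technique, not the SLAM pipeline from the course above.

```python
# Estimate frame-to-frame image motion with pyramidal Lucas-Kanade
# optical flow: the "track the contrast below you" idea in its simplest
# form. 'frame_000.png' and 'frame_001.png' are placeholder images.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick corners in the previous frame, then track them into the current one.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                             minDistance=8)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

good = status.ravel() == 1
flow = (p1[good] - p0[good]).reshape(-1, 2)

# Median flow is a crude estimate of how the camera drifted between frames.
dx, dy = np.median(flow, axis=0)
print("apparent drift: %.1f px right, %.1f px down" % (dx, dy))
```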
