Hackaday Prize Entry: A 3D Mapping Drone

Quadcopters show a world of promise, and not just in the realms of advertising and flying Phantoms over very large crowds. They can also be put to genuinely useful work, and [Sagar]’s entry for The Hackaday Prize does just that. He’s developing a 3D mapping drone for farmers, miners, students, and anyone else who would like high-resolution 3D maps of their local terrain.

Most high-end mapping and photography work done with quadcopters uses heavy DSLRs to record images that are brought back to the base station and stitched into a 3D model. While this works, GoPros are getting really, really good these days, with 4K resolution to boot. [Sagar] is mounting one of these to a custom quad and flying around to capture images of an area from every angle.

To stitch the images together, [Sagar] will be using the Pix4D mapping software, an impressive bit of software that converts a multitude of still images into a 3D scene. It’s expensive, at $8500 for a perpetual license, but it can be rented for $350/month until a FOSS alternative can be developed.



27 thoughts on “Hackaday Prize Entry: A 3D Mapping Drone”

  1. 4K isn’t actually impressive for a mapping application. 4K is far lower resolution than what can be flown in the mirrorless camera range, which, while those cameras do weigh and cost more, produces far better imagery and enables faster flights, more coverage per photo, and less noise thanks to a larger pixel pitch, plus support for higher-quality optics. For less additional money than one month of Pix4D at the price quoted above you could get a 24MP mirrorless camera that produces better stills than a GoPro and does far better for the purpose of mapping.

    I’m all in favor of getting your feet wet with early testing on GoPro cameras, but in the end they are a short term gain until stepping up to a higher quality sensor (for mapping).

  2. I experimented a bit with VisualSFM (http://ccwu.me/vsfm/), which uses the SfM (structure from motion) technique and is free to use. I have not been very successful, but then again I could still spend more time on it.

    I thought the main problem with the GoPro and its ilk (I use a Boscam HD19+) is the distortion from the fisheye lens, which leaves only a small area in the center usable.

    1. WordPress lost my longer comment and I ain’t writing it all again.

      Summary:

      Not GoPro’s fault: Straight lines aren’t straight.
      Train tracks close at horizon, wide near viewer, close at other horizon -> Curved appearance.

      ‘Curved’ lines make the maths for getting structure easier, because they match reality.
      But they make (straight) line detection tricky.

        1. So we don’t need another “drone”, since, for instance, 3DR’s APM and Pixhawk already support surveying, with integrated camera controls driven by the Mission Planner software.

          We need a software solution to create the actual mesh from video footage (or pictures), and if possible from footage taken with fisheye lenses, although not necessary.

    2. Every lens type “distorts” the image in some way. If you’ve got a correct mathematical model of your lens, you can do photogrammetry (aka SfM, aka SLAM, aka …) correctly; if you haven’t got one for whatever lens your camera has, then you’re screwed. Most photogrammetry and/or panorama-stitching programs can model fisheye lenses as easily as rectilinear ones.

      As for Pix4D, VisualSFM, Agisoft, 123D Catch, and Hypr3D/Cubify: there are free alternatives, and I’d venture to say some of these commercial cloud-based programs are probably just GUIs wrapping free software implementations such as PMVS/CMVS and PPT. The open-source OpenDroneMap is also based on these, I believe.
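The lens-model point above can be made concrete with a minimal sketch. Assuming an ideal equidistant fisheye (r = f·θ) and an ideal rectilinear projection (r = f·tan θ) — real cameras need a calibrated polynomial model, so these formulas are illustrative only:

```python
import math

def fisheye_to_rectilinear_radius(r_fish, f):
    """Map a radial pixel distance in an ideal equidistant fisheye image
    (r = f * theta) to the radius it would have in an ideal rectilinear
    image (r = f * tan(theta)). f is the focal length in pixels."""
    theta = r_fish / f          # angle from the optical axis
    return f * math.tan(theta)  # where a rectilinear lens would put it

# Example: with f = 800 px, a point 400 px from the image centre
# (theta = 0.5 rad) maps to 800 * tan(0.5) ≈ 437 px.
```

The stretch factor tan(θ)/θ grows rapidly toward the edges, which is why only the centre of an uncorrected fisheye frame looks usable — but with the model known, the whole frame carries recoverable geometry.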

  3. GoPro’s fisheye distortion may look fine when you’re surfing, but it certainly doesn’t do any favors for mapping applications. You can easily find cheaper cameras with better resolution and better lenses.

  4. This seems like a great project for getting students into mapping, but for miners and farmers I don’t think this is up to the task.
    The 1 cm accuracy is fantasy. Even the best military/surveying differential GPS is only good to 4-10 cm. If [Sagar] is using GPS to align his pictures, the mosaic won’t even be internally consistent at that accuracy. Pixel size != image accuracy. Similar commercial products claim a more believable accuracy of 1-5 m (3-16 ft).
    As cool as orthoimagery and 3D models are, contour lines and coverage layers are in many respects more useful. Most places people are mining for placer gold are in arid regions, where LiDAR and contour maps from 10 years ago are accurate enough to narrow the search area.
    Any project that makes more information available is cool; this one just seems oversold.
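The "pixel size != image accuracy" distinction is worth quantifying: ground sample distance (GSD) only tells you how much terrain one pixel covers, not where that pixel ends up on the map. A rough pinhole-camera sketch, with illustrative GoPro-like numbers that are not from the project:

```python
def ground_sample_distance(altitude_m, focal_mm, sensor_w_mm, image_w_px):
    """Metres of ground covered by one pixel (pinhole approximation,
    camera pointed straight down over flat terrain)."""
    return altitude_m * sensor_w_mm / (focal_mm * image_w_px)

# Illustrative numbers: 1/2.3" sensor (~6.17 mm wide), ~3 mm focal
# length, 4000 px wide frame, flown at 100 m altitude.
gsd = ground_sample_distance(100, 3.0, 6.17, 4000)   # ≈ 0.05 m/pixel
```

So even a setup resolving ~5 cm per pixel can still be metres off in absolute position if the only georeference is consumer-grade GPS — fine pixels, coarse map.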

    1. Normally the images are aligned using image matching, not GPS. 1 mm of relative accuracy is certainly possible with a not-too-expensive setup not flying very high. Absolute accuracy is much more difficult, requiring calibrated ground control points.

    2. Using Agisoft PhotoScan for numbers here, due to a large amount of experience with it: internal reconstruction errors tend to average around 1.2 pixels, so after internal alignment you have a reconstruction error of about 1.2 pixels. Then for world alignment you need ground control points (GCPs). With GCPs you can place the stitch at the level of accuracy of your ground control data, and with professional surveying equipment a GCP accuracy of sub-5 cm is easily doable. Assuming a GSD of 2.2 cm and a GCP accuracy of 5 cm, you should be within 7 cm accuracy everywhere in the orthomosaic. Our tests with GCPs that were correct to within 10 cm did confirm this.
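The error budget described above combines roughly as follows — a sketch using the commenter's example figures (not measured data), summed linearly as a worst case:

```python
gsd_cm = 2.2                  # ground sample distance: cm of terrain per pixel
recon_err_px = 1.2            # average internal reconstruction error, in pixels
gcp_err_cm = 5.0              # accuracy of the surveyed ground control points

recon_err_cm = recon_err_px * gsd_cm        # ≈ 2.6 cm on the ground
total_err_cm = recon_err_cm + gcp_err_cm    # ≈ 7.6 cm worst case
```

A straight sum gives about 7.6 cm, in line with the roughly 7 cm quoted above (treating the two error sources as independent would give a somewhat smaller figure).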

      GCPs are required with the APM platform if you need world accuracy, both to compensate for baro drift (in the altitude of the craft over a long flight) and for the lack of feedback on where each picture was actually taken, which is usually off by 0.2-0.5 seconds (a problem to be compensated for when traveling at 16 m/s with a GPS that is only good to within 5 meters anyway).

      Most SfM software (Agisoft PhotoScan, Pix4D, Correlator3D) will create a dense point cloud similar to what you receive from a LiDAR system, and assuming a half-decent imager it will almost always exceed LiDAR’s point density. I’ve heard from others that this is accurate enough, and the price difference versus LiDAR systems compelling enough, that they have swapped. I have not yet benchmarked against LiDAR systems. For reference, a typical UAV LiDAR system can achieve ~100 points per square meter and costs between $55,000 and $185,000 USD.

      (And contour maps are simply a product derived from the point clouds/DSMs created by any of the SfM packages discussed.)
