Drones Are Getting A Lot Smarter

DJI, everyone’s favorite (but very expensive) drone company, just announced the Manifold, an extremely capable, high-performance embedded computer for the future of aerial platforms. And guess what? It runs Ubuntu.

The unit features a quad-core ARM Cortex-A15 processor with an NVIDIA Kepler-based GPU and runs Canonical’s Ubuntu OS with support for CUDA, OpenCV, and ROS. The best part is that it is compatible with third-party sensors, allowing developers to really expand a drone’s toolkit. The benefit of having such a powerful computer on board is that you can collect and analyze data in one shot, rather than relaying the raw output down to your control hub.
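
To make the onboard-processing idea concrete, here is a minimal sketch of a ROS node that subscribes to a camera topic, runs a quick OpenCV operation, and republishes only the processed result instead of streaming raw frames to the ground. It is purely illustrative and is not based on DJI's Manifold SDK; the topic names are assumptions:

```python
# Hypothetical onboard-processing sketch (not DJI's SDK): subscribe to a raw
# camera topic, process frames with OpenCV on the drone, and publish only the
# much smaller result instead of relaying raw video to the ground station.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_frame(msg):
    # Convert the ROS image message to an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)    # cheap stand-in for real analysis
    pub.publish(bridge.cv2_to_imgmsg(edges, encoding='mono8'))

rospy.init_node('onboard_vision')
pub = rospy.Publisher('/camera/edges', Image, queue_size=1)   # assumed topic
rospy.Subscriber('/camera/image_raw', Image, on_frame)        # assumed topic
rospy.spin()
```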

And because of the added processing power and the zippy GPU, drones using this device will have new artificial intelligence applications available, like machine learning and computer vision. Yeah, drones are going to be able to recognize and track people; it’s only a matter of time.
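
As a taste of the kind of computer vision that becomes practical with this much onboard horsepower, here is a minimal person-detection sketch using OpenCV's stock HOG pedestrian detector on a single frame. It has nothing to do with DJI's software, and the image file name is a placeholder:

```python
# Minimal people-detection sketch using OpenCV's built-in HOG pedestrian
# detector. Illustrative only; 'frame.jpg' is a placeholder and this is not
# tied to the Manifold's software in any way.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread('frame.jpg')     # placeholder input image
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('detections.jpg', frame)
print('people found:', len(rects))
```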

We wonder what this will mean for FAA regulations…

49 thoughts on “Drones Are Getting A Lot Smarter”

  1. Why Ubuntu? It doesn’t seem like the lightest distro to put on it. I don’t know, I don’t really want to slow down computer vision because freaking Unity is loading some ugly effects on a nonexistent output display. Nor do I want to mess with the boot sequence just because I need to start a script on boot (see the unit-file sketch after this comment).

    I’ve been using Arch Linux for so long that I don’t see Ubuntu as a light and easy-to-use distro. Good for the GUI only, awful when it comes to the CLI. Does Ubuntu finally run systemd?
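
For what it's worth, starting a script at boot on a systemd-based distro doesn't require touching the boot sequence; a single unit file is enough. This is just a generic sketch, and the names and paths (vision.service, /opt/vision/run.sh) are made up for illustration:

```ini
# Hypothetical /etc/systemd/system/vision.service -- the name and path are
# placeholders, not taken from any real product.
[Unit]
Description=Start the onboard vision script at boot
After=network.target

[Service]
ExecStart=/opt/vision/run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable vision.service` makes it start on every boot.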

        1. Yeah, after hearing all the hype about how Ubuntu was the Messiah who had finally arrived to save the unwashed masses from their ignorance of Linux, I decided to give it a spin after a 24-year addiction to Windows that started when I was three years old.

          I found that it had tons of issues with actually using the hardware on my PC: it needed work to get the NIC going, the GUI wouldn’t work without help I didn’t know how to give it, and so on. It wouldn’t boot to a command line, and I didn’t know how, where, or why to read the log files that were supposedly hidden away in some directory I wasn’t familiar with.

          I booted up a copy of Debian, and everything went smoothly. As I learned my way around this new-to-me system, I developed the skills to quickly deal with all those old issues I had initially run into.

          And I never bothered booting up Ubuntu again.

          The hype about how “it just works” didn’t hold up. And for people new to Linux, needing dozens of hours of research and adjustments just to hopefully get it to boot wasn’t going to happen.

          Of course, I have seen a buddy of mine who was also new to Linux just pop in the live CD and have everything work out of the box…

          IDK

    1. It appears to just be an NVIDIA Jetson TK1, going by the specs. Ubuntu is the default distro of the board support package, so they are likely just redistributing that image (perhaps with slight changes) rather than developing and maintaining their own.

    1. The Jetson TK1 draws about 10 W worst case; let’s call it 20 W to be safe. The Phantom draws about 180 W when flying. Assuming the drone carrying this is a little bigger, call it 200 W. So in the absolute worst case, the computer is drawing 10% of the total power. It is likely a lot better than that, so maybe 2 minutes of flight time lost, perhaps 5 with the extra weight.
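
Running that back-of-the-envelope estimate with the same guessed numbers, plus an assumed 20-minute baseline flight time (roughly Phantom-class), looks like this; every input is a guess, not a measurement:

```python
# Back-of-the-envelope flight-time estimate using the numbers guessed above.
# All inputs are assumptions, not measured values.
computer_w = 20.0        # worst-case draw of the onboard computer (guess)
total_w = 200.0          # total draw of the quad while flying (guess)
baseline_min = 20.0      # assumed baseline flight time in minutes

fraction = computer_w / total_w                 # 0.10, i.e. 10% of the budget
new_flight_min = baseline_min * (1 - fraction)  # crude linear model
print(f"{fraction:.0%} of power, ~{baseline_min - new_flight_min:.0f} min lost")
# -> 10% of power, ~2 min lost
```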

  2. You’ve been able to put an SBC of your choice on a multicopter for a while now; in fact, Linux-based SBCs such as the Raspberry Pi have been supported targets for the ArduPilot flight control software for a while, with support for all kinds of sensors. In time they’ll probably become the main target (which is currently the Pixhawk).

    1. The ODROID boards were a huge boon to a lot of the researchers who use quads, like Vijay Kumar at UPenn. Enough power to do vision and mapping, Ubuntu support out of the box, and they can use USB peripherals.

    1. Tegra has 20x the compute capacity of the ODROID boards, and so 30-40x that of the Raspberry Pi. If you *ever* want to do AR tags, monocular SLAM, or stereo vision, you are going to need a Tegra chip with the GPU core and the optimized OpenCV libraries.
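
For a sense of what the stereo-vision workload looks like in code, here is a minimal OpenCV block-matching sketch that turns an already-rectified left/right pair into a disparity map. It uses the plain CPU StereoBM implementation; the per-pixel matching it does is exactly the sort of thing a GPU-accelerated OpenCV build speeds up. The file names are placeholders:

```python
# Minimal stereo-depth sketch using OpenCV's CPU block matcher. File names are
# placeholders, and the images are assumed to be an already-rectified pair.
import cv2

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

# Classic block matching: more disparities = more depth range,
# bigger blocks = smoother but blurrier output.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Scale to 0-255 so the result can be saved as a viewable image.
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
cv2.imwrite('disparity.png', disp_vis)
```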

      1. Anandtech just ran an article about the new Jetson TX1 board, and they pointed out that the Jetson TK1 was such an easy way to get huge compute power, with a ton of connectors and I/O expanders already built in, that a bunch of companies were just using the whole TK1 board as a production item going into finished products. This might literally be a case around a TK1. And I’m curious what other products out there have a whole TK1 in them…

  3. Linux is so yesterday. Now a GPU for control… that’s interesting. Every time I see what nVidia can do with their CUDA cores, it makes me wonder why they aren’t working on a massively parallel chip that can drop into an i7 socket.

    1. It’s only really good for lots of little tasks; most computer programs are still designed to take advantage of only one core. While multithreaded programs are on the rise, an individual GPU core is nowhere near powerful enough to take advantage of that.

      1. Correct. No GPU is self-hosting, because they lack the conveniences scalar CPUs have for running OSes and dealing with peripherals. The embedded ARM cores and peripherals are what set the Tegra TX apart from other GPUs.

        The oddball is the Intel MIC processors. The upcoming Knights Landing will be a self-hosted CPU with a bunch of x86-compatible tiles in an on-chip network, all running Linux. The older Knights Corner is a PCIe add-in card like a GPU, but under the hood it runs Linux on its tiles.

        1. I totally agree. I think that, in a way, the Moore’s-law focus on making things smaller has stopped innovation in other directions; now that shrinking is getting harder, chip companies are looking at making chips differently, and that’s why we are getting products like Knights Landing.

      2. I’ll concede that most software doesn’t require more than a couple of cores, if that. But there are a few common niches that would take full advantage of as many cores as a mobo can socket, if given the opportunity. The most obvious is design and engineering software. For instance, for photo-realistic rendering my CAD package can actually max out dual quad-core CPUs, and I also have a very cheap virtual wind tunnel that will max out the same way. AFAIK, the same goes for FEA and other analysis software. And all of this tends to run on Windows, because those same users typically also need office apps and networked accounting. Add in the fact that a lot of this work happens while mobile, and it becomes really nice to have that kind of computing power in a notebook. As things stand, software has to be written against PhysX to use the power of my laptop GPU for anything other than spinning pretty CAD models. Imagine if nVidia stitched a couple hundred cores into a CPU: the software would be maxed out instead of the hardware.
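
To make the "uses every core you give it" point concrete, here is a toy sketch of an embarrassingly parallel, CPU-bound job (standing in for rendering tiles or FEA elements) spread across all available cores with Python's multiprocessing. It is an illustration of the scaling pattern, not real rendering code:

```python
# Toy illustration of a CPU-bound job that scales with core count, standing in
# for rendering tiles or FEA elements. Not real rendering code.
from multiprocessing import Pool, cpu_count

def render_tile(seed):
    # Busy-work stand-in for rendering one tile of an image.
    acc = 0
    for i in range(1, 2_000_000):
        acc += (seed * i) % 7
    return acc

if __name__ == '__main__':
    tiles = list(range(64))             # pretend the image is split into 64 tiles
    with Pool(cpu_count()) as pool:     # one worker process per core
        results = pool.map(render_tile, tiles)
    print(f"rendered {len(results)} tiles across {cpu_count()} cores")
```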

        1. I agree; video editing programs, CAD programs, and others can use GPUs for rendering, which speeds up the process 100x. I just meant that in general a GPU is no good for most applications. I think GPUs will be used more and more, but they can never be a serious full replacement for a CPU.

    2. A bunch of years ago, NVidia was trying to make an x86-compatible CPU/GPU chip of some sort. As I understand it, due to the way x86 licensing works, you can’t just implement support directly; you have to do a real workaround with dynamic recompiling or some such translation technology. Transmeta tried it, and even with Linus Torvalds helping them, they couldn’t pull it off.

      Supposedly, Nvidia is trying it again, and the 64-bit ARM architecture in their TX1 chips is poised to be the hardware configuration to do it.

  4. “Yeah, drones are going to be able to recognize and track people; it’s only a matter of time”

    If you’d been at the CPSE show in Shenzhen a couple of weeks back, you’d have seen that some people in China are already doing this commercially.

    1. And Lily Robotics is launching a whole product based on this. I think even the 3DRobotics ones can do it, too.

      And have a look at the MeCam. It was a little project by a sole inventor, and he tried to sell it up to a big buyer. I got to see one in action… it was this teeny little quadrotor, and it streamed video right to a cell phone, and it could *totally* identify people and faces, and it was all being done onboard on the quad, not in the phone.

      It used a TI OMAP4 chip, I am pretty sure, and all the facial recognition is done right inside the chip. There are libraries for its DSP cores that are optimized to do it in a flash, so those chips can go into phones and camcorders and such (a rough sketch of that kind of onboard face detection follows this comment).

      It is just a little weird to see how, for one sector, a feature can be “cutting edge”, while for another sector it is old hat and a ton of resources were already spent implementing it… but it is hidden behind some huge corporate and licensing obscurity.
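
Those DSP-accelerated libraries are proprietary, but the general idea of cheap onboard face detection can be sketched with OpenCV's stock Haar cascade. This is a generic illustration, not the MeCam's or TI's code, and the file names are placeholders:

```python
# Generic face-detection sketch with OpenCV's bundled Haar cascade. This is
# not the MeCam's or TI's DSP code; file names are placeholders.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

frame = cv2.imread('frame.jpg')               # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite('faces.jpg', frame)
print('faces found:', len(faces))
```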

    1. There will always be a microcontroller or other hard-realtime system doing all the critical control loops. This board would operate at a higher level, doing mapping and navigation, and send movement commands down to the realtime attitude controller.
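
That split (a companion computer talking down to a realtime flight controller) is usually carried over a serial or UDP MAVLink link. Here is a minimal pymavlink sketch that waits for the autopilot's heartbeat and then reads its attitude estimate; the connection string is an assumption and depends entirely on how the two boards are wired together:

```python
# Minimal companion-computer sketch: talk MAVLink to the realtime flight
# controller and read its attitude estimate. The serial port and baud rate
# are assumptions and depend on the actual wiring.
from pymavlink import mavutil

link = mavutil.mavlink_connection('/dev/ttyTHS1', baud=921600)  # assumed port
link.wait_heartbeat()                 # block until the autopilot is talking
print('heartbeat from system', link.target_system)

for _ in range(10):
    msg = link.recv_match(type='ATTITUDE', blocking=True)
    print('roll %.2f pitch %.2f yaw %.2f' % (msg.roll, msg.pitch, msg.yaw))
```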

  5. How long until we have racist drones flying around shooting up all the Latinos? The facial recognition doesn’t see black people yet, so you’re safe for now. My drone uses FLIR to locate optimal places to grow trees and flies around planting trees.
