[DJI], everyone’s favorite (but very expensive) drone company, just announced the Manifold, an extremely capable, high-performance embedded computer for the future of aerial platforms. And guess what? It runs Ubuntu.
The unit features a quad-core ARM Cortex-A15 processor with an NVIDIA Kepler-based GPU and runs Canonical’s Ubuntu OS with support for CUDA, OpenCV, and ROS. The best part is that it is compatible with third-party sensors, allowing developers to really expand a drone’s toolkit. The benefit of having such a powerful computer on board is that you can collect and analyze data in one shot, rather than relaying the raw output down to your control hub.
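As a rough sketch of what that onboard analysis could look like with the advertised ROS and OpenCV support, here’s a minimal rospy node; the camera topic and the edge-detection step are our own placeholder assumptions, not anything DJI has published:

```python
#!/usr/bin/env python
# Minimal sketch of onboard processing with ROS + OpenCV.
# The '/camera/image_raw' topic and the edge-detect 'analysis' are assumptions.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_frame(msg):
    # Convert the ROS image to an OpenCV frame and analyze it on board,
    # instead of streaming raw video down to the ground station.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    edges = cv2.Canny(frame, 100, 200)
    rospy.loginfo('edge pixels: %d', cv2.countNonZero(edges))

rospy.init_node('onboard_vision')
rospy.Subscriber('/camera/image_raw', Image, on_frame)
rospy.spin()
```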
And because of the added processing power and the zippy GPU, drones using this device will have new artificial intelligence applications available, like machine learning and computer vision. Yeah, drones are going to be able to recognize and track people; it’s only a matter of time.
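If that sounds far off, OpenCV already ships a stock pedestrian detector; here’s a hedged sketch of what person detection could look like on hardware like this (the camera index is an assumption, and real tracking would take considerably more work):

```python
# Sketch: OpenCV's built-in HOG pedestrian detector.
# Camera index 0 is an assumption; a CUDA build would run far faster.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Returns one bounding box per detected person.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('people', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
```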
We wonder what this will mean for FAA regulations…
There’s a new virus that’s going to come out, and it’s robots reproducing, but I don’t think that’s possible with this thing, yet.
It’s the UAVRap project!
Lol!
Why Ubuntu? It doesn’t seem like the lightest distro to put on it. I don’t know; I don’t really want to slow down computer vision because freaking Unity is loading some ugly effects on a nonexistent output display. Nor do I want to mess with the boot sequence because I need to start a script on boot.
I’ve been using Arch Linux for so long that I don’t see Ubuntu as a light and easy-to-use distro. Good for GUI only, awful when it comes to the CLI. Does Ubuntu finally run systemd?
You do realize that Ubuntu != Unity. They have GUI-less versions as well, and the latest Server and Snappy Ubuntu releases do use systemd and are incredibly fast. I suspect you are simply working from out-of-date information.
Yes, I gave up on Ubuntu a few years ago.
Yeah, after hearing all the hype about how Ubuntu was the Messiah, finally arrived to save the unwashed masses from their ignorance of Linux, I decided to give it a spin after a 24-year addiction to Windows, since I was three years old.
I found that it had tons of issues utilizing the hardware on my PC: it needed work to get the NIC working, the GUI wouldn’t start without help I didn’t know how to give it, and so on. It wouldn’t boot to a command line, and I didn’t know how, where, or why to read the log files that were supposedly hidden away in some directory I wasn’t familiar with.
I booted up a copy of Debian, and everything went smoothly. As I learned about using this new-to-me system, I developed the skills to quickly deal with all those old issues I had initially run into.
And I never bothered booting up Ubuntu again.
The hype about how “it just works” didn’t hold up. And for people new to Linux, needing dozens of hours of research and adjustments to hopefully get it to boot wasn’t going to happen.
Of course, I have seen a buddy of mine who was also new to Linux just pop in the live CD and have everything work out of the box…
IDK
Yup, that’s Linux’s problem. If you find yourself off the well-trodden path, all of a sudden you’re in the jungle.
Ubuntu is still incredibly bloated for an embedded device. DEs aren’t the only place where performance matters.
Ubuntu Core
Ubuntu is the only platform officially supported for the Robot Operating System (ROS).
Good to know
It appears to just be an NVIDIA Jetson TK1, going by the specs. Ubuntu is the default distro of the board support package, so they are likely just redistributing that image (perhaps with slight changes) rather than developing and maintaining their own.
I’m wondering why Minix isn’t used more often for microcontrollers: small kernel, load what you need, and be done with it.
Are you aware that LinuxCNC runs on a stripped-down, real-time version of Ubuntu? Not all distros are created equal, and crashing a drone is likely a lot less expensive than crashing a CNC machine.
I use LinuxCNC and love it… dead reliable, and I have never had a configuration problem… and I’m a Windows user for everything else.
Ubuntu runs Minecraft better than Windows, too…
Because there was so much extra juice in quadcopter batteries … Any flight time left?
Exactly what I thought. I don’t think image processing is worth two minutes of flight. Someone add a parachute to this computer… I mean, drone… I mean, quad, in case of low power due to the extra processing.
The Jetson TK1 draws about 10 W worst case; let’s call it 20 W. The Phantom draws about 180 W when flying. Assuming this one is a little bigger, call it 200 W total. So in the absolute worst case the computer is drawing 10% of the total power draw. It is likely a lot better than that, so maybe 2 minutes of flight time lost, or 5 with the extra weight.
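The back-of-the-envelope version, with a battery capacity I’m assuming is roughly Phantom-class:

```python
# Back-of-the-envelope flight-time hit; the 68 Wh pack is an assumption.
battery_wh = 68.0    # roughly a Phantom-class battery
flight_w   = 180.0   # hover draw, per the numbers above
computer_w = 20.0    # worst-case Jetson TK1 draw, doubled for margin

baseline_min = battery_wh / flight_w * 60                  # ~22.7 min
with_pc_min  = battery_wh / (flight_w + computer_w) * 60   # ~20.4 min
print(round(baseline_min - with_pc_min, 1))                # ~2.3 minutes lost
```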
You’ve been able to put an SBC of your choice on a multicopter for a while now; in fact, Linux-based SBCs such as the Raspberry Pi have been supported targets for the ArduPilot flight control software for a while, with support for all kinds of sensors. In time they’ll probably become the main target (which is currently the Pixhawk).
The ODROID boards were a huge boon to a lot of the researchers who use quads, like Vijay Kumar at UPenn. Enough power to do vision and mapping, Ubuntu support out of the box, and they can use USB peripherals.
Not really anything new, is it? http://www.emlid.com/shop/navio-4/
Strange, as what you are linking to isn’t anywhere near as powerful as what the article mentions.
Tegra has 20x the compute capacity of the ODROID boards, and so 30-40x that of the Raspberry Pi. If you *ever* want to do AR tags, monocular SLAM, or stereo vision, you are going to need a Tegra chip with the GPU core and the optimized OpenCV libraries.
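For a concrete (and hedged) example of why the GPU matters, this is what stereo block matching looks like through OpenCV’s CUDA module; it assumes an OpenCV build compiled with CUDA support and a pair of already-rectified images:

```python
# Sketch: GPU stereo block matching. Assumes OpenCV built with CUDA and
# that left.png / right.png are rectified grayscale stereo images.
import cv2

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

gpu_left, gpu_right = cv2.cuda_GpuMat(), cv2.cuda_GpuMat()
gpu_left.upload(left)
gpu_right.upload(right)

# The block matcher runs on the CUDA cores instead of the ARM CPU.
stereo = cv2.cuda.createStereoBM(numDisparities=64, blockSize=19)
disparity = stereo.compute(gpu_left, gpu_right).download()
cv2.imwrite('disparity.png', disparity)
```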
>Manifold, an extremely capable
So capable they don’t have a SINGLE demo :)
It’s essentially a Jetson TK1. Look on YouTube if you want to see people showing off its power.
Jetson! Love it! Rosie and Astro would approve.
AnandTech just ran an article about the new Jetson TX1 board, and they pointed out that the Jetson TK1 board was such an easy way to get huge compute power and a ton of connectors and I/O expanders already built in that a bunch of companies were just using the whole TK1 board as a production item, going into finished products. This might literally be a case around a TK1. And I’m curious what other products out there have a whole TK1 in them…
Linux is so yesterday. Now a GPU for control… that’s interesting. Every time I see what NVIDIA can do with their CUDA cores, I wonder why they aren’t working on a massively parallel chip that can drop into an i7 socket.
It’s only really good for lots of little tasks; most computer programs are still designed to take advantage of only one core. While multithreaded programs are on the rise, a single GPU core is nowhere near powerful enough to run them on its own.
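To picture the kind of workload GPUs do excel at, here’s a toy data-parallel kernel (written with the numba library, which is just my choice for the sketch): a million tiny identical tasks, one lightweight thread each.

```python
# Toy example of the 'lots of little tasks' shape of GPU work.
# numba is an assumption here; any CUDA toolchain shows the same idea.
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, data, k):
    i = cuda.grid(1)          # one lightweight GPU thread per element
    if i < data.size:
        out[i] = data[i] * k

data = np.arange(1_000_000, dtype=np.float32)
out = np.zeros_like(data)
threads = 256
blocks = (data.size + threads - 1) // threads
scale[blocks, threads](out, data, 2.0)   # a million tiny identical tasks
```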
Correct. No GPU is self-hosting, because they lack the conveniences scalar CPUs have for running OSes and dealing with peripherals. The embedded ARM cores and peripherals are what set the Tegra TX1 apart from other GPUs.
The oddballs are the Intel MIC processors. The upcoming Knights Landing will be a self-hosted CPU with a bunch of x86-compatible tiles in an on-chip network, all running Linux. The older Knights Corner is a PCIe add-in card like a GPU, but under the hood it runs Linux on its tiles.
I totally agree. I think in a way Moore’s law of making things smaller stopped innovation in other directions; now that shrinking is getting harder, chip companies are looking at making chips differently, and that’s why we are getting products like Knights Landing.
I’ll concede that most software doesn’t require more than a couple of cores, if that. But there are a few common niches that would take full advantage of as many cores as a motherboard can socket, given the opportunity. The most obvious is design and engineering software. For instance, for photo-realistic rendering, my CAD package can actually max out a dual quad-core CPU. I also have a very cheap virtual wind tunnel that will max out the same way. AFAIK, the same goes for FEA and other analysis software. And all of this tends to run on Windows, because those same users typically also need office apps and networked accounting. Add in the fact that a lot of this work happens while mobile, and it becomes really nice to have that kind of calculating power in a notebook. As things stand, PhysX software needs to be specially written to use the power of my laptop GPU for anything other than spinning pretty CAD models. Imagine if NVIDIA stitched a couple hundred cores into a CPU: the software would be maxed out instead of the hardware.
I agree: video editing programs, CAD programs, and others can use GPUs for rendering, which speeds up the process 100x. I just meant that in general a GPU is no good for most applications. I think GPUs will be used more and more, but they can never be a serious full replacement for a CPU.
So what’s today? Windows 10 IoT?
A bunch of years ago, NVIDIA was trying to make an x86-compatible CPU/GPU chip of some sort. As I understand it, due to the way x86 licensing works, you can’t just implement support directly; you have to do a real workaround and use dynamic recompiling or some such translation technology. Transmeta tried it, and they even had Linus Torvalds helping them, and they couldn’t do it.
Supposedly, NVIDIA is trying it again, and the 64-bit ARM architecture in their TX1 chips is poised to be the hardware configuration to do it.
Roll your own with a Jetson TK1? It looks like a custom board based on the Tegra K1. Note the JST connectors and the changed connector layout.
I have a suspicion that the Manifold IS a Jetson TK1 inside. The 192 GPU cores are a dead giveaway. Check out the new NVIDIA Jetson TX1:
http://www.nvidia.com/object/jetson-tx1-dev-kit.html
Yes. The specs match, but it’s not the TK1 (dev board). It is the K1 (processor). So… $100, anyone?
“Yeah, drones are going to be able to recognize and track people; it’s only a matter of time”
If you’d been at the CPSE show in Shenzhen a couple of weeks back, you’d have seen that some people are already doing this commercially in China.
And Lily Robotics is launching a whole product based on this. I think even the 3D Robotics ones can do it, too.
And have a look at the MeCam. It was a little project by a sole inventor, and he tried to sell it to a big buyer. I got to see one in action… it was this teeny little quadrotor, and it streamed video right to a cell phone, and it could *totally* identify people and faces, and it was all being done onboard the quad, not in the phone.
It used a TI OMAP4 chip, I am pretty sure, and all the facial recognition is done inside the chip. There are libraries for its DSP cores that are optimized to do it in a flash, so those chips can go into phones and camcorders and such.
It is just a little weird to see how a feature can be “cutting edge” for one sector while for another it is old hat, with a ton of resources already expended implementing it… yet hidden behind some huge corporate and licensing obscurity.
The one thing I can really see this being great for is the addition of obstacle avoidance.
How is the real-time processing handled? Not at all? Kind of unsafe, isn’t it?
There will always be a microcontroller or other hard real-time system to do all the critical control loops. This board would operate at a higher level, doing mapping and navigation, and send movement commands down to the real-time attitude controller.
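A hedged sketch of that split, using pymavlink to push a velocity setpoint from the companion computer down to the autopilot (the serial port, baud rate, and setpoint values are all assumptions):

```python
# Sketch: companion computer sends high-level setpoints over MAVLink while
# the microcontroller keeps the hard real-time attitude loops. Port and
# baud rate are assumptions.
from pymavlink import mavutil

fc = mavutil.mavlink_connection('/dev/ttyTHS1', baud=921600)
fc.wait_heartbeat()  # wait until the autopilot is talking to us

# Ask for 1 m/s forward in the body frame; the inner control loops stay
# on the flight controller, we only say where to go.
fc.mav.set_position_target_local_ned_send(
    0,                                    # time_boot_ms (often ignored)
    fc.target_system, fc.target_component,
    mavutil.mavlink.MAV_FRAME_BODY_OFFSET_NED,
    0b0000111111000111,                   # type_mask: velocity fields only
    0, 0, 0,                              # x, y, z position (ignored)
    1.0, 0, 0,                            # vx, vy, vz in m/s
    0, 0, 0,                              # accelerations (ignored)
    0, 0)                                 # yaw, yaw_rate (ignored)
```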
How long until we have racist drones flying around shooting up all the latinos? The facial recognition doesn’t see black people yet, so you’re safe for now. My drone uses FLIR to locate optimal places to grow trees and flies around planting trees.
Where can I get one?! Best thing I have read all day, haha.
Wow! Literally machine learning on the fly!
The Erle-Brain has been around for a while and is open source. It uses a BeagleBone Black running Snappy Ubuntu and ROS.
http://erlerobotics.com/docs/Artificial_Brains/Erle-Brain/Intro.html
The LAST thing you want a drone running is Ubuntu. Just wait for some cryptic, impossible-to-debug D-Bus and/or NetworkManager failure to occur and CRASH.