Unlocking Drones With Go

Looking for a first project in a relatively new language that’ll stretch your abilities? [Ron] was, so he hacked a commercially available drone and opened up a lot of its functionality, while writing the client software in Go.

The drone is a DJI Tello, which has some impressive hardware, like a 14-core Intel processor and excellent video processing abilities. There's also a vibrant community and a lot of support, making it the ideal platform for a project like this. It communicates with a base station over WiFi, and using tools like Wireshark, [Ron] was able to decipher a lot of the communications and create a whole new driver for the drone. While the drone can still be flown in the traditional way, users can now write programs to control it as well.
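For a taste of what driving a Tello from Go looks like, here's a minimal sketch. Rather than the reverse-engineered binary protocol the new driver implements, it talks to the drone's documented plain-text SDK endpoint (UDP, 192.168.10.1:8889); the command pacing is a guess, and real code would read the drone's "ok" replies.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// The Tello listens for plain-text SDK commands on UDP port 8889.
	conn, err := net.Dial("udp", "192.168.10.1:8889")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "command" switches the drone into SDK mode; then fly a short hop.
	for _, cmd := range []string{"command", "takeoff", "land"} {
		if _, err := conn.Write([]byte(cmd)); err != nil {
			log.Fatal(err)
		}
		fmt.Println("sent:", cmd)
		time.Sleep(5 * time.Second) // crude pacing; real code waits for the reply
	}
}
```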

The project is both an impressive feat of reverse engineering an inexpensive drone and a fun example of programming in the Go language. Because drones are fun and exciting, they have become a popular platform on which to hack, from increasing their range to developing AI.

3D Drone Video

If you enjoy flying quadcopters, it is a good bet that you'll have a drone with a camera. It used to be enough to record a video for later viewing, but these days you really want to see a live stream. The really cool setups have goggles so you can feel like you are actually in the cockpit. [Andi2345] decided to go one step further and build a drone that streams 3D video. You can see a video of the system below.

Outdoors, there's probably not a lot of advantage to having a 3D view, but it ought to be great for a small indoor drone. The problem is, of course, that a small drone doesn't have a lot of capacity for two cameras. The final product uses two cameras kept in sync with a sync separator IC and a microcontroller, while an analog switch interleaves the frames.

On the viewing side, a USB frame grabber and a Raspberry Pi split the images apart again. At first, the system used an LCD screen paired with a Google Cardboard-style goggle, but eventually this became a custom Android application.
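To get a feel for the viewing side, here's a minimal Go sketch of the de-interleaving step: alternating frames from the grabber are routed to left-eye and right-eye streams. The channel plumbing and the left-first ordering are our assumptions; the real build syncs on the analog switch's toggle rather than simple frame parity.

```go
package main

import "fmt"

// Frame is a stand-in for one grabbed video frame.
type Frame struct{ seq int }

// deinterleave routes alternating frames from the grabber to the
// left-eye and right-eye streams, mirroring what the Pi does here.
func deinterleave(in <-chan Frame, left, right chan<- Frame) {
	toggle := false
	for f := range in {
		if toggle {
			right <- f
		} else {
			left <- f
		}
		toggle = !toggle
	}
	close(left)
	close(right)
}

func main() {
	in := make(chan Frame)
	left := make(chan Frame, 4)
	right := make(chan Frame, 4)
	go deinterleave(in, left, right)

	for i := 0; i < 6; i++ {
		in <- Frame{seq: i}
	}
	close(in)

	for f := range left {
		fmt.Println("left eye got frame", f.seq)
	}
	for f := range right {
		fmt.Println("right eye got frame", f.seq)
	}
}
```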

Continue reading “3D Drone Video”

Recharging Drones On The Go With A Supercharger

If TechCrunch is to be believed, our skies will soon be filled with delivery robots, ferrying tacos and Chinese food and Amazon purchases from neighborhood-area dispatch stations to your front door. All of this is predicated on the ability of quadcopters to rapidly recharge their batteries, or at the very least swap out batteries automatically.

For their Hackaday Prize entry, [frasanz], [ferminduaso], and [david canas] are building the infrastructure that will make delivery drones possible. It’s a drone supercharger, or a robot that grabs a drone, swaps out the battery, and sends it off to deliver whatever is in its cargo compartment.

This build is a droneport of sorts, designed to have a drone land on it, have a few stepper motors and movable arms spring into action, and replace the battery with a quick-change mechanism. This can be significantly more difficult than it sounds — you need to grab the drone and replace the battery, something that’s easy for human eyes and hands, but much harder for a few sensors and aluminum extrusion.

To change batteries, the team simply lets the drone land anywhere on a platform that's a few feet square. Arms then push the drone to the center, and a second arm moves in to swap the battery. The team is using an interesting locking cam solution to clamp the battery to the drone; it's much easier for a machine to connect than the standard XT-60 connector found on race quads.
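As a rough illustration of the choreography involved, here's the swap sequence as a simple ordered list in Go. The actual controller's states aren't published, so every step name here is an assumption drawn from the build description.

```go
package main

import "fmt"

// One plausible ordering of the swap sequence; all step names are
// assumptions based on the build description, not the team's firmware.
var sequence = []string{
	"wait for the drone to land anywhere on the platform",
	"drive pusher arms to square the drone up at the center",
	"release the locking cam holding the depleted pack",
	"swap arm pulls the old pack and seats a charged one",
	"re-engage the locking cam on the new pack",
	"retract all arms; clear for takeoff",
}

func main() {
	for i, step := range sequence {
		fmt.Printf("step %d: %s\n", i+1, step)
	}
}
```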

Is this the project the world needs? Quite possibly so. Drones are going to be awesome once battery life improves. Until then, we’ll have to live with limited flight times and drone superchargers.

Continue reading “Recharging Drones On The Go With A Supercharger”

An Autonomous Drone For Working Rare Squares

Amateur radio is an extremely broad church when it comes to the numerous different activities that it covers. Most of the stories featuring radio amateurs that we cover here have involved home-made radios, but that represents a surprisingly small subset of licence holders.

One activity that captivates many operators is grid square collecting. The map is divided into grid squares; can you make contact with all of them? Land-based squares in Europe and North America are easy, those in more sparsely populated regions a little less so, and some squares out in the ocean are nigh-on impossible. In an attempt to solve this problem, the Jupiter Research Foundation Amateur Radio Club have put an HF transceiver and associated electronics in a WaveGlider autonomous seagoing vehicle. The idea is that it will traverse the ocean, and you can work it, thus getting the contact you need to add those rarest of grid squares to your list.
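For the curious, the "grid squares" here are Maidenhead locators: fields of 20 degrees of longitude by 10 of latitude, squares of 2 by 1 degrees, and subsquares of 5 by 2.5 minutes. Here's a minimal Go sketch of the standard encoding; the example coordinates are illustrative, not the WaveGlider's actual position.

```go
package main

import (
	"fmt"
	"math"
)

// maidenhead returns the six-character Maidenhead locator for a
// latitude/longitude in degrees (north/east positive), assuming
// inputs strictly inside the valid range.
func maidenhead(lat, lon float64) string {
	// Shift so both axes start at zero: lon in [0,360), lat in [0,180).
	lon += 180
	lat += 90
	return fmt.Sprintf("%c%c%d%d%c%c",
		'A'+byte(int(lon/20)),              // field: 20 degrees of longitude
		'A'+byte(int(lat/10)),              // field: 10 degrees of latitude
		int(math.Mod(lon, 20)/2),           // square: 2 degrees
		int(math.Mod(lat, 10)),             // square: 1 degree
		'a'+byte(int(math.Mod(lon, 2)*12)), // subsquare: 5 minutes
		'a'+byte(int(math.Mod(lat, 1)*24)), // subsquare: 2.5 minutes
	)
}

func main() {
	// Illustrative mid-Pacific coordinates, near Hawaii.
	fmt.Println(maidenhead(21.3, -157.9)) // prints BL11bh
}
```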

The transceiver in question is a commercial portable one, an Elecraft KX3, and the brain of the payload is a Raspberry Pi. It operates in the FT8 mode and will respond to a call on 14074 kHz in an automated fashion (or it would, were its status page not telling us that it is offline due to power issues). It's currently somewhere in the Pacific Ocean, having been at sea for a couple of months now.

We spotted this through a spirited online discussion as to whether working an automated station is really a proper contact at all, with one amateur commenting that it might be a way for him to keep on going post mortem. Ethics of the contact aside, it's an extremely interesting project, and one we hope will eventually come back online.

Thanks Sotabeams, via [AE5X].

There’s Now A New MIDI Spec, And Drones

MIDI, the Musical Instrument Digital Interface, was released in 1983, the product of a truly unusual collaboration between musical instrument manufacturers. At no other time, before or since, has there been such cooperation between different manufacturers to define a standard. Since then, the MIDI spec has been expanded with SysEx messages, the ability to dump samples via MIDI, redefining the tuning of instruments via MIDI to support non-Western music, and somewhere deep in the spec, karaoke machines.

Now there's a new update to the MIDI spec (Gearnews link; here's the official midi.org announcement, but their website requires registration and is a hot garbage fire). At this year's NAMM, the place where MIDI was first demonstrated decades ago, the MIDI Manufacturers Association announced an update to MIDI that makes instruments and controllers smarter, and almost self-learning.

There are three new bits in this update to the MIDI spec. The first is Profile Configuration, a way to auto-configure complex controller mappings, described as 'MIDI Learn on steroids'. The second is Property Exchange, which allows MIDI devices to share device properties like 'product name, configuration settings, controller names, and patch data'. This is effectively setting metadata in controllers and devices. The third new bit is Protocol Negotiation, a way to automatically push future, next-gen protocols over a DIN-5 connector.

What does this all mean? Drones. No, I’m serious. The MIDI association is tinkering around with some Tiny Whoops and Phantoms, and posted a video of drones being controlled by a MIDI controller. Play a glissando up, and the drone goes up. You can check out a video of that below.
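The mapping demoed is easy to imagine in code. Here's a hedged Go sketch that turns a MIDI note-on message into a climb rate; the status-byte handling follows the MIDI 1.0 spec, but the centering on middle C and the scaling are purely our assumptions, since the demo's actual mapping wasn't published.

```go
package main

import "fmt"

// climbRate maps a MIDI note-on message to a vertical speed, the sort
// of mapping the glissando trick implies. The scale factor is assumed.
func climbRate(status, note, velocity byte) (float64, bool) {
	// Note-on only (status 0x9n); velocity 0 doubles as note-off.
	if status&0xF0 != 0x90 || velocity == 0 {
		return 0, false
	}
	// Center the 0..127 note range on middle C (60): notes above climb,
	// notes below descend, scaled to roughly +/-1 m/s at the extremes.
	return float64(int(note)-60) / 67.0, true
}

func main() {
	msgs := [][3]byte{{0x90, 72, 100}, {0x90, 48, 100}, {0x80, 72, 0}}
	for _, msg := range msgs {
		if rate, ok := climbRate(msg[0], msg[1], msg[2]); ok {
			fmt.Printf("note %d -> climb %.2f m/s\n", msg[1], rate)
		}
	}
}
```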

Continue reading “There’s Now A New MIDI Spec, And Drones”

MIT Breaks Autonomous Drone Speed Limits By Not Sweating Obstacles

How does one go about programming a drone to fly itself through the real world to a location without crashing into something? This is a tough problem, made even tougher if you're pushing speeds higher and higher. But any article with "MIT" in it implies the problems being engineered are not trivial.

The folks over at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have put their considerable skill set to work in tackling this problem. And what they’ve come up with is (not surprisingly) quite clever: they’re embracing uncertainty.

Why Is Autonomous Navigation So Hard?

Suppose we task ourselves with building a robot that can insert a key into the ignition switch of a motor vehicle and start the engine, and do so in roughly the same time frame that a human could: let's say 10 seconds. It may not be an easy robot to create, but we can all agree that it is very doable. With foreknowledge of the coordinates of the vehicle's ignition switch relative to our robotic arm, we can place the key in the switch with 100% accuracy. But what if we wanted our robot to succeed in any car with a standard ignition switch?

Now the location of the ignition switch will vary slightly (and not so slightly) from one model of car to the next. That means we're going to have to deal with this in real time and develop our coordinate system on the fly. This would not be too much of an issue if we could slow down a little, but keeping the process limited to 10 seconds is extremely difficult, perhaps impossible. At some point, the amount of environmental information and computation grows so large that the task becomes computationally unwieldy.

This problem is analogous to autonomous navigation. The environment is always changing, so we need sensors to constantly monitor the state of the drone and its immediate surroundings. As the obstacles multiply, another problem arises: there is just too much information to process, and the only solution is to slow the drone down. NanoMap is a new modeling method that breaks the artificial speed limit normally imposed by on-the-fly environment mapping.
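Here's a conceptual Go sketch of that idea as we understand it: keep a short history of raw depth frames, each tagged with the pose drift accrued since capture, and let any frame that can vouch for a motion answer the safety query, padded by its own uncertainty. Every name and number here is illustrative, not MIT's implementation.

```go
package main

import "fmt"

// depthFrame is one remembered sensor view. Instead of fusing frames
// into a global map, we keep them raw and track how uncertain the
// drone's pose has become since each was captured.
type depthFrame struct {
	minObstacleDist float64 // closest obstacle this frame saw (meters)
	poseUncertainty float64 // drift accrued since capture (meters)
}

// safeToFly checks a required clearance against the history, newest
// frame first, padding the margin by each frame's uncertainty rather
// than stopping to rebuild a consistent map.
func safeToFly(history []depthFrame, clearance float64) bool {
	for i := len(history) - 1; i >= 0; i-- {
		f := history[i]
		if f.minObstacleDist > clearance+f.poseUncertainty {
			return true
		}
	}
	return false // no frame can vouch for the motion; be conservative
}

func main() {
	history := []depthFrame{
		{minObstacleDist: 4.0, poseUncertainty: 0.8}, // oldest, most drift
		{minObstacleDist: 3.5, poseUncertainty: 0.2}, // newest
	}
	fmt.Println("1.0 m clearance ok:", safeToFly(history, 1.0)) // true
	fmt.Println("3.4 m clearance ok:", safeToFly(history, 3.4)) // false
}
```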

Continue reading “MIT Breaks Autonomous Drone Speed Limits By Not Sweating Obstacles”


Delivery Drones Can Learn From Driving And Cycling

Drones are increasingly being used for urban surveillance, delivery, and examining architectural structures. Doing this autonomously often involves "map-localize-plan" techniques, wherein the location is first determined on a map using GPS, and control commands are then produced based on that position.

A neural network that does steering and collision prediction can complement the map-localize-plan techniques. However, the neural network needs to be trained using video taken from actual flying drones, and generating that training video involves many hours of flying drones at street level, putting vehicles and pedestrians at risk. To train DroNet, described in their paper "DroNet: Learning to Fly by Driving", researchers from the University of Zurich and the Universidad Politécnica de Madrid have come up with safer sources for that video: footage recorded from driving cars and bicycles.


For the drone steering predictions, they used over 70,000 images and corresponding steering angles from the publicly available car driving data from Udacity's Open Source Self-Driving project. For the collision predictions, they mounted a GoPro camera on the handlebars of a bicycle and rode around a city. Video recording began when the bicycle was distant from an object and stopped when it was very close. In total, they collected 32,000 images.

To use the trained network, images from the drone's forward-facing camera were fed into the network, and the output was a steering angle and a probability of collision, which was turned into a velocity. The drone remained at a constant height above ground, though it did work well from 1.5 meters to 5 meters up. It successfully navigated road lanes and avoided moving pedestrians and bicycles. Intersections did confuse it, though, likely due to the open spaces messing with the collision predictions. But we think that shouldn't be a problem when paired with map-localize-plan techniques, as a direction through the intersection would be chosen for it using the location on the map.
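That output-to-command step is simple enough to sketch. Here's a hedged Go version of the modulation described above: the collision probability throttles forward speed, and both outputs are low-pass filtered for smooth flight. The smoothing factor and speed ceiling are assumed values, not the paper's exact constants.

```go
package main

import "fmt"

// controller holds the low-pass-filtered command state between frames.
type controller struct {
	v, steer float64
}

// update turns the network's two outputs into a flight command:
// forward velocity shrinks as collision probability rises (stopping
// entirely at p=1), and both commands are smoothed over time.
func (c *controller) update(steerPred, pCollision float64) (float64, float64) {
	const (
		vMax  = 1.0 // m/s, assumed speed ceiling
		alpha = 0.7 // smoothing factor in (0,1], assumed
	)
	c.v = (1-alpha)*c.v + alpha*(1-pCollision)*vMax
	c.steer = (1-alpha)*c.steer + alpha*steerPred
	return c.v, c.steer
}

func main() {
	var c controller
	for _, in := range [][2]float64{{0.1, 0.05}, {0.1, 0.6}, {-0.3, 0.9}} {
		v, s := c.update(in[0], in[1])
		fmt.Printf("steer=%.2f p=%.2f -> v=%.2f m/s, steer=%.2f\n", in[0], in[1], v, s)
	}
}
```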

As you can see in the video below, it not only does a decent job of flying down lanes, but it also flies well in a parking garage and a hallway, even though it wasn't trained for either of those environments.

Continue reading “Delivery Drones Can Learn From Driving And Cycling”