[Nick Rehm] explains the workings of a GPS-less, self-guided drone

Autonomous Drone Dodges Obstacles Without GPS

If you’re [Nick Rehm], you want a drone that can plan its own routes, even at low altitudes where unplanned obstacles block its way. (Video, embedded below.) And of course, you build it from scratch.

Why? Getting a drone that can fly a path, and even return home when the battery is low, the signal is lost, or on command, is simple enough. Just go to your favorite retailer, search “GPS drone”, and you can get one for a shockingly small dollar amount. This is possible because GPS receivers have become cheap, small, light, and power efficient. But while all of these inexpensive drones can fly a predetermined path, they usually do so by flying over any obstacles rather than around them.

[Nick Rehm] has envisioned a quadcopter that can do all of the things a GPS-enabled drone can do, without the use of a GPS receiver. [Nick] makes this possible by using route-planning algorithms similar to those used by Google Maps, fed by a typical IMU, a camera for computer vision, LIDAR for altitude, and an Intel RealSense camera for tracking position and movement. A Raspberry Pi 4 running Robot Operating System (ROS) runs the autonomous show, while a Teensy takes care of flight control duties.
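For a sense of how ROS ties the sensors to the flight controller, here’s a minimal, hypothetical node that listens for the RealSense’s odometry and forwards position estimates to the Teensy over serial. The topic name, serial port, and message format are illustrative assumptions on our part, not [Nick]’s actual implementation:

```python
#!/usr/bin/env python
# Hypothetical sketch: bridge RealSense odometry to a flight controller.
# Topic name, serial port, and wire format are assumptions, not [Nick]'s code.
import rospy
import serial
from nav_msgs.msg import Odometry

fc = serial.Serial('/dev/ttyACM0', 115200)  # assumed Teensy serial link

def odom_cb(msg):
    p = msg.pose.pose.position
    # Forward the current position estimate; the Teensy handles the
    # inner-loop stabilization and motor mixing itself.
    fc.write(('%0.2f,%0.2f,%0.2f\n' % (p.x, p.y, p.z)).encode())

rospy.init_node('position_bridge')
rospy.Subscriber('/camera/odom/sample', Odometry, odom_cb)  # RealSense odometry
rospy.spin()
```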

What we really enjoy about [Nick]’s video is his clear presentation of complex technologies, and a great sense of humor about a project that has consumed untold amounts of time, patience, and duct tape.

We can’t help but wonder if DARPA will allow [Nick] to fly his drone in a future Subterranean Challenge, like the one hosted in an unfinished nuclear power plant in 2020.

Continue reading “Autonomous Drone Dodges Obstacles Without GPS”

Computer Vision Lets You Skip Songs With A Glance

Have you ever wished you could control your home automation devices with nothing more than a withering stare? Well then you’re in luck, as [Norbert Zare] has come up with a clever way of controlling an MP3 player with only your face. Though as you might imagine, the technique could be applied to a whole range of home automation tasks with some minor tweaks.

At the core of this project is the Raspberry Pi, specifically the 3 B+ model, though with the computational demands of computer vision, you might want to bump it up to the latest-and-greatest Pi 4. From there you need to load up OpenCV and a model trained for face detection, which, as luck would have it, tends to be a fairly common application for this technology.

With a relatively simple Python script, [Norbert] is able to determine when OpenCV detects he’s looking directly into the camera and fire off one of the Pi’s GPIO pins that’s been connected to the “Skip” button on a physical MP3 player. That’s right, you read that correctly. He’s using a dedicated MP3 player in the year 2021.

In all seriousness, we’re not really sure why [Norbert] went this route instead of simply playing the music on the Pi and controlling it through software, but it does serve as a good example of how you can interface with physical devices if need be. In any event, using the Python script he’s provided, you could easily modify the setup to control other tasks, virtual or otherwise.
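If you want to experiment before committing to hardware, the core logic only takes a few lines. Here’s a minimal sketch of the idea using OpenCV’s stock frontal-face Haar cascade and a GPIO pin wired across the player’s “Skip” button; the pin number and timing are our own placeholders, not values from [Norbert]’s script:

```python
# Sketch: pulse a GPIO pin (wired to the player's "Skip" button) whenever
# OpenCV's stock Haar cascade sees a frontal face. Pin and delays are
# placeholder assumptions, not [Norbert]'s exact values.
import time
import cv2
import RPi.GPIO as GPIO

SKIP_PIN = 17  # hypothetical BCM pin wired across the Skip button
GPIO.setmode(GPIO.BCM)
GPIO.setup(SKIP_PIN, GPIO.OUT, initial=GPIO.LOW)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The frontal cascade only fires when a face looks at the camera,
    # which is exactly the "glance" we want to detect.
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:
        GPIO.output(SKIP_PIN, GPIO.HIGH)  # "press" the Skip button
        time.sleep(0.2)
        GPIO.output(SKIP_PIN, GPIO.LOW)
        time.sleep(2)  # debounce so one glance equals one skip
```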

While face recognition can be a scary thing out in the wild, we do think it has some interesting applications within the home, so long as the user is the one who is in control of where their data ends up.

Continue reading “Computer Vision Lets You Skip Songs With A Glance”

Skeleton Watches You Intensely Because It’s Halloween, Okay

If you’ve ever seen a painting in which the eyes seem to follow you around the room, you might have found it a bit unsettling. [CuriousInventor] has taken that concept a step further with a skeleton that literally holds its gaze on anyone in its field of view.

The heart of the system is a Raspberry Pi Zero fitted with a Pi Camera. Running OpenCV, the Pi tracks humans and turns the skeleton’s head to face any that are detected, via a servo in the skeleton’s neck. A servo bonnet is used to drive the servo without unnecessarily straining the Raspberry Pi.
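The control loop boils down to “find a face, compute an angle, command the servo”. A rough sketch of how that might look with OpenCV and Adafruit’s ServoKit library (the servo channel, camera index, and angle mapping are assumptions on our part, not [CuriousInventor]’s actual code):

```python
# Illustrative sketch: detect a face with OpenCV and steer a neck servo on a
# PCA9685-based servo bonnet toward it. Channel and mapping are assumptions.
import cv2
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)
kit.servo[0].angle = 90  # start with the head centered
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        # Map the face's horizontal position (0..width) onto 0..180 degrees;
        # flip the mapping if the head turns the wrong way for your camera.
        kit.servo[0].angle = 180 * (x + w / 2) / frame.shape[1]
```

Since the bonnet’s PWM chip generates the servo signal itself, the Pi only has to send I2C commands, which is what keeps the timing load off the Pi.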

The skeleton itself doesn’t look modified in any way, though most of the electronics are mounted inside a pretty obvious plastic box. We’d love to see a version 2 with all the hardware housed neatly inside the skull.

It’s a fun hack that makes for an enjoyable Halloween decoration. OpenCV can do other useful things too, like spotting weeds. Video after the break.

Continue reading “Skeleton Watches You Intensely Because It’s Halloween, Okay”

RC car without a top, showing electronics inside.

Fast Indoor Robot Watches Ceiling Lights, Instead Of The Road

[Andy]’s robot is an autonomous RC car, and he shares the localization algorithm he developed to help the car keep track of itself while it zips crazily around an indoor racetrack. Since a robot like this is perfectly capable of driving faster than it can sense, his localization method is the secret to pouring on additional speed without worrying about the car losing itself.

The regular pattern of ceiling lights makes a good foundation for the system to localize itself.

To pull this off, [Andy] uses a camera with a fisheye lens aimed up towards the ceiling, and the video is processed on a Raspberry Pi 3. His implementation is slick enough that it only takes about 1 millisecond to do a localization update, netting a precision on the order of a few centimeters. It’s sort of like a fast indoor GPS, using math to infer position based on the movement of ceiling lights.
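[Andy]’s actual math is more sophisticated (see his write-up for the details), but the first step of any approach like this is pulling the lights out of the upward-facing image as trackable points. A simplified illustration of that stage, not [Andy]’s code:

```python
# Not [Andy]'s algorithm -- a simplified sketch of the first stage: extract
# bright ceiling lights from an upward-facing frame as blob centroids.
import cv2

def light_centroids(frame, thresh=240):
    """Return (x, y) centroids of saturated blobs (the ceiling lights)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (the background) and ignore tiny specks of glare.
    return [tuple(c) for c, s in zip(centroids[1:], stats[1:])
            if s[cv2.CC_STAT_AREA] > 20]
```

From there, matching how those centroids move frame-to-frame against the mapped light positions is what turns them into a position fix.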

To be useful for racing, this localization method needs to be combined with a map of the racetrack itself, which [Andy] cleverly creates by manually driving the car around the track while recording localization data. Once that is in place, the car has all it needs to autonomously zip around.

Interested in the nitty-gritty details? You’re in luck, because all of the math behind [Andy]’s algorithm is explained on the project page linked above, and the GitHub repository for [Andy]’s autonomous car has all the implementation details.

The system is location-dependent, but it works so well that [Andy] considers track localization a solved problem. Watch the system in action in the two videos embedded below.

Continue reading “Fast Indoor Robot Watches Ceiling Lights, Instead Of The Road”

OAK-D Depth Sensing AI Camera Gets Smaller And Lighter

The OAK-D is an open-source, full-color depth sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and the ability to run highly complex image processing tasks entirely on-board, freeing the host from the overhead involved.

Animated face with small blue dots as 3D feature markers.
An example of real-time feature tracking, now in 3D thanks to integrated depth sensing.

The OAK-D Lite camera actually combines several elements in one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and onboard AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities together into a unified whole.

The goal is to give embedded systems access to human-like visual perception in real-time, which at its core means detecting things, and identifying where they are in physical space. It does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking in real-time.

So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
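To give a flavor of what “minimal coding” means here, this is roughly what a bare-bones DepthAI pipeline looks like, streaming the color camera’s preview to the host; it’s a generic sketch based on the public depthai Python API, not code from the campaign:

```python
# Minimal DepthAI sketch: build a pipeline that streams the color camera's
# preview to the host. On-device neural nets and stereo depth attach to the
# same pipeline as additional nodes.
import cv2
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName('preview')
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue('preview')
    while True:
        frame = q.get().getCvFrame()  # processing happened on-camera
        cv2.imshow('OAK preview', frame)
        if cv2.waitKey(1) == ord('q'):
            break
```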

There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.

Earlier this year we saw the OAK-D used in a system to visually identify weeds and estimate biomass in agriculture, and it’s exciting to see a new model being released. If you’re interested, the OAK-D Lite is available at a considerable discount during the Kickstarter campaign.

SLA printer rigged for time lapse

Silky Smooth Resin Printer Timelapses Thanks To Machine Vision

The fascination of watching a 3D printer go through its paces does tend to wear off after you’ve spent a few hours doing it, which is where those cool time-lapse videos come in handy. Trouble is, they tend to look choppy and unpleasant unless the exposures are synchronized to the motion of the gantry. That’s easy enough to do on FDM printers, but resin printers are another thing altogether.

Or are they? [Alex] found a way to make gorgeous time-lapse videos of resin printers that have to be seen to be believed. The advantage of his method is that it’ll work with any camera and requires no hardware other than a little LED throwie attached to the build platform of the printer. The LED acts as a fiducial that OpenCV can easily find in each frame, one that indicates the Z-axis position of the stage when the photo was taken. A Python program then sorts the frames, so it looks like the resin print is being pulled out of the vat in one smooth pull.
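The sorting trick is simple enough to sketch in a few lines. Something like the following captures the idea, though it’s our own illustration rather than [Alex]’s exact code; it assumes the LED is the brightest thing in every frame:

```python
# Sketch of the idea (not [Alex]'s exact code): locate the LED throwie as
# the brightest spot in each frame, then order the frames by its vertical
# position so the print appears to rise out of the vat in one smooth pull.
import glob
import os
import cv2

def led_height(path):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)  # suppress hot pixels
    _, _, _, max_loc = cv2.minMaxLoc(gray)      # brightest point = the LED
    return max_loc[1]  # image y coordinate tracks the platform's Z height

os.makedirs('sorted', exist_ok=True)
# Largest y first (platform lowest), so the print rises over the sequence.
frames = sorted(glob.glob('frames/*.jpg'), key=led_height, reverse=True)
for i, path in enumerate(frames):
    cv2.imwrite('sorted/%05d.jpg' % i, cv2.imread(path))
```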

To smooth things out further, [Alex] used real-time intermediate flow estimation, or RIFE, to interpolate frames, filling in the gaps where the build platform appears to jump. The details of that technique alone are worth the price of admission, and the results are spectacular. [Alex] kindly provides his code if you want to give this a whack; it’s almost worth buying a resin printer just to try.

Is there a resin printer in your future? If so, you might want to look over [Donald Papp]’s guide to the pros and cons of SLA compared to FDM printers.

Continue reading “Silky Smooth Resin Printer Timelapses Thanks To Machine Vision”

Lasers used to detect handprint.

DIY Laser Speckle Imaging Uncovers Hidden Details

It sure sounds like “laser speckle imaging” is the sort of thing you’d need grant money to experiment with, but as [anfractuosity] recently demonstrated, you can get some very impressive results with a relatively simple hardware setup and some common open source software packages. In fact, you might already have all the components required to pull this off in your own workshop right now and just not know it.

Anyone who’s ever played with a laser pointer is familiar with the sparkle effect observed when the beam shines on certain objects. That’s laser speckle, and it’s created by the beam reflecting off of microscopic variations in the surface texture and producing optical interference. While this phenomenon largely prevents laser beams from being effective direct lighting sources, it can be used as a way to measure extremely minute perturbations in what would appear to be an otherwise flat surface.

In this demonstration, [anfractuosity] has combined a simple red laser pointer with a microscope’s 25X objective lens to produce a wider and less intense beam. When this diffused beam is cast onto a wall, the speckle pattern generated by the surface texture can plainly be seen. What’s not obvious to the naked eye is that touching the wall with your hand actually produces a change in the speckle pattern. But if you take high-resolution before and after shots, the images can be run through OpenCV to highlight the differences and reveal a ghostly hand-print.
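The comparison step itself doesn’t need anything exotic. Assuming two aligned exposures, a sketch of the idea in OpenCV might look like this (filenames and filter sizes are placeholders of ours):

```python
# Sketch of the before/after comparison: subtract the two speckle images and
# amplify the residual so regions where the pattern changed (the hand print)
# stand out. Filenames and kernel sizes are placeholder assumptions.
import cv2

before = cv2.imread('speckle_before.png', cv2.IMREAD_GRAYSCALE)
after = cv2.imread('speckle_after.png', cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(before, after)           # unchanged speckle cancels out
diff = cv2.GaussianBlur(diff, (15, 15), 0)  # merge speckle-scale detail
norm = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX)  # stretch contrast
cv2.imwrite('handprint.png', norm)
```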

Continue reading “DIY Laser Speckle Imaging Uncovers Hidden Details”