‘Radar’ Glasses Grant Vision-free Distance Sensing

[tpsully]’s Radar Glasses are designed as a way of sensing the world without the benefits of normal vision. They consist of a distance sensor on the front and a vibration motor mounted to the bridge for haptic feedback. The little motor vibrates in proportion to the sensor’s readings, providing hands-free and intuitive feedback to the wearer. Inspired in part by his own experiences with temporary blindness, [tpsully] prototyped the glasses from an accessibility perspective.

The sensor is a VL53L1X time-of-flight unit, a compact LiDAR that measures distance using pulsed laser light. The glasses do not actually use RADAR (which is radio-based), but the operation is quite similar in spirit.

The VL53L1X has a maximum range of about 4 meters (roughly 13 feet) over a relatively narrow field of view. A user therefore scans their surroundings by sweeping their head across an area of interest, feeling the vibration intensity change in response, which lets them build up a sort of mental depth map of the immediate area. This physical scanning resembles a RADAR antenna sweep, and serves essentially the same purpose.
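The build details are [tpsully]'s, but the core idea is simple enough to sketch in a few lines of Arduino code. Something like the following (the pin assignments, the Pololu VL53L1X library, and the linear distance-to-intensity mapping are all our assumptions, not necessarily what the actual glasses use) reads the ToF sensor and drives the vibration motor harder as obstacles get closer:

```cpp
// Hypothetical sketch: VL53L1X distance -> vibration intensity.
// Pin choices and the mapping are assumptions, not from [tpsully]'s build.
#include <Wire.h>
#include <VL53L1X.h>   // Pololu VL53L1X library

VL53L1X sensor;
const int MOTOR_PIN = 9;              // PWM pin driving the vibration motor (via a transistor)
const uint16_t MAX_RANGE_MM = 4000;   // sensor tops out around 4 m

void setup() {
  Wire.begin();
  pinMode(MOTOR_PIN, OUTPUT);
  sensor.setTimeout(500);
  sensor.init();
  sensor.setDistanceMode(VL53L1X::Long);
  sensor.startContinuous(50);         // new reading every 50 ms
}

void loop() {
  uint16_t mm = sensor.read();        // blocking read of the latest range in mm
  // Closer objects -> stronger vibration; out of range or timeout -> motor off.
  uint8_t strength = 0;
  if (mm > 0 && mm < MAX_RANGE_MM) {
    strength = map(mm, 0, MAX_RANGE_MM, 255, 0);
  }
  analogWrite(MOTOR_PIN, strength);
}
```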

There are some other projects with similar ideas, such as the wrist-mounted digital white cane and the hip-mounted Walk-Bot which integrates multiple angles of sensing, but something about the glasses form factor seems attractively intuitive.

Thanks to [Daniel] for the tip, and remember that if you have something you’d like to let us know about, the tips line is where you can do that.

FedEx Robot Solves Complex Packing Problems

Despite the fact that it constantly seems like we're in the midst of a robotics- and artificial intelligence-driven revolution, there are a number of tasks that continue to elude even the best machine learning algorithms and robots. The clothing industry is an excellent example, where flimsy materials can easily trip up robotic manipulators. But one such task that might soon be solved is packing cargo into trucks, which FedEx is attempting with one of its new robots.

Part of the reason this task is so difficult is that packing problems, like "traveling salesman" problems, are surprisingly complex. The packages are not presented to the robot in any particular order, and need to be placed efficiently according to weight and size. This robot, called DexR, uses artificial intelligence paired with an array of sensors to determine each package's dimensions, which allows it to plan stacking and ordering configurations and ensure a secure fit against the packages already placed. The robot must also be able to adapt quickly if any packages shift during stacking, re-ordering or re-stacking them as needed.
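FedEx hasn't published the details of DexR's planner, but even the heavily simplified one-dimensional version of the problem gives a feel for the territory. The toy first-fit-decreasing heuristic below (the package sizes and the single "truck" capacity are made up for illustration) already has to cheat by sorting everything up front, a luxury DexR doesn't get when packages arrive one at a time on a conveyor:

```cpp
// Toy illustration only: first-fit-decreasing bin packing in one dimension.
// DexR's actual planner (3D, weight- and stability-aware) is far more involved.
#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main() {
  std::vector<double> packages = {0.8, 0.5, 0.4, 0.7, 0.1, 0.3, 0.6};  // sizes as fractions of a bin
  const double capacity = 1.0;

  // Heuristic: place big items first, each into the first bin with room left.
  std::sort(packages.begin(), packages.end(), std::greater<double>());
  std::vector<double> binFree;  // remaining space per bin
  for (double p : packages) {
    bool placed = false;
    for (double &free : binFree) {
      if (free >= p) { free -= p; placed = true; break; }
    }
    if (!placed) binFree.push_back(capacity - p);  // open a new bin
  }
  std::cout << "Bins used: " << binFree.size() << "\n";
}
```

Going from this to three dimensions, arbitrary arrival order, weight limits, and stability constraints is where the real difficulty, and the AI, comes in.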

As robotics platforms and artificial intelligence continue to improve, it's likely we'll see a flurry of complex problems like these solved by machines instead of by humans. Real-world tasks are often more complex than they seem; as anyone with a printer and a PC LOAD LETTER error can attest, even handling single sheets of paper can be a difficult task for a robot. Interfacing with these types of robots can be a walk in the park, though, provided you read the documentation first.

Machine Learning Robot Runs Arduino Uno

When we think about machine learning, our minds often jump to datacenters full of sweating, overheating GPUs. However, much lighter-weight hardware can also be put to these ends, as demonstrated by [Nikodem Bartnik] and his latest robot.

The robot is charged with autonomously navigating a simple racetrack delineated by cardboard barriers. The robot is based on a two-wheeled design with tank-style steering. Controlled by an Arduino Uno, the robot uses a Slamtec RPLIDAR sensor to help map out its surroundings. The microcontroller is also armed with a Bluetooth link and an SD card for storage.

The robot was first driven around the racetrack multiple times under manual control, all the while collecting LIDAR data. This data was combined with the control inputs to create a data set for training a machine learning model. Feature selection techniques were then used to pare the collected data points down to those most relevant to the driving task. [Nikodem] explains how the model was created and then refined until it could drive the robot by itself on a variety of racetrack designs.
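[Nikodem]'s video covers the actual pipeline; purely as a flavor of what "feature selection plus a small model" can look like on an Uno, a sketch along these lines would boil each scan down to a few sector distances and score a handful of steering commands with weights learned offline (the features, weights, and dummy scan below are placeholders, not [Nikodem]'s trained values):

```cpp
// Illustrative only: the real feature set, weights, and control logic come from
// [Nikodem]'s training pipeline; the values below are placeholders.
#include <Arduino.h>

// Features: minimum LIDAR distance (in metres) in three forward-facing sectors.
struct ScanFeatures { float left; float front; float right; };

// A tiny linear "model": score each steering command and pick the best.
// The weights would normally come from offline training on the logged drives.
const float W_LEFT[3]  = {  1.0f, -0.5f, -0.5f };
const float W_FWD[3]   = {  0.0f,  1.0f,  0.0f };
const float W_RIGHT[3] = { -0.5f, -0.5f,  1.0f };

int chooseCommand(const ScanFeatures &f) {   // 0 = left, 1 = forward, 2 = right
  float x[3] = { f.left, f.front, f.right };
  float scores[3] = { 0, 0, 0 };
  for (int i = 0; i < 3; i++) {
    scores[0] += W_LEFT[i]  * x[i];
    scores[1] += W_FWD[i]   * x[i];
    scores[2] += W_RIGHT[i] * x[i];
  }
  int best = 0;
  for (int i = 1; i < 3; i++) if (scores[i] > scores[best]) best = i;
  return best;
}

void setup() { Serial.begin(115200); }

void loop() {
  // On the real robot these features would come from the RPLIDAR driver.
  ScanFeatures f = { 1.8f, 0.4f, 0.9f };  // dummy scan: obstacle ahead, more room to the left
  Serial.println(chooseCommand(f));       // prints 0 (steer left) for this dummy scan
  delay(100);
}
```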

It’s a great primer on machine learning techniques applied to a small embedded platform.


Exploring Tropical Rainforest Stratification Using Space-Based LiDAR

GEDI is deployed on the Japanese Experiment Module – Exposed Facility (JEM-EF). The highlighted box shows the location of GEDI on the JEM-EF.

Even though it may seem like we have already explored every single square centimeter of the Earth, there are still many areas that are practically unmapped. These include the bottom of the Earth's oceans, but also the canopy of the planet's rainforests. Rather than having herds of explorers clamber around in the upper reaches of these forests to take measurements, researchers decided to use LiDAR to create a 3D map of these forests (press release).

The resulting GEDI (Global Ecosystem Dynamics Investigation) NASA project includes a triple-laser-based LiDAR system that was launched to the International Space Station aboard CRS-16 in late 2018, where it fulfilled its two-year primary mission beginning in March of 2019. The parameters recorded this way include surface topography, canopy height metrics, canopy cover metrics, and vertical structure metrics.

Originally, the LiDAR scanner was supposed to be decommissioned by stuffing it into the trunk of a Dragon capsule before its deorbit. But after NASA found a way to scoot the scanner over to make way for a DOD payload, the project looks set to resume scanning the Earth's forests next year, and the instrument can safely remain aboard until the ISS is deorbited in 2031. Since the ISS continuously orbits the Earth, GEDI enables daily monitoring of the planet's rainforests in particular, giving us invaluable information about the ecosystems they harbor and whether or not they are thriving.

Hopefully the orbital LiDAR scanner will be back in action after its hibernation period, as the instrument is subjected to quite severe temperature swings in its storage location. Regardless, putting LiDAR scanners in orbit has to be one of those amazing ideas that help us keep track of seemingly simple things like the height of trees and the density of foliage.

No Moving Parts LiDAR

Self-driving cars often use LiDAR — think of it as radar using light beams. One limitation of existing systems is that they need some way to scan the beam across the scene, and that means moving parts. Researchers at the University of Washington have created a laser on a chip that uses acoustic waves to bend the laser, avoiding physically moving parts. The paper is behind a paywall, but the University has a summary poster, and you can also find an overview over on [Geekwire].

The resulting IC uses surface acoustic waves and can image objects more than 100 feet away. We would imagine this could be helpful for other applications like 3D scanning, too. The system also weighs less than a conventional setup, which would be valuable in drones and similar applications.


Citizen Science Finds Prehistoric Burial Mounds

What do you do when you have a lot of LiDAR data and not enough budget to slog through it? That’s the problem the Heritage Quest project was faced with — they had 600,000 LiDAR maps in the Netherlands and wanted to find burial mounds using the data. By harnessing 6,500 citizen scientists, they were able to analyze the data and locate over 1,000 prehistoric burial mounds, including many that were previously unknown, along with cart tracks, kilns, and other items of archaeological interest.

The project used Zooniverse, a site we’ve mentioned before, to help train volunteers to analyze data, with at least 15 volunteers examining each map. The sites date to between 2800 and 500 BC. Archaeologists spent the summer of 2021 verifying many of these digital finds: they took samples from 300 sites and determined that 80 of them were previously unknown. They estimate that the total number of sites found by the volunteers could be as high as 1,250.

This is a great example of how modern technology is changing many fields and the power of citizen science, both topics we always want to hear more about. We’ve seen NASA tapping citizen scientists, and we’ve even seen high school students building research buoys. So if you’ve ever wanted to participate in advancing the world’s scientific knowledge, there’s never been a better time to do it.

Bicopter Phone Case Might Be Hard To Pocket, But Delivers Autonomous Selfies

Remember that “PhoneDrone” scam from a while back? With two tiny motors and props that could barely lift a microdrone, it was pretty clearly a fake, but that doesn’t mean it wasn’t a pretty good idea. Good enough, in fact, that [Nick Rehm] came up with his own version of the flying phone case, which actually works pretty well.

In the debunking collaboration between [Mark Rober], [Peter Sripol], and the indispensable [Captain Disillusion], you’ll no doubt recall that after showing the original video was just a CGI scam, they went on to build exactly what the video purported to show. But alas, the flying phone they came up with was manually controlled. Cool as that was, [Nick Rehm], creator of dRehmFlight, can’t see such a thing without wanting to make it autonomous.

To that end, [Nick] came up with the DroneCase — a bicopter design that allows the phone to hang vertically. The two rotors are on a common axis and can swivel back and forth under control of two separate micro-servos; the combination of tilt rotors and differential thrust gives the craft full aerodynamic control. A modified version of dRehmFlight runs on a Teensy, while an IMU, a lidar module, and a PX4 optical flow sensor round out the sensor suite. The lidar and flow sensor both point down; the lidar is used to sense altitude, while the flow sensor, which is basically just the guts from an optical mouse, watches for translation in the X- and Y-axes.
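dRehmFlight handles the real control allocation, but the basic mapping from the three rotation axes onto two throttles and two tilt servos is easy to sketch. The simplified mixer below uses our own axis conventions, gains, and pin choices rather than anything from [Nick]'s firmware: roll comes from differential thrust, pitch from tilting both rotors together, and yaw from tilting them in opposite directions.

```cpp
// Simplified bicopter mixer sketch. Conventions, gains, and pins are assumptions,
// not taken from dRehmFlight or [Nick Rehm]'s DroneCase firmware.
#include <Arduino.h>
#include <Servo.h>

Servo escLeft, escRight;     // brushless ESCs (throttle via servo-style PWM)
Servo tiltLeft, tiltRight;   // micro-servos that swivel each rotor fore/aft

// Inputs are normalized: throttle 0..1, roll/pitch/yaw demands -1..1.
void mix(float throttle, float roll, float pitch, float yaw) {
  // Roll: differential thrust between the two rotors.
  float thrustL = constrain(throttle + roll * 0.2f, 0.0f, 1.0f);
  float thrustR = constrain(throttle - roll * 0.2f, 0.0f, 1.0f);

  // Pitch: tilt both rotors the same way; yaw: tilt them in opposite directions.
  float tiltL = constrain(pitch * 30.0f + yaw * 20.0f, -45.0f, 45.0f);  // degrees
  float tiltR = constrain(pitch * 30.0f - yaw * 20.0f, -45.0f, 45.0f);

  escLeft.writeMicroseconds(1000 + (int)(thrustL * 1000));
  escRight.writeMicroseconds(1000 + (int)(thrustR * 1000));
  tiltLeft.write(90 + (int)tiltL);   // 90 degrees = rotor pointing straight up
  tiltRight.write(90 + (int)tiltR);
}

void setup() {
  escLeft.attach(2);  escRight.attach(3);
  tiltLeft.attach(4); tiltRight.attach(5);
}

void loop() {
  // In the real flight controller these demands come from the PID loops,
  // fed by the IMU, the lidar altitude, and the optical-flow position estimate.
  mix(0.5f, 0.0f, 0.0f, 0.0f);   // hover-ish: half throttle, no rotation demands
}
```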

After a substantial amount of tuning and tweaking, the DroneCase was ready for field tests. Check out the video below for the results. It’s actually quite stable, at least as long as the batteries last. It may not be as flexible as a legit drone, but then again it probably costs a lot less, and does the one thing it does quite well without any inputs from the user. Seems like a solid win to us.

Continue reading “Bicopter Phone Case Might Be Hard To Pocket, But Delivers Autonomous Selfies”