Single Rotor Drone Spins For 360 Lidar Scanning

Multiple motors or servos are the norm for drones to achieve controllable flight, but a team from MARS LAB HKU was able to build a 360° lidar scanning drone with full control using just a single motor and no additional actuators. Video after the break.

The key to controllable flight is the swashplateless propeller design that we’ve seen a few times, but it has always required a second propeller to counteract self-rotation. In this case, the team was able to make that self-rotation work for them, achieving 360° scanning with a single fixed LIDAR sensor. Self-rotation still needs to be slowed, which is done here with four stationary vanes. The single rotor also means better efficiency compared to a multi-rotor with a similar propeller disk area.

The LIDAR makes up a full 50% of the drone’s weight and provides a conical FOV out to a range of 450 m. All processing happens onboard the drone, with the point cloud data being processed by a LIDAR-inertial odometry framework. This allows the drone to track and plan its flight path while also building a 3D map of an unknown environment, which would make it extremely useful in indoor or underground environments where GPS and other positioning systems are unavailable.
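If you’re curious what the scan matching at the heart of such an odometry pipeline looks like, here’s a minimal Python sketch of point-to-point ICP (iterative closest point). The framework on the drone is far more sophisticated and fuses IMU data, so treat this as an illustration of the basic idea rather than the real thing.

```python
# Minimal point-to-point ICP between two lidar scans (Nx3 numpy arrays).
# Illustrative only -- real LIDAR-inertial odometry adds IMU fusion,
# motion compensation, and robust outlier handling.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Estimate the rigid transform aligning `source` onto `target`."""
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Match each source point to its nearest target point
        _, idx = cKDTree(target).query(source)
        matched = target[idx]

        # Kabsch/SVD solution for the best-fit rotation and translation
        src_c, tgt_c = source.mean(0), matched.mean(0)
        H = (source - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c

        source = source @ R.T + t  # apply and accumulate the motion
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Chaining the transforms between consecutive scans gives the vehicle’s trajectory, and transforming each scan by its accumulated pose is what builds up the 3D map.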

All the design files and code for the drone are up on GitHub, and most of the electronic components are off-the-shelf. This means you can build your own, and the expensive lidar sensor is not required to get it flying. This seems like a great platform for further experimentation, and getting usable video from a normal camera would be an interesting challenge. Continue reading “Single Rotor Drone Spins For 360 Lidar Scanning”

‘Radar’ Glasses Grant Vision-free Distance Sensing

[tpsully]’s Radar Glasses are designed as a way of sensing the world without the benefits of normal vision. They consist of a distance sensor on the front and a vibration motor mounted to the bridge for haptic feedback. The little motor vibrates in proportion to the sensor’s readings, providing hands-free and intuitive feedback to the wearer. Inspired in part by his own experiences with temporary blindness, [tpsully] prototyped the glasses from an accessibility perspective.

The sensor is a VL53L1X time-of-flight (ToF) sensor, which measures distances LiDAR-style with the help of pulsed laser light. The glasses do not actually use RADAR (which is radio-based), but the operation is quite similar in principle.

The VL53L1X has a maximum range of up to 4 meters (roughly 13 feet) in a relatively narrow field of view. A user therefore scans their surroundings by sweeping their head across a desired area and feeling the vibration intensity change in response, letting them build up a sort of mental depth map of the immediate area. This physical scanning resembles a RADAR antenna sweep, and serves essentially the same purpose.
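To give a feel for how simple the core loop can be, here’s a minimal CircuitPython sketch of the distance-to-vibration mapping, assuming Adafruit’s VL53L1X breakout and a vibration motor driven through a transistor on pin D5. [tpsully]’s actual firmware may well differ; the pin choice and the linear ramp are our own assumptions.

```python
# Hypothetical haptic-feedback loop: closer obstacle -> stronger buzz.
import time
import board
import pwmio
import adafruit_vl53l1x

tof = adafruit_vl53l1x.VL53L1X(board.I2C())
tof.start_ranging()

motor = pwmio.PWMOut(board.D5, frequency=1000)  # assumed motor pin
MAX_CM = 400  # the VL53L1X tops out around 4 m

while True:
    if tof.data_ready:
        d = tof.distance        # centimetres, None if nothing in range
        tof.clear_interrupt()
        if d is None:
            motor.duty_cycle = 0
        else:
            # Linear ramp: off at max range, full strength up close
            strength = 1.0 - min(d, MAX_CM) / MAX_CM
            motor.duty_cycle = int(strength * 65535)
    time.sleep(0.05)
```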

There are some other projects with similar ideas, such as the wrist-mounted digital white cane and the hip-mounted Walk-Bot which integrates multiple angles of sensing, but something about the glasses form factor seems attractively intuitive.

Thanks to [Daniel] for the tip, and remember that if you have something you’d like to let us know about, the tips line is where you can do that.

FedEx Robot Solves Complex Packing Problems

Despite the fact that it constantly seems like we’re in the midst of a robotics- and artificial-intelligence-driven revolution, there are a number of tasks that continue to elude even the best machine learning algorithms and robots. The clothing industry is an excellent example, where flimsy materials can easily trip up robotic manipulators. But one such task that seems like it might soon be solved is packing cargo into trucks, which is what FedEx is trying to do with one of their new robots.

Part of the reason this task is so difficult is that packing problems, like the famous “traveling salesman” problem, are surprisingly complex. The packages are not presented to the robot in any particular order, and need to be placed efficiently according to weight and size. This robot, called DexR, uses artificial intelligence paired with an array of sensors to get an idea of each package’s dimensions, which allows it to plan stacking and ordering configurations and ensure secure fits among all of the packages. The robot must also be capable of quickly adapting if any packages shift during stacking, re-ordering or re-stacking them as needed.
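To appreciate why this is hard, it helps to look at even the simplest one-dimensional cousin of the problem. Bin packing is NP-hard, so practical systems lean on heuristics; the classic first-fit decreasing approach below is a toy Python illustration, a world away from DexR’s three-dimensional, weight-aware planner.

```python
def first_fit_decreasing(items, capacity):
    """Classic 1-D bin-packing heuristic: sort items largest-first and
    drop each into the first bin that still has room. Uses at most
    roughly 11/9 of the optimal number of bins."""
    bins = []                      # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                      # no existing bin fits: open a new one
            bins.append([item])
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1, 7, 3], capacity=10))
# -> [[8, 2], [7, 3], [4, 4, 1, 1]]
```

DexR faces the much nastier online version, where items arrive in arbitrary order, occupy real 3D volumes, and can shift after placement.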

As robotics platforms and artificial intelligence continue to improve, it’s likely we’ll see a flurry of complex problems like these solved by machines instead of humans. Real-world tasks are often more complex than they seem; as anyone with a printer and a PC LOAD LETTER error can attest, even handling single sheets of paper can be a difficult task for a robot. Interfacing with these types of robots can be a walk in the park, though, provided you read the documentation first.

Machine Learning Robot Runs Arduino Uno

When we think about machine learning, our minds often jump to datacenters full of sweating, overheating GPUs. However, lighter-weight hardware can also be used to these ends, as demonstrated by [Nikodem Bartnik] and his latest robot.

The robot is charged with autonomously navigating a simple racetrack delineated by cardboard barriers. The robot is based on a two-wheeled design with tank-style steering. Controlled by an Arduino Uno, the robot uses a Slamtec RPLIDAR sensor to help map out its surroundings. The microcontroller is also armed with a Bluetooth link and an SD card for storage.

The robot was first driven around the racetrack multiple times under manual control, collecting LIDAR data all the while. This data was combined with the control inputs to create a data set for training a machine learning model, and feature selection techniques were used to whittle the collected data points down to those most relevant to the driving task. [Nikodem] explains how the model was created and then refined to drive the robot by itself around a variety of race track designs.
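As a rough idea of what that off-robot training step can look like, here’s a hedged scikit-learn sketch. It assumes a logged CSV where each row holds a set of lidar distances plus the steering command recorded during manual driving; [Nikodem]’s actual data layout, feature selector, and model may differ.

```python
# Hypothetical train-then-prune pipeline for the lidar racer.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

data = np.loadtxt("driving_log.csv", delimiter=",")  # assumed log file
X, y = data[:, :-1], data[:, -1]   # lidar distances -> steering class

# Feature selection: keep only the most informative beam angles so the
# final model stays small enough for a microcontroller-class target
selector = SelectKBest(f_classif, k=8).fit(X, y)
X_small = selector.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_small, y, test_size=0.2)
model = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
print("beam indices kept:", selector.get_support(indices=True))
```

A shallow decision tree like this compiles down to a handful of nested if statements, which is exactly the sort of model an Uno can evaluate in real time.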

It’s a great primer on machine learning techniques applied to a small embedded platform.

Continue reading “Machine Learning Robot Runs Arduino Uno”

Exploring Tropical Rainforest Stratification Using Space-Based LiDAR

GEDI is deployed on the Japanese Experiment Module – Exposed Facility (JEM-EF). The highlighted box shows the location of GEDI on the JEM-EF.

Even though it may seem like we have already explored every single square centimeter of the Earth, there are still many areas that are practically unmapped. These include not only the bottom of the Earth’s oceans, but also the canopies of the planet’s rainforests. Rather than having herds of explorers clamber around in the upper reaches of these forests to take measurements, researchers decided to use LiDAR to create a 3D map of these forests (press release).

The resulting NASA project, GEDI (Global Ecosystem Dynamics Investigation), features a triple-laser LiDAR system that was launched to the International Space Station on CRS-16 in late 2018 and fulfilled its two-year primary mission beginning in March of 2019. The parameters recorded this way include surface topography, canopy height metrics, canopy cover metrics, and vertical structure metrics.
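The canopy height numbers come from GEDI’s full-waveform returns: the instrument records how much laser energy bounces back from each height, and relative height (RH) metrics mark the heights below which given fractions of that energy sit. Here’s a toy numpy sketch with a simulated waveform; real GEDI processing adds noise filtering and careful ground finding.

```python
# Toy relative-height (RH) metrics from a simulated lidar waveform.
import numpy as np

heights = np.linspace(0, 40, 400)                    # metres above ground
canopy = np.exp(-0.5 * ((heights - 25) / 4) ** 2)    # canopy return
ground = 2.0 * np.exp(-0.5 * (heights / 1.0) ** 2)   # ground return
waveform = canopy + ground

def rh(percentile):
    """Height below which `percentile` % of the return energy lies."""
    cum = np.cumsum(waveform) / waveform.sum()
    return heights[np.searchsorted(cum, percentile / 100.0)]

for p in (25, 50, 75, 98):
    print(f"RH{p} = {rh(p):.1f} m")   # RH98 approximates the canopy top
```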

Originally, the LiDAR scanner was supposed to be decommissioned by stuffing it into the trunk of a Dragon craft before its deorbit, but after NASA found a way to scoot the scanner over to make way for a DOD payload, the project looks set to resume scanning the Earth’s forests next year, and the instrument can safely remain in place until the ISS is deorbited in 2031. Courtesy of the ISS’s continuous orbiting of the Earth, it will enable daily monitoring of the planet’s rainforests in particular, which gives us invaluable information about the ecosystems they harbor, as well as whether or not they’re thriving.

Hopefully the orbital LiDAR scanner will indeed be back in action after its hibernation period, since the instrument is subjected to quite severe temperature swings in its storage location and its survival is not a given. Regardless, putting LiDAR scanners in orbit has to be one of those amazing ideas that help us keep track of such simple things as the height of trees and the density of foliage.

No Moving Parts LiDAR

Self-driving cars often use LiDAR, which you can think of as radar using light beams. One limitation of existing systems is that they need some method of scanning the light source around, and that means moving parts. Researchers at the University of Washington have created a laser on a chip that uses acoustic waves to bend the laser beam, avoiding physically moving parts. The paper is behind a paywall, but the university has a summary poster, and you can also find an overview over on [Geekwire].

The resulting IC uses surface acoustic waves and can image objects more than 100 feet away. We imagine this could also be helpful for other applications like 3D scanning. The system weighs less than a conventional setup as well, which would make it valuable in drones and similar applications.
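The underlying trick is classic acousto-optics: a surface acoustic wave acts as a traveling diffraction grating whose period is set by the drive frequency, so changing the frequency steers the beam with no moving parts. The Python sketch below runs purely illustrative back-of-the-envelope numbers, not the values from the paper.

```python
# Illustrative acousto-optic steering: first-order grating diffraction
# gives sin(theta) = lambda / Lambda, with grating period Lambda = v / f.
import numpy as np

wavelength = 1550e-9   # assumed optical wavelength (m), telecom band
v_acoustic = 4000.0    # assumed surface-acoustic-wave speed (m/s)

for f in (0.5e9, 1.0e9, 1.5e9, 2.0e9):   # assumed drive frequencies (Hz)
    period = v_acoustic / f               # acoustic grating period (m)
    theta = np.degrees(np.arcsin(wavelength / period))
    print(f"{f/1e9:.1f} GHz drive -> {theta:5.1f} degree deflection")
```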

Continue reading “No Moving Parts LiDAR”

Citizen Science Finds Prehistoric Burial Mounds

What do you do when you have a lot of LiDAR data and not enough budget to slog through it? That’s the problem the Heritage Quest project was faced with — they had 600,000 LiDAR maps in the Netherlands and wanted to find burial mounds using the data. By harnessing 6,500 citizen scientists, they were able to analyze the data and locate over 1,000 prehistoric burial mounds, including many that were previously unknown, along with cart tracks, kilns, and other items of archaeological interest.

The project used Zooniverse, a site we’ve mentioned before, to help train volunteers to analyze data, with at least 15 volunteers examining each map. The sites date to between 2800 and 500 BC. Archaeologists spent the summer of 2021 verifying many of these digital finds: they took samples from 300 sites and determined that 80 of them were previously unknown. They estimate that the total number of sites found by the volunteers could be as high as 1,250.
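If you want a taste of what the volunteers were looking at, low mounds are usually hunted in relief visualizations of the lidar elevation model rather than in the raw points. Here’s a minimal numpy hillshade over a synthetic mound; the Heritage Quest project’s actual imagery pipeline may of course differ.

```python
# Minimal Lambertian hillshade of a digital elevation model (DEM).
import numpy as np

def hillshade(dem, azimuth=315.0, altitude=45.0, cellsize=1.0):
    """Shaded relief from a 2-D elevation array, sun from the NW."""
    az = np.radians(360.0 - azimuth + 90.0)   # compass -> math angle
    alt = np.radians(altitude)
    dy, dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0, 1)

# Synthetic 2 m high, ~10 m wide burial mound on a 100 x 100 m grid
y, x = np.mgrid[0:100, 0:100]
dem = 2.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / (2 * 5.0 ** 2))
print(hillshade(dem).round(2))   # the mound shows as a bright/dark pair
```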

This is a great example of how modern technology is changing many fields and the power of citizen science, both topics we always want to hear more about. We’ve seen NASA tapping citizen scientists, and we’ve even seen high school students building research buoys. So if you’ve ever wanted to participate in advancing the world’s scientific knowledge, there’s never been a better time to do it.