Playing Rock, Paper, Scissors With A Time Of Flight Sensor

You can do all kinds of wonderful things with cameras and image recognition. However, sometimes spatial data is useful, too. As [madmcu] demonstrates, you can use depth data from a time-of-flight sensor for gesture recognition, as seen in this rock-paper-scissors demo.

If you’re unfamiliar with time-of-flight sensors, they’re easy enough to understand. They measure distance by determining the time it takes photons to travel from one place to another. For example, by shooting out light from the sensor and measuring how long it takes to bounce back, the sensor can determine how far away an object is. Take an array of time-of-flight measurements, and you can get simple spatial data for further analysis.
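The arithmetic behind a single measurement is trivial. Here's a minimal example (ours, not from the project) that turns a measured round-trip time into a distance:

```cpp
#include <cstdio>

// Speed of light in meters per second.
constexpr double SPEED_OF_LIGHT = 299792458.0;

// The photon travels out to the target and back, so the one-way
// distance is half the round-trip time multiplied by c.
double distanceFromRoundTrip(double roundTripSeconds) {
    return SPEED_OF_LIGHT * roundTripSeconds / 2.0;
}

int main() {
    // Example: a 2 ns round trip corresponds to roughly 0.3 m.
    double seconds = 2e-9;
    std::printf("Distance: %.3f m\n", distanceFromRoundTrip(seconds));
    return 0;
}
```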

The build uses an Arduino Uno R4 Minima, paired with a demo board for the VL53L5CX time-of-flight sensor. The software is developed using NanoEdge AI Studio. In a basic sense, the system uses a machine learning model to classify data captured by the time-of-flight sensor into gestures matching rock, paper, or scissors—or nothing, if no hand is present. If you don’t find [madmcu]’s tutorial enough, you can take a look at the original version from STMicroelectronics, too.
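The broad strokes of that capture-and-classify loop are easy to picture. Here's a rough sketch assuming the SparkFun VL53L5CX Arduino library, with a stub standing in for the trained model; [madmcu]'s actual code may use ST's own driver and the classifier library that NanoEdge AI Studio generates:

```cpp
#include <Wire.h>
#include <SparkFun_VL53L5CX_Library.h>  // assumed library; the real build may use ST's driver

SparkFun_VL53L5CX tofSensor;
VL53L5CX_ResultsData frame;  // holds the 8x8 grid of distances

// Hypothetical stand-in for the trained classifier; in the real project this
// would come from the library emitted by NanoEdge AI Studio.
int classifyGesture(const int16_t *distances, int count) {
  (void)distances;
  (void)count;
  return 0;  // 0 = nothing, 1 = rock, 2 = paper, 3 = scissors
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  tofSensor.begin();
  tofSensor.setResolution(8 * 8);  // full 64-zone depth frame
  tofSensor.startRanging();
}

void loop() {
  if (tofSensor.isDataReady()) {
    tofSensor.getRangingData(&frame);                     // grab the latest 64 distances
    int gesture = classifyGesture(frame.distance_mm, 64); // classify the depth frame
    Serial.println(gesture);                              // report the predicted class
  }
}
```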

It takes some training, and it only works in the right lighting conditions, but this is a functional system that can recognize real hand signs and play the game. We’ve seen similar techniques help more advanced robots cheat at this game before, too! What a time to be alive.

Laser Theremin Turns Your Hand Swooshes Into Music

In a world where smartphones have commoditized precision MEMS sensors, the stage is set to reimagine clusters of these sensors as something totally different. That’s exactly what [chronopoulos] did, taking four proximity sensors and turning them into a custom gesture input device for sound generation. The result is Quadrant, a repurposable human-interface device that proves well-suited to detecting hand gestures and turning them into music.

At its core, Quadrant is a human interface device built around an STM32F0 and four VL6180X time-of-flight proximity sensors. The idea is to stream the measured distance data from the device to the PC as quickly as possible and then transform it into musical interactions on the PC side. Computing a distance takes some time, though, so [chronopoulos] does a pipelined read of the array, streaming the data to the PC over USB at a respectable 30 Hz.
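The pipelining trick is worth spelling out: rather than triggering a sensor and idling until its result is ready, each sensor's conversion overlaps with the readout of the previous one. Here's a rough sketch of the scheduling, using hypothetical triggerRange() and readRange() helpers rather than Quadrant's actual firmware routines:

```cpp
// Sketch of the pipelined read idea across four VL6180X sensors.
// triggerRange() and readRange() are hypothetical helpers standing in for
// the I2C register accesses; Quadrant's real firmware will differ.
#include <cstdint>
#include <cstdio>

constexpr int NUM_SENSORS = 4;

void triggerRange(int sensor) {
    // Stub: would write the VL6180X "start single-shot range" register here.
    (void)sensor;
}

uint8_t readRange(int sensor) {
    // Stub: would wait for this sensor's data-ready flag, then read the result register.
    (void)sensor;
    return 255;
}

void scanOnce(uint8_t distances[NUM_SENSORS]) {
    triggerRange(0);  // prime the pipeline
    for (int i = 0; i < NUM_SENSORS; ++i) {
        if (i + 1 < NUM_SENSORS) {
            triggerRange(i + 1);      // the next sensor converts while...
        }
        distances[i] = readRange(i);  // ...the previous result is read out
    }
}

int main() {
    uint8_t distances[NUM_SENSORS];
    scanOnce(distances);
    for (int i = 0; i < NUM_SENSORS; ++i) {
        std::printf("sensor %d: %u mm\n", i, distances[i]);
    }
    return 0;
}
```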

With the data collected on the PC side, a whole spread of interactions becomes possible. Want a laser harp? No problem, as [chronopoulos] shows how you can “pluck” the virtual strings. How about an orientation sensor? Simply spread your hand over the array and change its angle. Finally, four sensors will also let you detect sweeping gestures that pass over the array, like the swoosh of your hand from one side to the other. To get a sense of these interactions, jump to the video demos at the 2:15 mark after the break.
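The sweep gesture is the easiest one to sketch in code: watch which channel's distance dips below a threshold first and which dips last, and you have a direction. This toy example is our own illustration, not code from the Quadrant firmware or the companion paper:

```cpp
#include <cstdio>

constexpr int CHANNELS = 4;
constexpr int THRESHOLD_MM = 100;  // "hand present" if closer than this

// Returns +1 for a left-to-right sweep, -1 for right-to-left, 0 for none.
int detectSweep(const int frames[][CHANNELS], int numFrames) {
    int firstHit = -1, lastHit = -1;
    for (int f = 0; f < numFrames; ++f) {
        for (int ch = 0; ch < CHANNELS; ++ch) {
            if (frames[f][ch] < THRESHOLD_MM) {
                if (firstHit < 0) firstHit = ch;  // first channel ever covered
                lastHit = ch;                     // most recent channel covered
            }
        }
    }
    if (firstHit < 0 || firstHit == lastHit) return 0;
    return (lastHit > firstHit) ? +1 : -1;
}

int main() {
    // A hand passing over channels 0 -> 3 (distances in millimeters).
    const int frames[4][CHANNELS] = {
        {80, 200, 200, 200}, {200, 70, 200, 200},
        {200, 200, 60, 200}, {200, 200, 200, 90},
    };
    std::printf("sweep direction: %d\n", detectSweep(frames, 4));
    return 0;
}
```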

If you’re curious to dig into the project’s inner workings, [chronopoulos] has kindly put the firmware, schematics, and layout files on GitHub under a generous MIT License. He’s even released a companion paper [PDF] that details the math behind detecting these gestures. And finally, if you just want to cut to the chase and make music of your own, you can actually snag this one on Tindie too.

MEMS sensors are living a great second life outside our phones these days, and this project is another testament to the richness they offer for new project ideas. For more MEMS-sensor-based projects, have a look at this self-balancing robot and magic wand.

Continue reading “Laser Theremin Turns Your Hand Swooshes Into Music”

Lidar House Looks Good, Looks All Around

A lighthouse beams light out to make itself and its shoreline visible. [Daniel’s] lighthouse has the opposite function, using lasers to map out the area around itself. With an Arduino and a ToF sensor, the concept is relatively simple. However, connecting to something that rotates 360 degrees is always a challenge.

The lighthouse is inexpensive — about $40 — and small. Small enough, in fact, to mount on top of a robot, which would give you great situational awareness on a robot big enough to support it. You can see the device in action in the video below. Continue reading “Lidar House Looks Good, Looks All Around”

A Tongue Operated Human Machine Interface

For interfacing with machines, most of us use our hands and fingers. When you don’t have use of your hands (permanently or temporarily), there are limited alternatives. [Dorothee Clasen] has added one more option, [In]Brace, which is basically a small slide switch that you can operate with your tongue.

[In]Brace consists of a custom moulded retainer for the roof of your mouth, on which a small ball with an embedded magnet slides along wire tracks. Above the track is a set of three magnetic sensors that can detect the position of the ball. On the prototype, wires from the three sensors run out of the corner of the user’s mouth to a wireless microcontroller (which looks to us like an ESP8266) hooked behind the user’s ear. In a final product, it would obviously be preferable if everything were sealed in the retainer. We think there is even more potential if one of the many 3-axis Hall effect sensors were used, with a small joystick or rolling ball.

The device could be used by disabled persons, for physical therapy, or just for cases where a person’s hands are otherwise occupied. [Dorothee] created a simple demonstration in which she plays Pong, or Tong in this case, using only the [In]Brace. Hygiene, and making sure it doesn’t somehow become a choking hazard, would be very important if this were ever to become a product, but we think there is some potential.
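We can only guess at the firmware, but reading three magnetic sensors and reporting the ball's position is about as simple as microcontroller code gets. Here's a hypothetical ESP8266 sketch along those lines (the pin choices and digital-output sensor behavior are pure assumptions on our part):

```cpp
// Hypothetical sketch of reading [In]Brace's three magnetic sensors on an ESP8266.
// Assumes each sensor gives a digital "magnet present" output; the real hardware may differ.
const int sensorPins[3] = {14, 12, 13};  // assumed GPIOs (D5, D6, D7 on a NodeMCU board)

void setup() {
  Serial.begin(115200);
  for (int i = 0; i < 3; ++i) {
    pinMode(sensorPins[i], INPUT);
  }
}

void loop() {
  // Report which of the three track positions the magnetic ball sits over;
  // -1 means no sensor currently detects the magnet.
  int position = -1;
  for (int i = 0; i < 3; ++i) {
    if (digitalRead(sensorPins[i]) == HIGH) {
      position = i;
    }
  }
  Serial.println(position);
  delay(50);
}
```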

[Kristina Panos] did a very interesting deep dive into the tongue as an HMI device a while ago, so this isn’t a new idea, but the actual implementations differ quite a lot. Apparently it’s also possible to use your ear muscles as an interface!

Thanks for the tip [Itay]!

LIDAR System Isn’t Just A Rangefinder Anymore

For any project there’s typically a trade-off between quality and cost, as higher quality parts, more features, or any number of other improvements can drive the price up. It seems as though [iliasam] has managed to sidestep this trade-off entirely with his project. His new LIDAR system knocks it out of the park on accuracy, sampling rate, and quality, and somehow manages to cost only around $114 in parts.

A LIDAR system works by sending out many pulses of light in different directions and measuring the reflections as that light returns. LIDAR systems therefore improve with higher-frequency pulses and faster control electronics for both the laser output and the receiver. This system manages to be accurate to within a few centimeters and works out to 25 meters, all while operating at 15 scans per second. The key was a high-powered laser module which can output up to 75 watts for extremely short pulses. More details can be found on this page (Google Translate from Russian).
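A quick back-of-the-envelope calculation (our numbers, not [iliasam]'s) shows why fast electronics matter here: light covers the full 25-meter round trip in well under 200 nanoseconds, and resolving a few centimeters of distance means timing to a couple hundred picoseconds.

```cpp
#include <cstdio>

constexpr double C = 299792458.0;  // speed of light, m/s

int main() {
    // Round-trip time for a target at the system's 25 m maximum range.
    double maxRange = 25.0;
    double roundTrip = 2.0 * maxRange / C;   // about 167 ns

    // Timing resolution needed to resolve 3 cm of distance.
    double resolution = 2.0 * 0.03 / C;      // about 200 ps

    std::printf("Round trip at 25 m: %.1f ns\n", roundTrip * 1e9);
    std::printf("Timing step for 3 cm: %.0f ps\n", resolution * 1e12);
    return 0;
}
```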

Another bonus from this project is that [iliasam] has made everything available on his GitHub page, including the hardware specifications, so as long as you have a 3D printer this won’t take long to reproduce either. There are even detailed breakdowns of how the laser driving circuitry works and of the safety features built in to keep anyone’s vision from being accidentally damaged. Needless to say, this isn’t just a laser rangefinder module, but if you want to see how you can repurpose those, [iliasam] can show you that as well.

Inputs Of Interest: Tongues For Technology

Welcome to the first installment of Inputs of Interest. In this column, we’re going to take a look at various input devices and methods, discuss their merits, give their downsides a rundown, and pontificate about the possibilities they present for hackers. I’ll leave it open to the possibility of spotlighting one particular device (because I already have one in mind), but most often the column will focus on input concepts.

A mouth mouse can help you get your input issues licked. Via @merchusey on Unsplash

Some inputs are built for having fun. Some are ultra-specific shortcuts designed to do work. Others are assistive devices for people with low mobility. And many inputs blur the lines between these three ideas. This time on Inputs of Interest, we’re going to chew on the idea of oral inputs — those driven by the user’s tongue, teeth, or both.

Unless you’ve recently bitten it, burned it, or had it pierced, you probably don’t think much about your tongue. But the tongue is a strong, multi-muscled organ that rarely gets tired. It’s connected to the brain by a cranial nerve, and usually remains undamaged in people who are paralyzed from the neck down. This makes it a viable input-driving option for almost everyone, regardless of ability. And yet, tongues and mouths in general seem to be under-utilized as input appendages.

Ideally, any input device should be affordable and/or open source, regardless of the driving appendage. Whether the user is otherwise able-bodied or isn’t, there’s no reason the device shouldn’t be as useful and beautiful as possible.

Continue reading “Inputs Of Interest: Tongues For Technology”

Lighting The Way For The Visually Impaired

The latest creation from Bengali roboticist [nabilphysics] might sound familiar. His laser-augmented glove gives users the ability to detect objects horizontally in front of them, much like a cane or pole is used by the visually impaired to navigate through a physical space.

As a stand-in for the physical cane, he uses the VL53L0X time-of-flight (TOF) sensor, which measures the time taken for laser light to bounce back to the sensor. These are much more accurate than IR distance sensors and have a much finer focus than ultrasonic sensors, giving excellent directionality.

While the sensors can suffer interference from background light or other time-of-flight sensors, their main advantages are speed of calculation (a single shot is enough to compute the distances within a scene) and an efficient algorithm that simplifies the extraction of distance data. In contrast to stereo vision, which requires complex correlation algorithms, getting information out of a time-of-flight sensor is entirely direct and requires very little processing power.

The glove delivers haptic feedback to the user to indicate when an object is in their way. The feedback is controlled by an Arduino Pro Mini, powered by a LiPo battery. The code is uploaded to the Arduino via an FTDI adapter, and works by taking continuous readings from the time-of-flight sensor and determining whether the object in front is within 450 millimeters of the glove, at which point it triggers the vibration motor to alert the user to the object’s presence.
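That control loop is short enough to sketch in its entirety. The following is our own approximation, assuming the Adafruit VL53L0X Arduino library and an arbitrary motor pin; [nabilphysics]'s actual code may use a different driver and wiring:

```cpp
#include <Wire.h>
#include "Adafruit_VL53L0X.h"  // assumed library; the original project may use another driver

Adafruit_VL53L0X tof = Adafruit_VL53L0X();
const int MOTOR_PIN = 3;         // hypothetical pin driving the vibration motor
const int TRIGGER_MM = 450;      // alert when an object is closer than this

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  Wire.begin();
  tof.begin();
}

void loop() {
  VL53L0X_RangingMeasurementData_t measure;
  tof.rangingTest(&measure, false);  // take one reading, no debug output

  // RangeStatus 4 means "out of range"; only act on valid readings.
  bool objectClose = (measure.RangeStatus != 4) && (measure.RangeMilliMeter < TRIGGER_MM);
  digitalWrite(MOTOR_PIN, objectClose ? HIGH : LOW);
}
```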

Since the glove used for the project is a bicycle glove, the form factor is straightforward — the Arduino, motor, battery, and switch are all located inside a plastic box on the top of the glove, while the time-of-flight sensor sticks out to make continuous measurements when the glove is switched on.

In general, the setup is fairly simple, but the idea of using a time-of-flight sensor rather than an IR or sonar sensor is interesting. In the broader world of sensing, LIDAR is already the de facto choice for autonomous vehicles and robots that rely on distance measurement. That full three-dimensional data wouldn’t be much use here, though, and this sensor works without any mechanical moving parts, since it doesn’t rely on the point-by-point scanning of a laser beam that LIDAR systems use.