
Ultrasonic Array Powers This Halloween Spirit Writer

The spooky season is upon us, and with it the race to come up with the geekiest way to scare the kids. Motion-activated jump-scare setups are always a crowd-pleaser, but kind of a cheap thrill in our opinion. So if you’re looking for something different for your Halloween scare-floor, you might consider “spirit writing” with ultrasound.

The idea that [Dan Beaven] has here is a variation on the ultrasonic levitation projects we’ve seen so many of over the last couple of years. While watching bits of styrofoam suspended in midair by the standing waves generated by carefully phased arrays of ultrasonic transducers is cool, [Dan] looks set to take the concept to the next level. Very much still a prototype, the setup has a 256-transducer matrix suspended above a dark surface. Baking powder is sprinkled over the writing surface to stand in for dust, which is easily disturbed by the sound waves reflecting off the hard surface. The array can be controlled to make it look like an unseen hand is tracing out a design in the dust, and the effect is pretty convincing. We’d have chosen “REDRUM” rather than a pentagram, but different strokes.
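The trick behind both levitation and this kind of dust writing is controlling the relative phase of each transducer so the waves reinforce each other right where you want them. Here’s a minimal sketch of that focusing math, assuming a 16×16 grid of 40 kHz transducers on a 10 mm pitch; the geometry, frequency, and the focal-point sweep are our own illustration, not [Dan]’s actual firmware.

```python
import numpy as np

# Per-element phase offsets to focus a 16x16 ultrasonic array at a point below
# it. Geometry and frequency are assumed values for illustration only.
C_AIR = 343.0            # speed of sound in air, m/s
F = 40e3                 # typical transducer frequency, Hz
WAVELENGTH = C_AIR / F   # roughly 8.6 mm

PITCH = 0.010            # element spacing, m (assumed)
xs = (np.arange(16) - 7.5) * PITCH
xx, yy = np.meshgrid(xs, xs)
elements = np.stack([xx, yy, np.zeros_like(xx)], axis=-1)   # array plane at z = 0

def focus_phases(target):
    """Phase each element so all wavefronts arrive at `target` in phase."""
    dist = np.linalg.norm(elements - np.asarray(target), axis=-1)
    return (-2 * np.pi * dist / WAVELENGTH) % (2 * np.pi)

# Drag the focal point along a line 15 cm below the array to "trace" in the dust.
for x in np.linspace(-0.05, 0.05, 50):
    phases = focus_phases([x, 0.0, -0.15])
    # ...push `phases` out to the transducer drivers here
```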

[Dan] obviously has a long way to go before this is ready for the big night, but the proof-of-concept is sound. While we wait for the finished product, we’ll just file this away as a technique that might have other applications. SMD components are pretty small and light, after all — perhaps an ultrasonic pick-and-place? In which case, sonic tweezers might be just the thing.

Continue reading “Ultrasonic Array Powers This Halloween Spirit Writer”

Hackaday Podcast Episode 346: Melting Metal In The Microwave, Unlocking Car Brakes And Washing Machines, And A Series Of Tubes

Wait, what? Is it time for the podcast again? Seems like only yesterday that Dan joined Elliot for the weekly rundown of the choicest hacks for the last 1/52 of a year, but here we are. We had quite a bit of news to talk about, including the winners of the Component Abuse Challenge — warning, some components were actually abused for this challenge. They’re also a trillion pages deep over at the Internet Archive, a milestone that seems worth celebrating.

As for projects, both of us kicked things off with “Right to repair”-adjacent topics, first with a washing machine that gave up its secrets with IR and then with a car that refused to let its owner fix the brakes. We heated things up with a microwave foundry capable of melting cast iron — watch your toes! — and looked at a tiny ESP32 dev board with ludicrously small components. We saw surveyors go to war, watched a Lego sorting machine go through its paces, and learned about radar by spinning up a sonar set from first principles.

Finally, we wrapped things up with another Al Williams signature “Can’t Miss Articles” section, with his deep dive into the fun hackers can have with the now-deprecated US penny, and his nostalgic look at pneumatic tube systems.

Download this 100% GMO-free MP3.

Continue reading “Hackaday Podcast Episode 346: Melting Metal In The Microwave, Unlocking Car Brakes And Washing Machines, And A Series Of Tubes”

Real-Time Beamforming With Software-Defined Radio

It is perhaps humanity’s most defining trait that we are always striving to build things better, stronger, faster, or bigger than that which came before. Taller skyscrapers, longer bridges, and computers with more processors all advance thanks to this relentless persistence.

In the world of radio, we might assume that a better signal simply means adding more power, but performance can also improve by adding more antennas. Not only do more antennas increase gain, but they can also be electronically steered, and [MAKA] demonstrates how to do this with a software-defined radio (SDR) phased array.

The project comes to us in two parts. In the first part, two ADALM-Pluto SDR modules are used, with one set to transmit and the other to receive. The transmitting SDR has two channels, one of which has the phase angle of the transmitted radio wave fixed while the other is swept from -180° to 180°. The two waves interfere as the sweep progresses, and at certain phase offsets they add constructively, delivering much higher gain at the receiver. All of this is presented to the user via a GUI.
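If you want a feel for why the receiver sees a sharp peak at one particular offset, here’s a toy numpy model of the sweep. It assumes equal path lengths from both transmit antennas to the receiver and is just the interference math, not [MAKA]’s actual Pluto code.

```python
import numpy as np

# One transmit channel held at 0 degrees, the other swept from -180 to +180;
# watch the combined amplitude at a single receiver (equal path lengths assumed).
phase_sweep = np.deg2rad(np.arange(-180, 181))
fixed = np.exp(1j * 0.0)           # reference channel
swept = np.exp(1j * phase_sweep)   # swept channel
combined = np.abs(fixed + swept)   # amplitude seen by the receiver

best = np.degrees(phase_sweep[np.argmax(combined)])
gain_db = 20 * np.log10(combined.max() / np.abs(fixed))
print(f"Peak at {best:.0f} degrees offset, about {gain_db:.1f} dB over one channel")
```

With unequal path lengths the peak shifts away from zero, and that is exactly the effect a phased array exploits to steer its beam toward a chosen receiver.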

The second part works a bit like the first, but in reverse. By using the two antennas as receivers instead of transmitters, the phased array can calculate the precise angle of arrival of a particular radio wave, allowing the user to pinpoint the direction it is being transmitted from. These principles form the basis of things like phased array radar, and if you’d like more visual representations of how these systems work, take a look at this post from a few years ago.
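The direction finding boils down to comparing the phase of the same wavefront at the two antennas. A back-of-the-envelope version, assuming a half-wavelength spacing at 2.4 GHz and a made-up phase measurement (neither is necessarily what the project uses), looks like this:

```python
import numpy as np

# Angle of arrival from the phase difference between two receive antennas.
# Carrier frequency, spacing, and the measured phase are illustrative values.
C = 3e8
F = 2.4e9                   # assumed carrier, Hz
WAVELENGTH = C / F
D = WAVELENGTH / 2          # half-wavelength element spacing

delta_phi = np.deg2rad(40)  # phase difference measured between the two channels
theta = np.arcsin(delta_phi * WAVELENGTH / (2 * np.pi * D))
print(f"Estimated angle of arrival: {np.degrees(theta):.1f} degrees off broadside")
```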

Continue reading “Real-Time Beamforming With Software-Defined Radio”

Supercon 2024: Killing Mosquitoes With Freaking Drones, And Sonar

Suppose that you want to get rid of a whole lot of mosquitoes with a quadcopter drone by chopping them up in the rotor blades. If you had really good eyesight and pretty amazing piloting skills, you could maybe fly the drone yourself, but honestly this looks like it should be automated. [Alex Toussaint] took us on a tour of how far he has gotten toward that goal in his amazingly broad-ranging 2024 Superconference talk. (Embedded below.)

The end result is an amazing 380-element phased sonar array that lets him detect the location of mosquitoes in mid-air, identifying them by their particular micro-doppler return signatures. The gadget, called LeSonar2, has been open-sourced and doubtless has many other applications at the tweak of an algorithm.

Rolling back in time a little bit, the talk starts off with [Alex]’s thoughts about self-guiding drones in general. For obstacle avoidance, you might think of using a camera, but cameras can be heavy and require a lot of expensive computation. [Alex] favored ultrasonic range finding. An array of ultrasonic range finders, though, can locate smaller objects, and do so more precisely, than the single ranger you probably have in mind. This got [Alex] into beamforming, and he built an early prototype, which we’ve actually covered in the past. If you’re into this sort of thing, the talk contains a very nice description of the necessary DSP.
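The heart of that DSP is delay-and-sum beamforming: delay each receiver’s samples by the time a wavefront from a chosen direction takes to sweep across the array, then add everything up. Here’s a stripped-down sketch for a simple line of receivers; the sample rate and element spacing are assumptions, and the pipeline described in the talk is considerably more sophisticated.

```python
import numpy as np

# Bare-bones delay-and-sum beamformer for a line array of ultrasonic receivers.
# FS and SPACING are assumed values, not the ones used in the talk.
C = 343.0         # speed of sound, m/s
FS = 192_000      # sample rate, Hz
SPACING = 0.004   # element spacing, m

def delay_and_sum(samples, steer_deg):
    """samples: (n_mics, n_samples) array; returns the signal steered to steer_deg."""
    n_mics = samples.shape[0]
    delays = SPACING * np.arange(n_mics) * np.sin(np.deg2rad(steer_deg)) / C
    shifts = np.round(delays * FS).astype(int)
    shifts -= shifts.min()            # keep all shifts non-negative
    out = np.zeros(samples.shape[1])
    for mic, shift in zip(samples, shifts):
        out += np.roll(mic, shift)
    return out / n_mics

# Sweep the steering angle and pick the direction with the most echo energy:
# angles = np.arange(-60, 61, 2)
# energies = [np.sum(delay_and_sum(recording, a) ** 2) for a in angles]
```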

[Alex]’s big breakthrough, though, came with shrinking down the ultrasonic receivers. The angular resolution of a beamforming array is limited by the spacing between its microphone elements, and traditional ultrasonic devices like the ones we use in cars are kinda bulky. So here comes a hack: the TDK T3902 MEMS microphones work just fine up into the ultrasound range, even though they’re designed for human hearing. Combining 380 of these in a very tightly packed array, and pushing all of their parallel data into an FPGA for computation, led to the LeSonar2. Bigger transducers put out ultrasound pulses, the FPGA does some very intense filtering and combining of the output of each microphone, and the resulting 3D range data is sent out over USB.
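To put a number on why receiver size matters: for clean steering, a phased array wants its elements no more than half a wavelength apart, which gets tight quickly at ultrasonic frequencies. The frequencies below are just examples.

```python
# Half-wavelength spacing rule of thumb at a few example ultrasonic frequencies.
C_AIR = 343.0   # speed of sound in air, m/s
for f_khz in (25, 40, 80):
    wavelength_mm = C_AIR / (f_khz * 1e3) * 1e3
    print(f"{f_khz} kHz: wavelength {wavelength_mm:.1f} mm, "
          f"max element pitch {wavelength_mm / 2:.1f} mm")
```

At 40 kHz that works out to a pitch of about 4 mm, which a typical automotive-style transducer can’t come close to, while a millimeter-scale MEMS microphone fits with room to spare.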

After a marvelous demo of the device, we get to the end-game application: finding and identifying mosquitoes in mid-air. If you don’t want to kill flies, wasps, bees, or other useful pollinators while eradicating the tiny little bloodsuckers that are the drone’s target, you need to be able not only to locate bugs, but also to tell mosquitoes apart from the others.

For this, he uses the micro-doppler signatures that the different wing beats of the various insects put out. Wasps have a very wide-band doppler echo – their relatively long and thin wings are moving slower at the roots than at the tips. Flies, on the other hand, have stubbier wings, and emit a tighter echo signal. The mosquito signal is even tighter.
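The bandwidth of those echoes follows from the sonar Doppler relation: a patch of wing moving at speed v shifts a carrier at frequency f by roughly 2vf/c. The wingtip speeds below are rough guesses purely to illustrate the ordering, not measurements from the talk.

```python
# Rough micro-doppler spread from wingbeat motion. The carrier frequency and
# the wingtip speeds are illustrative guesses, not values from the talk.
C_AIR = 343.0   # speed of sound, m/s
F0 = 40e3       # assumed sonar carrier, Hz

def doppler_spread_hz(wingtip_speed_mps):
    return 2 * wingtip_speed_mps * F0 / C_AIR

for insect, tip_speed in (("wasp", 3.0), ("fly", 1.5), ("mosquito", 0.8)):
    print(f"{insect:>8}: roughly +/- {doppler_spread_hz(tip_speed):.0f} Hz of spread")
```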

If you had told us that you could use sonar to detect mosquitoes at a distance of a few meters, much less locate them and differentiate them from their other insect brethren, we would have thought it was impossible. But [Alex] and his team are building these devices, and you can even build one yourself if you want. So watch the talk, learn about phased arrays, and start daydreaming about what you would use something like this for.

Continue reading “Supercon 2024: Killing Mosquitoes With Freaking Drones, And Sonar”

Octet Of ESP32s Lets You See WiFi Like Never Before

Most of us see the world in a very narrow band of the EM spectrum. Sure, there are people with a genetic quirk that extends the range a bit into the UV, but it’s a ROYGBIV world for most of us. Unless, of course, you have something like this ESP32 antenna array, which gives you an augmented reality view of the WiFi world.

According to [Jeija], “ESPARGOS” consists of an antenna array board and a controller board. The antenna array has eight ESP32-S2FH4 microcontrollers and eight 2.4 GHz WiFi patch antennas spaced a half-wavelength apart in two dimensions. The ESP32s extract channel state information (CSI) from each packet they receive and send it on to the controller board, where another ESP32 streams the data over Ethernet while providing the clock and phase reference signals needed to make the phased array work. This gives you all the information you need to calculate where a signal is coming from and how strong it is, which is used to plot a sort of heat map overlaid on a webcam image of the same scene.
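For a rough idea of how per-antenna CSI phases become a heat map, here’s a conventional (Bartlett) beamformer sketch over a small grid of half-wavelength-spaced elements. The 2×4 layout and the synthetic CSI vector are our assumptions for illustration, not a peek inside ESPARGOS.

```python
import numpy as np

# Turn per-antenna CSI phases into a coarse direction-of-arrival power map.
# The 2x4 element layout and the synthetic CSI are assumptions for illustration.
ROWS, COLS = 2, 4
ix, iy = np.meshgrid(np.arange(COLS), np.arange(ROWS))
pos = np.stack([ix.ravel(), iy.ravel()], axis=-1) * 0.5    # positions in wavelengths

def steering_vector(az_deg, el_deg):
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    k = np.array([np.sin(az) * np.cos(el), np.sin(el)])    # direction cosines in the array plane
    return np.exp(2j * np.pi * pos @ k)

# Fake a measurement: a signal from (azimuth 20, elevation 6) plus a little noise.
rng = np.random.default_rng(0)
csi = steering_vector(20, 6) + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

az_grid = np.arange(-60, 61, 2)
el_grid = np.arange(-30, 31, 2)
heatmap = np.array([[np.abs(np.vdot(steering_vector(a, e), csi)) ** 2
                     for a in az_grid] for e in el_grid])
e_idx, a_idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print(f"Strongest return near azimuth {az_grid[a_idx]} deg, elevation {el_grid[e_idx]} deg")
```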

The results are pretty cool. Walking through the field of view of the array, [Jeija]’s smartphone shines like a lantern, with very little perceptible lag between the WiFi and the visible light images. He’s also able to demonstrate reflection off metallic surfaces, penetration through the wall from the next room, and even outdoor scenes where the array shows how different surfaces reflect the signal. There’s also a demonstration of using multiple arrays to determine angle and time delay of arrival of a signal to precisely locate a moving WiFi source. It’s a little like a reverse LORAN system, albeit indoors and at a much shorter wavelength.
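One way to combine bearings from two arrays is plain triangulation: each array reports an angle, and the source sits where the two bearing lines cross. A toy version with invented positions and angles (nothing measured from the video) looks like this:

```python
import numpy as np

# Two arrays at known positions each report a bearing to the transmitter;
# the intersection of the bearing lines is the estimated position.
# All numbers here are invented for illustration.
p1, theta1 = np.array([0.0, 0.0]), np.deg2rad(40)    # array 1: position (m), bearing
p2, theta2 = np.array([5.0, 0.0]), np.deg2rad(120)   # array 2: position (m), bearing

d1 = np.array([np.cos(theta1), np.sin(theta1)])
d2 = np.array([np.cos(theta2), np.sin(theta2)])

# Solve p1 + t1*d1 == p2 + t2*d2 for the two line parameters.
t1, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
print("Estimated source position:", p1 + t1 * d1)
```

Folding the time-delay-of-arrival measurements in on top of the angles can tighten up the fix further, especially when the bearing lines cross at a shallow angle.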

There’s a lot in this video and the accompanying documentation to unpack. We haven’t even gotten to the really cool stuff like using machine learning to see around corners by measuring reflected WiFi signals. ESPARGOS looks like it could be a really valuable tool across a lot of domains, and a heck of a lot of fun to play with too.

Continue reading “Octet Of ESP32s Lets You See WiFi Like Never Before”

2024 Supercon: Third Round Of Super Speakers

The third and final round of the 2024 Supercon talk announcements brings us to the end of the lineup, and the full schedule is now up on Hackaday.io.

With Supercon just a couple weeks away, we hope you have your tickets already! Stay tuned tomorrow for a badge reveal.

Continue reading “2024 Supercon: Third Round Of Super Speakers”

A3 Audio: The Open Source 3D Audio Control System

Sometimes startups fail not because of technical problems, but because they can’t drum up enough interest from potential investors to keep development moving. The latter appears to be the case for A3 Audio. So the developers have done the next best thing: made the project open source, and they are actively looking for more people to pitch in. So what is it? The project is centered around the idea of spatial audio, or 3D audio. The system allows ‘audio motion’ to be captured, mixed, and replayed, all the while synchronized to the music. At least that’s as much as we can figure out from the documentation!

The system is made up of three main pieces of hardware. The first part is the core (or server), which is essentially a Linux PC running an OSC (Open Sound Control) server. The second part is a ‘motion sampler’, which feeds motion data into the server. Lastly, there is a mixer, which communicates with the core using the OSC protocol (over Ethernet) to allow pre-mixing of spatial samples and deployment of samples onto the audio outputs. In addition to its core duties, the ‘core’ also manages effects and speaker handling.
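For a flavor of what OSC traffic to such a server can look like, here’s a tiny example using the python-osc package. The IP address, port, and address paths are placeholders for illustration, not A3 Audio’s actual message schema.

```python
from pythonosc.udp_client import SimpleUDPClient

# Send a hypothetical spatial-audio position message to an OSC server.
# The IP, port, and OSC address path are placeholders, not A3 Audio's schema.
client = SimpleUDPClient("192.168.1.50", 9000)

# e.g. place source 3 at azimuth 45 deg, elevation 10 deg, distance 2.5 m
client.send_message("/source/3/position", [45.0, 10.0, 2.5])
```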

The motion module is based around a Raspberry Pi 4 and a Teensy microcontroller, with a 7-inch touchscreen display for user input and oodles of NeoPixels for blinky feedback on the button matrix. The mixer module seems simpler, using just a Teensy to interface with the UI components.

We don’t see many 3D audio projects, but this neat implementation of a beam-forming microphone phased array sure looks interesting.