We don’t usually speculate on the true identity of the hackers behind these projects, but when [TN666]’s acoustic drone detector crossed our desk with the name “Batear”, we couldn’t help but wonder: is that you, Bruce? Then again, with a BOM consisting entirely of one ESP32-S3 and an ICS-43434 I2S microphone, this isn’t exactly going to require the Wayne fortune to pull off. Indeed, [TN666] estimates a project cost of only 15 USD, which really democratizes drone detection.

The key is what you might call ‘retrovation’: innovation by looking backwards. Most drone detection schemes look to the way we search for larger aircraft, and use RADAR. Before RADAR there were acoustic detectors, like the famous Japanese “war tubas” that went viral many years ago. RADAR modules aren’t cheap, but MEMS microphones are, and drones, especially quadcopters, aren’t exactly quiet. That made acoustic detection the obvious choice for [TN666].
Of course, that’s not much good if the ESP32 is phoning home to some Azure or AWS server to get the acoustic data processed by some giant machine-learning model. That would be the easy thing to do with an ESP32, but if you’re under drone attack or surveillance, it’s not likely you want to rely on the cloud. There are always privacy concerns with using other people’s hardware, too. [TN666] again reached backwards to a more traditional algorithmic approach: specifically, Goertzel filters to detect the acoustic frequencies used by drones. For analyzing specific frequency buckets, the Goertzel algorithm is as light as they come, which means everything can run locally on the ESP32. They call that “edge computing” these days, but we just call it common sense.
The downside is that, since we’re only listening at specific frequencies, environmental noise can be an issue. Calibration for a given environment is suggested, as is a foam sock on the microphone to avoid false positives from wind noise. It occurs to us that the sort of physical horn used in those ‘war tubas’ would both shelter the microphone from wind and increase range and directionality.
[TN] does intend to explore machine learning models for this hardware as well; he seems to think that an ESP32-NN or small TensorFlow Lite model might outdo the Goertzel algorithm. He might be onto something, but we’re cheering for Goertzel on that one, simply on the basis that it’s a more elegant solution, one we’ve dived into before. It even works on the ATtiny85, which isn’t something you can say about even the lightest TensorFlow model.
Thanks to [TN] for the tip. Playboy billionaire or not, you can send your projects in to the tips line to see them appear some bat-time, on this bat-channel.

In WW1, they spread multiple mics out to get triangulation on artillery; I’m sure the same could work here:
https://en.wikipedia.org/wiki/Artillery_sound_ranging
Ukraine is doing just that right now with cell phone hardware on poles.
Thanks so much for the awesome write-up, Tyler! I can confirm I am not Bruce Wayne, mostly because my R&D budget is capped at $15. 😂
I absolutely love the “retrovation” concept you mentioned. The Goertzel algorithm is doing a fantastic job on the bench isolating those rotor harmonics locally, without needing an RTOS or massive memory overhead. And the suggestion to use a foam sock for wind noise is spot on—environmental noise is exactly the biggest challenge right now.
As mentioned in the article, Batear is currently a solid flashable baseline, but I desperately need the community’s help with real-world outdoor testing. If any of you have an ESP32, an I2S mic, and a micro-drone (DJI, FPV), please come say hi on the GitHub repo! PRs for noise filtering, environmental threshold calibration, or even TinyML explorations are highly welcome.
Let’s build a world where no one needs to fear the sky. 🦇
I wonder how this fares at detecting figure-8-style drone propellers.
Perhaps the edge geometry could be adjusted to add other frequencies to confuse an automated detection system.
You could run pairs of props at different speeds to change the audio profile rather easily. Indeed, just use different props on each arm, and the software will sort out running them at different speeds, giving different notes.
If you put a different prop on each motor, and possibly even a different motor on each arm, you’d get a very different audio profile, and the software wouldn’t care – the drone would still fly.
I would rather see a full FFT, higher sample rates, and some analysis as to why we should expect this to work. I get it, someone wanted to vibe code something. It’s fun, I guess. I see no statistical analysis on it or real-world testing. It ranks higher for me than the poor soul who was detecting chemical weapons that didn’t exist due to chatgpt psychosis. But only by a small margin, mostly because I don’t feel compelled to call a help line for the creator…
@TN666, does it handle different drones with different RPM (frequency) ranges? For example, small RC helicopters (comparable to a DJI Mini) typically operate at 5-6kRPM (80-100Hz), can be flown with lower head speeds, down to ~4kRPM (~67Hz), and would typically produce an additional signal at 2x the base frequency as the blades pass over the tail boom. The tail rotor spins a few times faster but is much smaller and lightly loaded.
Bigger helis run even lower head speeds, often in the 1-2kRPM range.
With two mics, you could run a bearing indicator on it (BTR/bearing-time record), or with 3+ you could do full 3D triangulation. Not sure if the cores could manage that, but the two-mic bearing it could probably do.
or use this algorithm as the primary detector, and fire up a bearing algorithm when you detect something?
But the power of it is not the cheap price by itself, but that it can potentially be deployed at mass scale. Once you have a lot of these systems reporting bearings for the drones they can hear, you can build a very cheap drone ‘radar’.
You don’t have to go back to WW1: Ukraine has a large acoustic sensor network for drone detection and publishes a map of incoming drone attack routes daily. You will get a better result with an FFT, which the ESP32 is capable of and has libraries for, and which will be more computationally efficient for anything more than about 10-15% of the DFT range. You need the range because the blade-passing frequency is likely to be the strongest signal, and that depends on rotor speed, number of blades, etc., and is likely to be in the range 100-500Hz. Other frequencies will also be significant in drone identification, and some people claim to be able to identify even the drone model by acoustic analysis.
How much coverage can someone realistically get without either littering an entire country with MCUs or using serious microphones?
I completely agree with you, though. Full FFT or bust. I think the premise this was made on is flawed, though I guess we’re supposed to admire how little effort went into making it.
Always wondered:
Did the horn and reflector based acoustic aircraft and dirigible detectors used with the human ear have some kind of protection of the ear to block loud noises?
Grok 3 AI:
No, these horn- and reflector-based acoustic aircraft and dirigible detectors (often called sound locators, war tubas, or acoustic mirrors) did not incorporate dedicated protection to block or attenuate loud noises for the operator’s ears.
Why are you wasting our time/space by posting shitty ‘AI’ answers in a comment section?
You/we still have no idea if the answer is true.
Thanks for making the whole world just a tiny bit worse by feeding the ‘AI’ hype instead of shunning it.
Great Job!
Reminded me of this one: https://hackaday.com/2017/02/10/acoustic-mirrors-how-to-find-planes-without-radar/
The first man used to piss on their arm and wave it in the air; it would tell them if birds were in the air… I’m sure the same could work here🤔