Dub Siren, a 555-powered synthesizer

Classic Chip Line-Up Powers This Fun Dub Siren Synth

There’s a certain elite set of chips that fall into the “cold, dead hands” category, and they tend to be parts that have proven their worth over decades, not years. Chief among these is the ubiquitous 555 timer chip, which nearly 50 years after its release still finds its way into the strangest places. Add in other silicon stalwarts like the 741 op-amp and the LM386 audio amp, and you’ve got a Hall of Fame lineup for almost any project.

That’s exactly the complement of chips that powers this fun little dub siren. As [lonesoulsurfer] explains, dub sirens started out as actual sirens from police cars and the like that were used as part of musical performances. The ear-splitting versions were eventually replaced with sampled or synthesized siren effects for recording studio and DJ use, which leads us to the current project. The video below starts with a demo, and it’s hard to believe that the diversity of sounds this box produces comes from just a pair of 555s coupled by a 741 buffer. Five pots on the main PCB control the effects, while a second commercial reverb module, modified to support echo effects as well, adds depth and presence. A built-in speaker and a nice-looking wood enclosure complete the build, which honestly sounds better than any 555-based synth has a right to.

Interested in more about the chips behind this build? We’ve talked about the 555 and how it came to be, taken a look inside the 741, and gotten a lesson in LM386 loyalty.



Hackaday Links: June 6, 2021

There are a bunch of newly minted millionaires this week, after it was announced that Stack Overflow would be acquired for $1.8 billion by European tech investment firm Prosus. While not exactly a household name, Prosus is a big player in the Chinese tech scene, where it has about a 30% stake in Chinese internet company Tencent. They trimmed their holdings in the company a bit recently, raising $15 billion in cash, which we assume will be used to fund the SO purchase. As with all such changes, there’s considerable angst out in the community about how this could impact everyone’s favorite coding help site. The SO leadership is adamant that nothing will change, but only time will tell.


Speech Recognition On An Arduino Nano?

Like most of us, [Peter] had a bit of extra time on his hands during quarantine and decided to take a look back at speech recognition technology in the 1970s. Quickly, he started thinking to himself, “Hmm…I wonder if I could do this with an Arduino Nano?” We’ve all probably had similar thoughts, but [Peter] really put his theory to the test.

The hardware itself is pretty straightforward. There is an Arduino Nano to run the speech recognition algorithm and a MAX9814 microphone amplifier to capture the voice commands. The beauty of [Peter’s] approach, however, lies in his software implementation: an interplay between a custom PC program he wrote and the Arduino Nano. The learning aspect of the algorithm runs on the PC, while recognition happens in real time on the Nano, the typical split for machine learning deployed on a microcontroller. To capture sample audio commands, or utterances, [Peter] first had to optimize the Nano’s ADC so he could get sufficient sample rates for speech processing. With a bit of low-level programming, he achieved a sample rate of 9 ksps, which is plenty fast for audio processing.
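For the curious, the usual trick on the Nano’s ATmega328P is to put the ADC in free-running mode and grab results from an interrupt instead of calling analogRead(). Here’s a minimal sketch of that technique; the input channel, voltage reference, and buffer handling are our assumptions, not [Peter’s] actual code:

```cpp
// Free-running ADC on the Nano's ATmega328P: 16 MHz / 128 prescaler
// = 125 kHz ADC clock, 13 cycles per conversion, so roughly 9.6 ksps.
volatile int16_t latestSample;        // most recent audio sample
volatile bool sampleReady = false;

void setup() {
  ADMUX  = _BV(REFS0);                // AVcc reference, channel A0 (assumed)
  ADCSRB = 0;                         // free-running auto-trigger source
  ADCSRA = _BV(ADEN) | _BV(ADATE) | _BV(ADIE)    // enable, auto-trigger, IRQ
         | _BV(ADPS2) | _BV(ADPS1) | _BV(ADPS0); // /128 prescaler
  ADCSRA |= _BV(ADSC);                // start the first conversion
}

ISR(ADC_vect) {                       // fires ~9600 times per second
  latestSample = (int16_t)ADC - 512;  // center the 10-bit result on zero
  sampleReady = true;
}

void loop() {
  if (sampleReady) {
    sampleReady = false;
    // feed latestSample into the segment buffer / filters here
  }
}
```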

To analyze the utterances, he first divided each sample utterance into 50 ms segments. Think of dividing a single spoken word into its different syllables, like analyzing the “se-” in “seven” separately from the “-ven.” 50 ms might be too long or too short to capture each syllable cleanly, but hopefully that gives you a good mental picture of what [Peter’s] program is doing. He then calculated the energy in five different frequency bands for every segment of every utterance. Normally that’s done using a Fourier transform, but the Nano doesn’t have enough processing power to compute one in real time, so [Peter] took a different approach: he implemented five sets of digital bandpass filters, allowing him to compute the energy of the signal in each frequency band far more cheaply.
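In code, each band can be as simple as a biquad bandpass filter whose output is squared and summed over the 50 ms segment (about 450 samples at 9 ksps). The sketch below is our illustration of that idea, with coefficients assumed to be precomputed on the PC; it is not [Peter’s] implementation:

```cpp
// One biquad bandpass filter per band; energy = sum of squared outputs
// over a segment. Coefficients would come from a filter-design tool,
// computed for the 9 ksps sample rate.
struct Biquad {
  float b0, b1, b2, a1, a2;  // bandpass coefficients (precomputed)
  float z1 = 0, z2 = 0;      // filter state
  float step(float x) {      // direct form II transposed
    float y = b0 * x + z1;
    z1 = b1 * x - a1 * y + z2;
    z2 = b2 * x - a2 * y;
    return y;
  }
};

const int BANDS = 5;
const int SEG_SAMPLES = 450;          // 50 ms at 9 ksps
Biquad band[BANDS];                   // one filter per frequency band
float energy[BANDS];                  // per-segment energy, per band

void processSegment(const int16_t *seg) {
  for (int b = 0; b < BANDS; b++) energy[b] = 0;
  for (int i = 0; i < SEG_SAMPLES; i++) {
    for (int b = 0; b < BANDS; b++) {
      float y = band[b].step(seg[i]);
      energy[b] += y * y;             // accumulate band energy
    }
  }
}
```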

The energy of each frequency band for every segment is then sent to a PC, where a custom-written program creates “templates” from the sample utterances. The crux of the algorithm is measuring how closely the band energies of an incoming utterance, segment by segment, match each stored template. The PC program produces a .h file that can be compiled directly into the Nano firmware. He uses the example of recognizing the numbers 0-9, but you could just as easily train commands like “start” or “stop” instead.
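A hypothetical version of that comparison step: sum the squared differences between an utterance’s per-segment band energies and each stored template, then pick the closest word. The array sizes and names here merely stand in for whatever the generated .h file actually contains:

```cpp
// Nearest-template classifier: smallest summed squared difference wins.
// WORDS, SEGMENTS, and 'templates' mimic the generated .h file.
const int WORDS = 10;                 // e.g. the digits 0-9
const int SEGMENTS = 20;              // segments per utterance (assumed)
const int BANDS = 5;
extern const float templates[WORDS][SEGMENTS][BANDS]; // from the PC tool

int bestMatch(const float utterance[SEGMENTS][BANDS]) {
  int best = -1;
  float bestDist = 1e30f;
  for (int w = 0; w < WORDS; w++) {
    float dist = 0;
    for (int s = 0; s < SEGMENTS; s++)
      for (int b = 0; b < BANDS; b++) {
        float d = utterance[s][b] - templates[w][s][b];
        dist += d * d;
      }
    if (dist < bestDist) { bestDist = dist; best = w; }
  }
  return best;                        // index of the recognized word
}
```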

[Peter] admits that you can’t implement the kind of speech recognition on an Arduino Nano that we’ve come to expect from those covert listening devices, but he notes that small, hands-free gadgets like a head-mounted multimeter could benefit from single-word or single-phrase voice commands. And maybe it could put your mind at ease knowing everything you say isn’t immediately getting beamed into the cloud and handed to our AI overlords. Or maybe we’re all starting to get used to this. Whatever your position on the current state of AI, hopefully you’ve gained some inspiration for your next project.

Amazon Echo Gets Open Source Brain Transplant

There’s little debate that Amazon’s Alexa ecosystem makes it easy to add voice control to your smart home, but not everyone is thrilled with how it works. The fact that all of your commands are bounced off of Amazon’s servers instead of staying internal to the network is an absolute no-go for the more privacy-minded among us, and honestly, it’s hard to blame them. The whole thing is pretty creepy when you think about it.

Which is precisely why [André Hentschel] decided to look into replacing the firmware on his Amazon Echo with an open source alternative. The Linux-powered first generation Echo had been rooted years before thanks to the diagnostic port on the bottom of the device, and there were even a few firmware images floating around out there that he could poke around in. In theory, all he had to do was remove anything that called back to the Amazon servers and replace the proprietary bits with comparable free software libraries and tools.

Tapping into the Echo’s debug port.

Of course, it ended up being a little trickier than that. The original Echo runs a 2.6.x series Linux kernel, which, even for a device released in 2014, is painfully outdated. With its similarly archaic version of glibc, newer Linux software would refuse to run. [André] found that building an up-to-date filesystem image for the Echo wasn’t a problem, but getting the niche device’s hardware working on a more modern kernel was another story.

He eventually got the microphone array working, but not the onboard digital signal processor (DSP). Without the DSP, the age of the Echo’s hardware really started to show, and it was clear the seven-year-old smart speaker would need some help to get the job done.

The solution [André] came up with is not unlike how the device worked originally: the Echo performs wake word detection locally, but then offloads the actual speech processing to a more powerful computer. Except in this case, the other computer is on the same network and not hidden away in Amazon’s cloud. The Porcupine project provides the wake word detection, speech samples are broken down into actionable intents with voice2json, and the responses are delivered by the venerable eSpeak speech synthesizer.
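To make the division of labor concrete, here’s a rough sketch of what the helper machine’s side might look like, leaning on voice2json’s documented command-line pipeline and eSpeak; this is an architectural illustration, not [André]’s code:

```cpp
// Helper-box side of the pipeline: turn a shipped-over WAV into an
// intent with voice2json, act on it, and speak a reply with eSpeak.
// Shelling out keeps the sketch short; a real service would use sockets.
#include <cstdlib>
#include <string>

void handleUtterance(const std::string &wavPath) {
  // voice2json's documented CLI: transcribe the WAV, then extract the
  // intent as JSON (written to a temp file here for simplicity)
  std::string cmd = "voice2json transcribe-wav < " + wavPath +
                    " | voice2json recognize-intent > /tmp/intent.json";
  std::system(cmd.c_str());

  // ...parse /tmp/intent.json and act on the intent here...

  // synthesize a spoken response to send back to the Echo
  std::system("espeak -w /tmp/reply.wav \"Okay, done\"");
}
```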

As you can see in the video below, the overall experience is pretty similar to stock, complete with fancy LED ring action. In fact, since Porcupine allows for multiple wake words, you could even argue that the usability has been improved. While [André] says adding support for Mycroft would be a logical expansion, his immediate goal is to get everything documented and available on the project’s GitLab repository so others can start experimenting for themselves.


Racing Game Crashes Into Its Next Life As A Sound Bender

They say the best things in life are free, but we would loudly argue that a dollar can go a long way, too. It all depends on what you do with it. When [lonesoulsurfer] saw this busted-up handheld racing game at the junk store, he fell in love with the lines of the case and gladly forked over a buck in order to give it a new life as a wicked little sound-bending machine with dancing LEDs.

Here’s how it works: [lonesoulsurfer] records a few seconds of whatever into the mic with the looping function switched off, then turns it back on to start the fun. He can vary the pitch with the speed controller pot, or add in some echo and reverb. Once the sound is dialed in, he works the pause button on the left to make melodies by stopping and restarting the loop, or just pausing it momentarily depending on the switch setting.

The electronics are a mashup of modules mixed with a custom PCB that combines the recording module with an LM386 amplifier and holds the coolest part of this build — those LEDs that dance to the music behind the toy’s original lenticular screen. Like most of [lonesoulsurfer]’s builds, it’s powered by an old cell phone battery that’s buck-boosted to 5 V. Check out the build and bleep-bloop video after the break.

Lenticular lenses are all kinds of fun. Get one that’s big enough, and you can use it to disappear for a while.


“Alexa, Stop Listening To Me Or I’ll Cut Your Ears Off”

Since we’ve started inviting them into our homes, many of us have begun casting a wary eye at our smart speakers. What exactly are they doing with the constant stream of audio we generate, some of it coming from the most intimate and private of moments? Sure, the big companies behind these devices claim they’re being good, but do any of us actually buy that?

It seems like the most prudent path is to not have one of these devices, but they are pretty useful tools. So this hardware mute switch for an Amazon Echo represents a middle ground between digital Luddism and ignoring the possible privacy risks of smart speakers. Yes, these devices all have software options for disabling their microphone arrays, but as [Andrew Peters] relates it, his concern is mainly to thwart exotic attacks on smart speakers, some of which, like laser-induced photoacoustic attacks, we’ve previously discussed. And for that job, only a hardware-level disconnect of the microphones will do.

To achieve this, [Andrew] embedded a Seeeduino Xiao inside his Echo Dot Gen 2. The tiny microcontroller grounds the common I²S data line shared by the seven (!) microphones in the smart speaker, effectively disabling them. Enabling and disabling the mics is done via the existing Dot keys, with feedback provided by tones sent through the Dot speaker. It’s a really slick mod, and the amount of documentation [Andrew] did while researching this is impressive. The video below and the accompanying GitHub repo should prove invaluable to other smart speaker hackers.
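For a sense of scale, the whole job fits in a few lines of firmware. This is a speculative reconstruction based on the description above; the pin assignments, the transistor pulling the I²S line low, and the feedback tones are all assumptions rather than [Andrew]’s actual code:

```cpp
// Toggle the mics with an existing Dot button: drive a transistor that
// shorts the shared I2S data line to ground, and beep to confirm.
const int MUTE_PIN   = 2;   // gate of transistor on the I2S data line
const int BUTTON_PIN = 3;   // repurposed Echo Dot button
const int TONE_PIN   = 4;   // injected into the Dot's speaker path
bool muted = false;

void setup() {
  pinMode(MUTE_PIN, OUTPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  digitalWrite(MUTE_PIN, LOW);                // start with the mics live
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {       // button pressed
    muted = !muted;
    digitalWrite(MUTE_PIN, muted ? HIGH : LOW); // ground data line to mute
    tone(TONE_PIN, muted ? 440 : 880, 200);     // low beep = muted
    delay(300);                               // crude debounce
  }
}
```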


Is Your Echo Flex Listening?

We are always surprised that Amazon or Google doesn’t employ Kelsey Grammer — TV’s Frasier — as a spokesman for their smart home devices. After all, his catchphrase was, “I’m listening…” Maybe they don’t want to remind you that the device could, theoretically, be sending everything you say to them or a nefarious hacker or government agency. Sure, there’s a mute button and it lights up a red LED.

But if you are truly paranoid, that’s not enough. After all, the same people who want to eavesdrop on you would be happy to fake a red light. [Electronupdate] had the same thought and decided to answer the question: does the mute button really mute your microphone? The answer required not only some case opening and analysis, but even some IC decapsulation.

We were impressed with the depth of the analysis. The tiny SMD parts are marked confusingly, and if you are really paranoid, you don’t believe them anyway. But looking at the actual circuit die is pretty unambiguous. The parts in question turned out to be a Schmitt trigger, a flip-flop, and a NAND gate.
