Making A Tape Echo The Traditional Way

[Juan Nicola] has taken inspiration from the musician hackers of old and repurposed a reel-to-reel tape recorder with a built-in valve amplifier into a tape echo for his guitar (video in Spanish).

The principle is to record the sound of the guitar onto a piece of moving magnetic tape, then to read it back again a short time later. This delayed signal is mixed with the live input and re-recorded onto the tape. The effect is heard as an echo, and this approach was very popular before digital effects became readily available.
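If it helps to see the loop written out, here's a minimal numpy sketch of the same trick in the digital domain, with the head spacing and feedback amount picked purely for illustration:

```python
import numpy as np

def tape_echo(dry, sample_rate=44100, delay_s=0.25, feedback=0.5, mix=0.5):
    """Simulate the tape loop: the read head picks up what the record
    head wrote delay_s seconds earlier, and that delayed signal is
    mixed with the live input and written back onto the tape."""
    delay = int(delay_s * sample_rate)     # head spacing, in samples
    tape = np.zeros(len(dry) + delay)      # the moving magnetic tape
    out = np.zeros_like(dry)
    for n in range(len(dry)):
        echo = tape[n]                               # read head
        tape[n + delay] = dry[n] + feedback * echo   # record head: live + echo
        out[n] = (1 - mix) * dry[n] + mix * echo
    return out

# A single click as input makes the train of decaying repeats easy to spot
dry = np.zeros(44100, dtype=float)
dry[0] = 1.0
wet = tape_echo(dry)
```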

[Juan] installed a new read head onto his Grundig TK40 and managed to find a suitable mechanical arrangement to keep it all in place. He has since updated the project by moving to a tape loop, allowing infinite play time by reusing the same piece of tape over and over.

Turning tape machines into echo effects is not a new idea, and we’ve shown a few of them over the years, but every one is slightly different!

Both versions are shown after the break.  YouTube closed-caption auto-translate might come in handy here for non-Spanish speakers.

Continue reading “Making A Tape Echo The Traditional Way”

Sound And Light Play Off Acrylic And Wire In This Engaging Circuit Sculpture

It’s no secret that we really like circuit sculptures around here, and we never tire of seeing the creative ways people come up with to celebrate the components used to make a project, rather than locking them away in an enclosure. And a circuit sculpture that incorporates sound and light in its design is always a real treat to discover.

Called “cwymriad” by its designer, [Eirik Brandal], this sound sculpture incorporates all kinds of beautiful elements. The framework is made from thick pieces of acrylic, set at interesting angles to each other and in contrasting colors. The sound-generating circuit, which uses square wave outputs from an ESP32 to provide carrier and modulation signals for a dual ring modulator, is built on a framework of tinned wires. The sounds the sculpture makes have a lovely resonance to them, like random bells and chimes that fade and mix together. There’s also a matrix of white LEDs that form a sort of digital oscilloscope that displays shifting waveforms in time with the music.
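For anyone wanting to play along at home, ring modulation itself is easy to sketch: multiply two signals together and you get sum and difference frequencies instead of either input pitch. Here's a hedged numpy version of a single ring modulator stage, with square waves standing in for the ESP32's outputs; the frequencies are made up, since [Eirik] doesn't list the actual values:

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs   # one second of samples

# Square waves standing in for two of the ESP32's outputs
# (illustrative frequencies, not values from the sculpture)
carrier = np.sign(np.sin(2 * np.pi * 440 * t))
modulator = np.sign(np.sin(2 * np.pi * 277 * t))

# One ring modulator stage: the product contains sum and difference
# frequencies of the inputs' harmonics (440 +/- 277 Hz and friends),
# which is where the clangorous, bell-like character comes from
ring = carrier * modulator
```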

While we like the way this looks and sounds, the real bonus here is the details of construction in the video below. [Eirik]’s careful craftsmanship working with multiple materials is evident throughout; we were especially impressed by the work needed to drill holes for the LED matrix, any one of which slightly out of place would have been painfully obvious in the finished product.

This is far from [Eirik]’s first appearance on these pages. His vacuum tube and silicon “ioalieia” was featured just a few weeks back, and “ddrysfeöd” used the acrylic parts as light pipes in a lovely way.

Continue reading “Sound And Light Play Off Acrylic And Wire In This Engaging Circuit Sculpture”

Google Sheet showing the wins and losses of a sports team, with data automated by IFTTT, Alexa, and Particle.

An Overly Complicated Method Of Tracking Your Favorite Sports Team

Much of the world appears to revolve around sports, and sports tracking is a pretty big business. So how do people keep up with their favorite team? Well, [Jackson] and [Mourad] decided to devise a custom IoT solution.

Their system is a bit convoluted, so bear with us. First, they tell Alexa whether the team won or lost that week. Alexa then sends that information to IFTTT, where two different Particle Argon boards are constantly polling for the result to decide how to respond. One Particle lights up an LED, green for a win and red for a loss, while the other displays the result on an LCD screen. But this is where things get tricky: one of the Particle boards then signals back to IFTTT, telling it to tally the number of wins and losses. This seems a bit roundabout, since the whole chain started with IFTTT in the first place. Regardless, they seem happy with the result, and we're sure they learned something in the process.
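Curious to wire up something similar? The IFTTT side of a hop like this boils down to firing a Maker Webhooks event over HTTP. Here's a hedged Python sketch of that step; the key and event name are placeholders, not values from this build:

```python
import requests

IFTTT_KEY = "your-webhooks-key"   # placeholder: your own Maker Webhooks key
EVENT = "team_result"             # placeholder event name

def report_result(won: bool) -> None:
    """Fire an IFTTT Maker Webhooks event carrying the week's result,
    which an applet can then route to a Particle, a Google Sheet, etc."""
    url = f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{IFTTT_KEY}"
    requests.post(url, json={"value1": "win" if won else "loss"}, timeout=10)

report_result(True)   # a green-LED kind of week
```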

This project might not fulfill any functional need, given that Alexa already knows everything about our lives and you could just ask her how your favorite team is doing whenever you want. But hey, we're all about learning by doing here at Hackaday, and we're all guilty of building useless projects here and there just because we can. In any case, their project could serve as a good intro to integrating your Particle with IFTTT or Alexa, since there appears to be quite a bit of probably unnecessary handshaking going on here.

Continue reading “An Overly Complicated Method Of Tracking Your Favorite Sports Team”

Dub Siren, a 555-powered synthesizer

Classic Chip Line-Up Powers This Fun Dub Siren Synth

There’s a certain elite set of chips that fall into the “cold, dead hands” category, and they tend to be parts that have proven their worth over decades, not years. Chief among these is the ubiquitous 555 timer chip, which nearly 50 years after its release still finds its way into the strangest places. Add in other silicon stalwarts like the 741 op-amp and the LM386 audio amp, and you’ve got a Hall of Fame lineup for almost any project.

That’s exactly the complement of chips that powers this fun little dub siren. As [lonesoulsurfer] explains, dub sirens started out as actual sirens from police cars and the like that were used as part of musical performances. The ear-splitting versions were eventually replaced with sampled or synthesized siren effects for recording studio and DJ use, which leads us to the current project. The video below starts with a demo, and it’s hard to believe that the diversity of sounds this box produces comes from just a pair of 555s coupled by a 741 buffer. Five pots on the main PCB control the effects, while a second commercial reverb module — modified to support echo effects too — adds depth and presence. A built-in speaker and a nice-looking wood enclosure complete the build, which honestly sounds better than any 555-based synth has a right to.
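If you want a feel for what those pots are doing, the datasheet equations for a 555 in astable mode are all you need. Here's a quick Python sketch; the component values are illustrative, not from [lonesoulsurfer]'s schematic:

```python
def astable_555(r1, r2, c):
    """Frequency (Hz) and duty cycle of a 555 wired as an astable
    oscillator, using the standard datasheet approximation."""
    freq = 1.44 / ((r1 + 2 * r2) * c)
    duty = (r1 + r2) / (r1 + 2 * r2)   # fraction of each period spent HIGH
    return freq, duty

# A slow 555 sweeping the control voltage of a faster one is the classic
# two-chip siren recipe: one wobble oscillator, one audio oscillator
print(astable_555(10e3, 100e3, 1e-6))   # ~6.9 Hz wobble
print(astable_555(1e3, 10e3, 10e-9))    # ~6.9 kHz tone
```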

Interested in more about the chips behind this build? We’ve talked about the 555 and how it came to be, taken a look inside the 741, and gotten a lesson in LM386 loyalty.

Continue reading “Classic Chip Line-Up Powers This Fun Dub Siren Synth”

Hackaday Links: June 6, 2021

There are a bunch of newly minted millionaires this week, after it was announced that Stack Overflow would be acquired for $1.8 billion by European tech investment firm Prosus. While not exactly a household name, Prosus is a big player in the Chinese tech scene, where it has about a 30% stake in Chinese internet company Tencent. They trimmed their holdings in the company a bit recently, raising $15 billion in cash, which we assume will be used to fund the SO purchase. As with all such changes, there's considerable angst out in the community about how this could impact everyone's favorite coding help site. The SO leadership are all adamant that nothing will change, but only time will tell.

Continue reading “Hackaday Links: June 6, 2021”

Speech Recognition On An Arduino Nano?

Like most of us, [Peter] had a bit of extra time on his hands during quarantine and decided to take a look back at speech recognition technology in the 1970s. Quickly, he started thinking to himself, “Hmm…I wonder if I could do this with an Arduino Nano?” We’ve all probably had similar thoughts, but [Peter] really put his theory to the test.

The hardware itself is pretty straightforward. There is an Arduino Nano to run the speech recognition algorithm and a MAX9814 microphone amplifier to capture the voice commands. However, the beauty of [Peter]'s approach lies in his software implementation. There's a bit of interplay between a custom PC program he wrote and the Arduino Nano: the learning is done on the PC, while recognition runs in real-time on the Nano, a typical split for machine learning deployed on a microcontroller. To capture sample audio commands, or utterances, [Peter] first had to optimize the Nano's ADC so he could get sufficient sample rates for speech processing. With a bit of low-level programming, he achieved a sample rate of 9 ksps, which is plenty fast for speech.

To analyze the utterances, he first divided each sample utterance into 50 ms segments. Think of dividing a single spoken word into its syllables, like analyzing the "se-" in "seven" separately from the "-ven". 50 ms might be too long or too short to capture each syllable cleanly, but hopefully that gives you a good mental picture of what [Peter]'s program is doing. He then calculated the energy in five different frequency bands for every segment of every utterance. Normally that's done with a Fourier transform, but the Nano doesn't have enough processing power to compute one in real-time, so [Peter] took a different approach: he implemented five digital bandpass filters instead, which let him compute the energy of the signal in each band far more cheaply.
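To make that front end concrete, here's a rough Python equivalent as it might run during the PC-side learning phase. The band edges are our guesses; [Peter] doesn't spell out the exact frequencies he used:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 9000             # roughly [Peter]'s 9 ksps sample rate
SEG = int(0.05 * FS)  # 50 ms segments -> 450 samples each

# Five bands spanning the speech range (illustrative edges)
BANDS = [(100, 400), (400, 800), (800, 1500), (1500, 2500), (2500, 4000)]

def band_energies(utterance):
    """Energy of each frequency band in each 50 ms segment: the feature
    matrix that gets compared against the stored templates."""
    n_seg = len(utterance) // SEG
    feats = np.zeros((n_seg, len(BANDS)))
    for b, (lo, hi) in enumerate(BANDS):
        sos = butter(2, [lo, hi], btype="bandpass", fs=FS, output="sos")
        filtered = sosfilt(sos, utterance)
        for s in range(n_seg):
            seg = filtered[s * SEG:(s + 1) * SEG]
            feats[s, b] = np.sum(seg**2)   # energy = sum of squared samples
    return feats
```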

The band energies for every segment are then sent to a PC, where a custom-written program creates "templates" from the sample utterances. The crux of the algorithm is comparing, band by band and segment by segment, how closely a new utterance's energies match each template. The PC program produces a .h file that can be compiled directly into the Nano firmware. [Peter] uses the example of recognizing the numbers 0-9, but you could just as well swap in commands like "start" or "stop".
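The comparison itself can be as simple as a squared distance between those feature matrices. Here's a hedged sketch of the matching step, since the exact scoring encoded in the generated .h file isn't detailed:

```python
import numpy as np

def best_match(feats, templates):
    """Return the label of the template whose band energies sit closest
    to the utterance's, by plain summed squared difference."""
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        n = min(len(feats), len(template))   # align segment counts
        dist = np.sum((feats[:n] - template[:n]) ** 2)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# templates = {"zero": band_energies(sample0), "one": band_energies(sample1), ...}
```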

[Peter] admits that you can't implement the kind of speech recognition on an Arduino Nano that we've come to expect from those covert listening devices, but he mentions that small, hands-free devices like a head-mounted multimeter could benefit from single-word or single-phrase voice commands. And maybe it could put your mind at ease knowing everything you say isn't immediately getting beamed into the cloud and given to our AI overlords. Or maybe we're all starting to get used to this. Whatever your position on the current state of AI, hopefully you've gained some inspiration for your next project.

Amazon Echo Gets Open Source Brain Transplant

There’s little debate that Amazon’s Alexa ecosystem makes it easy to add voice control to your smart home, but not everyone is thrilled with how it works. The fact that all of your commands are bounced off of Amazon’s servers instead of staying internal to the network is an absolute no-go for the more privacy-minded among us, and honestly, it’s hard to blame them. The whole thing is pretty creepy when you think about it.

Which is precisely why [André Hentschel] decided to look into replacing the firmware on his Amazon Echo with an open source alternative. The Linux-powered first-generation Echo had been rooted years before thanks to the diagnostic port on the bottom of the device, and there were even a few firmware images floating around out there that he could poke around in. In theory, all he had to do was remove anything that called back to the Amazon servers and replace the proprietary bits with comparable free software libraries and tools.

Tapping into the Echo's debug port.

Of course, it ended up being a little trickier than that. The original Echo is running on a 2.6.x series Linux kernel, which, even for a device released in 2014, is painfully outdated. With its similarly archaic version of glibc, newer Linux software would refuse to run. [André] found that building an up-to-date filesystem image for the Echo wasn't a problem, but getting the niche device's hardware working on a more modern kernel was another story.

He eventually got the microphone array working, but not the onboard digital signal processor (DSP). Without the DSP, the age of the Echo's hardware really started to show, and it was clear the seven-year-old smart speaker would need some help to get the job done.

The solution [André] came up with is not unlike how the device worked originally: the Echo performs wake word detection locally, but then offloads the actual speech processing to a more powerful computer. Except in this case, the other computer is on the same network and not hidden away in Amazon’s cloud. The Porcupine project provides the wake word detection, speech samples are broken down into actionable intents with voice2json, and the responses are delivered by the venerable eSpeak speech synthesizer.
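To give a flavor of how those pieces snap together, here's a hedged Python sketch of the same local pipeline, built from the tools' public interfaces rather than [André]'s actual glue code:

```python
import json
import subprocess

import pvporcupine              # Porcupine wake word engine
from pvrecorder import PvRecorder

# Note: newer Porcupine SDKs also require an access_key argument
porcupine = pvporcupine.create(keywords=["porcupine"])
recorder = PvRecorder(frame_length=porcupine.frame_length)
recorder.start()

while True:
    if porcupine.process(recorder.read()) >= 0:   # wake word detected
        recorder.stop()                           # hand the mic to voice2json
        # Record until silence, then transcribe and extract an intent,
        # all locally (assumes a trained voice2json profile)
        subprocess.run("voice2json record-command > /tmp/cmd.wav", shell=True)
        result = subprocess.run(
            "voice2json transcribe-wav /tmp/cmd.wav | voice2json recognize-intent",
            shell=True, capture_output=True, text=True)
        intent = json.loads(result.stdout)
        name = intent.get("intent", {}).get("name", "nothing")
        subprocess.run(["espeak", f"I heard the {name} intent"])  # spoken reply
        recorder.start()
```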

As you can see in the video below, the overall experience is pretty similar to stock, complete with fancy LED ring action. In fact, since Porcupine allows for multiple wake words, you could even argue that the usability has been improved. While [André] says adding support for Mycroft would be a logical expansion, his immediate goal is to get everything documented and available on the project's GitLab repository so others can start experimenting for themselves.

Continue reading “Amazon Echo Gets Open Source Brain Transplant”