DSP Spreadsheet: The Goertzel Algorithm Is Fourier’s Simpler Cousin

You probably have at least a nodding familiarity with the Fourier transform, a mathematical process for transforming a time-domain signal into a frequency-domain signal. Computers can't evaluate the continuous transform directly, so in practice we use the discrete version, which works on a series of measurements taken at regular intervals. If you need to understand the entire frequency spectrum of a signal or you want to filter portions of the signal, this is definitely the tool for the job. However, sometimes it is more than you need.

For example, consider tuning a guitar string. You only need to know if one frequency is present or if it isn’t. If you are decoding TouchTones, you only need to know if two of eight frequencies are present. You don’t care about anything else.

A Fourier transform can do either of those jobs. But if you go that route, you are going to do a lot of math to compute things you don't care about just so you can pick out the one or two pieces you do. That's the idea behind the Goertzel algorithm. It is essentially a discrete Fourier transform stripped down to compute just one frequency bin of interest. The math is much simpler, and you can usually implement it faster and smaller than a full transform, even on small CPUs.
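To make that concrete, here is a minimal Python sketch of the Goertzel algorithm. It isn't code from any particular project, just an illustration of how little work is involved: each sample costs one multiply and two adds, and the power in the chosen frequency bin falls out at the end.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Relative power of one frequency in a block of samples."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)

    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2      # the entire per-sample workload
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the single DFT bin
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Is a 440 Hz tone present in a 205-sample block at 8 kHz?
fs, n = 8000, 205
tone = [math.sin(2 * math.pi * 440 * i / fs) for i in range(n)]
print(goertzel_power(tone, fs, 440))   # large
print(goertzel_power(tone, fs, 1209))  # near zero
```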

Continue reading “DSP Spreadsheet: The Goertzel Algorithm Is Fourier’s Simpler Cousin”

Raspberry Pi Takes Control Of Ham Radio

Today’s ham radio gear often has a facility for remote control, but it most often talks to a computer, not the operator. Hambone, on the other hand, acts like a ham radio robot, decoding TouchTone digits and taking action — for example, keying the radio and reading off the weather — in response to the commands received.

The code is in Python and uses numpy’s fast Fourier transform to identify digits. We’d be interested to compare the performance of that approach against a Goertzel algorithm probing specifically for the eight DTMF tones: four row tones and four column tones. On the other hand, the FFT is handy and clearly works fast enough for this application.
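For the curious, the FFT approach boils down to something like this numpy sketch. This is our illustration, not Hambone's actual code: grab a block of audio, look at the spectrum magnitude near each of the eight DTMF frequencies, and pick the strongest row and column.

```python
import numpy as np

ROWS = [697, 770, 852, 941]      # Hz, DTMF row tones
COLS = [1209, 1336, 1477, 1633]  # Hz, DTMF column tones
KEYS = [["1", "2", "3", "A"],
        ["4", "5", "6", "B"],
        ["7", "8", "9", "C"],
        ["*", "0", "#", "D"]]

def decode_digit(samples, fs):
    """Guess which DTMF key a block of audio contains."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / fs)
    mag = lambda f: spectrum[np.argmin(np.abs(freqs - f))]  # nearest bin
    row = max(range(4), key=lambda i: mag(ROWS[i]))
    col = max(range(4), key=lambda i: mag(COLS[i]))
    return KEYS[row][col]

# A synthetic "5" is the sum of the 770 Hz row and 1336 Hz column tones
fs = 8000
t = np.arange(fs // 10) / fs  # 100 ms block
print(decode_digit(np.sin(2 * np.pi * 770 * t) + np.sin(2 * np.pi * 1336 * t), fs))
```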

Continue reading “Raspberry Pi Takes Control Of Ham Radio”

Name That Unknown RF Signal With A Little FFT Magic

Time was, the amateur radio bands were an aurally predictable place. Spinning the dial up and down the bands, one heard familiar sounds – the staccato of Morse, the [Donald Duck] of sideband voice transmissions, and the occasional flute-like warble of radioteletype signals. Now, the ham bands are full of exotic signals carrying all manner of digital data, each one with a unique sound and unique demodulation needs. What’s a ham to do?

Help is on the way. [José Carlos Rueda] has made progress toward automatically classifying unknown signals by modifying a Shazam-like app. Shazam is a popular smartphone app that listens to a few seconds of a song, creates an audio fingerprint of it, and searches a massive database of songs for a match. [Rueda] used a homebrew version of the app to search an SQLite database of audio fingerprints populated not with a playlist of popular music, but with samples of every known signal type in the Signal Identification Wiki. The database contains hashes of an FFT of each sample, which can be searched quickly. With a five to ten second sample of a signal, captured either live over a microphone or from a recording, he is able to identify the signal automatically.
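In rough terms, a Shazam-style fingerprint pairs up spectral peaks and hashes them, so a match survives noise and missing chunks. This simplified numpy sketch is our own illustration of the idea rather than [Rueda]'s code, which keeps its hashes in SQLite instead of a Python set.

```python
import numpy as np

def fingerprint(samples, frame=1024, hop=512, fan_out=5):
    """Hash pairs of spectral peaks, Shazam-style (simplified)."""
    window = np.hanning(frame)
    peaks = []
    for i, start in enumerate(range(0, len(samples) - frame, hop)):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame] * window))
        peaks.append((i, int(np.argmax(spectrum))))  # strongest bin per frame
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            hashes.add(hash((f1, f2, t2 - t1)))      # anchor/target pair
    return hashes

# Matching is then just set overlap against each known signal's hashes:
# best = max(database, key=lambda name: len(database[name] & fingerprint(clip)))
```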

Whether it be the weird, dissonant wail of PSK-31 or the angry buzzing of PACTOR, the goings-on across the bands no longer have to remain a mystery. We really like the idea here, and wonder if it can be expanded upon to visually decode signals based on their waterfall signatures using TensorFlow. There are some waterfall examples in [Danie Conradie]’s excellent article on RF modulation that could get you started.

[via RTL-SDR.com]

Analyzing CNC Tool Chatter With Audacity

When you’re operating a machine that’s powerful enough to tear a solid metal block to shards, it pays to be attentive to details. The angular momentum of the spindle of a modern CNC machine can be trouble if it gets unleashed the wrong way, which is why generations of machinists have developed an ear for the telltale sign of impending doom: chatter.

To help develop that ear, [Zachary Tong] did a spectral analysis of the sounds of his new CNC machine during its “first chip” outing. The benchtop machine is no slouch – an Avid Pro 2436 with a 3 hp S30C tool-changing spindle. But like any benchtop machine, it lacks the sheer mass needed to reduce vibration, and tool chatter can be a problem.

The analysis begins at about the 5:13 mark in the video below, where [Zach] fed the soundtrack of his video into Audacity. Switching from waveform to spectrogram mode, he was able to identify a strong signal at about 5,000 Hz, corresponding to the spindle coming up to speed. The white noise of the mist cooling system was clearly visible too, as were harmonic vibrations up and down the spectrum. Most interesting, though, was the slight dip in frequency during the cut, indicating loading on the spindle. [Zach] then analyzed the data from the cut in the frequency domain and found the expected spindle harmonics, as well as the harmonics from the three flutes on the tool. Mixed in among these were spikes indicating chatter – nothing major, but still enough to measure.
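If you'd rather script the same analysis than scrub through Audacity, a few lines of Python with scipy will get you the same spectrogram. This is just a sketch with a made-up filename, but tracking the strongest frequency over time is exactly where that dip under load would show up.

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

# A mono recording of the cut (hypothetical filename)
fs, audio = wavfile.read("first_chip.wav")

# Roughly what Audacity's spectrogram view computes
f, t, Sxx = signal.spectrogram(audio.astype(float), fs, nperseg=4096)

# Strongest component in each time slice; a steady line near the spindle
# frequency that sags during the cut indicates loading on the spindle
strongest = f[np.argmax(Sxx, axis=0)]
print(strongest.min(), strongest.max())
```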

Audacity has turned out to be an incredibly useful tool with a broad range of applications. Whether it be finding bats, dumping ROMs, detecting lightning strikes, or cloning remote controls, Audacity is often the hacker’s tool of choice.

Continue reading “Analyzing CNC Tool Chatter With Audacity”

Additive, Multi-Voice Synth Preserves Sounds, Too

For his final project in [Bruce Land]’s microcontroller design class, [Mark] set out to make a decently-sized synth that sounds good. We think you’ll agree that he succeeded in spades. Don’t let those tiny buttons fool you, because it doesn’t sound like a toy.

Why does it sound so good? One of the reasons is that the instrument samples are made using additive synthesis, which essentially stacks harmonic overtones on top of the fundamental frequency of each note. This allows synthesizers to better mimic the timbre of natural, acoustic sounds. For each note [Mark] plays, you’re hearing a blend of four frequencies constructed from lookup tables. These frequencies are shaped by an envelope function that improves the sound even further.
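As a quick illustration of the technique, a few lines of numpy can build a note the same way. The harmonic amplitudes and envelope below are invented for the example; [Mark]'s synth does the equivalent with lookup tables on the PIC32.

```python
import numpy as np

FS = 44100  # sample rate in Hz

def additive_note(freq, duration, harmonics=(1.0, 0.5, 0.25, 0.125)):
    """One note built from four harmonics plus a simple decay envelope."""
    t = np.arange(int(FS * duration)) / FS
    wave = sum(a * np.sin(2 * np.pi * freq * (n + 1) * t)
               for n, a in enumerate(harmonics))
    envelope = np.exp(-3.0 * t)  # rough pluck-style amplitude shaping
    return wave * envelope / sum(harmonics)

middle_c = additive_note(261.63, 1.0)  # ready to write to a WAV or a DAC
```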

Between the sound and the features, this is quite an impressive synth. It can play polyphonically in piano, organ, or plucked string mode through a range of octaves. A PIC32 runs the synthesizer itself, and a pair of helper PIC32s can be used to record songs to be played over. So [Mark] could record point and counterpoint separately and play them back together, or use the helper PICs to fine-tune his three-part harmony. We’ve got this thing plugged in and waiting for you after the break.

If PICs aren’t what you normally choose, here’s an FPGA synth.

Continue reading “Additive, Multi-Voice Synth Preserves Sounds, Too”

Turning Sounds From A Flute Into Sheet Music

Composing music can be quite difficult – after all, you have to keep in mind all of the elements of music theory, from the time signature and key signature to the correct length of every note. A team of students from Cornell University’s Designing with Microcontrollers class developed a solution to this problem: a system that transcribes sounds from a flute into sheet music.

The project doesn’t simply detect the notes played – it converts the raw audio into a standardized music score, complete with accurate note timings and beats per minute. Before transcribing the music, some audio processing was necessary. The team chose a Sallen-Key filter to condition the raw audio input, a topology favored for its complex conjugate poles. They then used a fast Fourier transform (FFT) to determine the frequency of each input note, converting the signal from the time domain to the frequency domain.

The algorithm uses the ADC on the microcontroller to sample the microphone and generate the input signal. The FFT takes the real and imaginary components of that sampled signal and outputs pairs of real and imaginary amplitude components, one for each frequency bin, evenly spaced from 0 to the Nyquist rate (half the sampling rate). The spacing of these bins and the bin with the largest amplitude are used to recover the actual frequency and map it to a MIDI note.
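That last step, from FFT output to MIDI note, works out to just a few lines. This Python version is our own illustration rather than the team's PIC32 code:

```python
import numpy as np

def note_from_block(samples, fs):
    """Largest FFT bin -> frequency -> nearest MIDI note number."""
    spectrum = np.abs(np.fft.rfft(samples))
    peak_bin = np.argmax(spectrum[1:]) + 1     # skip the DC bin
    freq = peak_bin * fs / len(samples)        # bins are fs/N apart
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))  # A4 = note 69
    return freq, midi

fs = 8000
t = np.arange(1024) / fs
print(note_from_block(np.sin(2 * np.pi * 440 * t), fs))  # ~440 Hz, note 69
```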

The system uses a PIC32 for the logic. The microphone amplification circuitry uses a non-inverting op-amp with a gain of 50 to boost the microphone’s output from 15 mV to 750 mV for use by the microcontroller’s ADC. The signal is then sent through the anti-aliasing Sallen-Key filter, with a pole at 2.5 kHz and a Q of 1. That pole frequency was chosen because the FFT samples at 8 kHz, and 2.5 kHz sits above the range of a flute while still below the 4 kHz Nyquist limit. As for the filters, only the low-pass filter was implemented in hardware. While a band-pass filter could have been implemented in hardware as well, the team decided on a cleaner software approach, sketched below.
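For reference, the software band-pass might look something like this on a desktop; the corner frequencies below are our guesses bracketing a flute's range, not the team's values.

```python
import numpy as np
from scipy import signal

fs = 8000                              # sample rate from the write-up
t = np.arange(fs) / fs
samples = np.sin(2 * np.pi * 440 * t)  # stand-in for microphone input

# Second-order Butterworth band-pass; the 200-2500 Hz corners are assumptions
b, a = signal.butter(2, [200, 2500], btype="bandpass", fs=fs)
filtered = signal.lfilter(b, a, samples)
```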

The project is well-documented on the team’s project page, and it’s certainly worth checking out for more detailed discussions of the keypad controls and the software side of the audio processing. If you want to learn more about the FFT, check out this 2016 Hackaday Prize entry for an FFT spectrum analyzer.

Continue reading “Turning Sounds From A Flute Into Sheet Music”

Sara Adkins Is Jamming Out With Machines

Asking machines to make music by themselves is kind of a strange notion. They’re machines, after all. They don’t feel happy or hurt, and as far as we know, they don’t long for the affections of other machines. Humans like to think of music as being a strictly human thing, a passionate undertaking so nuanced and emotion-based that a machine could never begin to understand the feeling that goes into the process of making music, or even the simple enjoyment of it.

The idea of humans and machines having a jam session together is even stranger. But oddly enough, the principles of the jam session may be exactly what machines need to begin to understand musical expression. As Sara Adkins explains in her enlightening 2019 Hackaday Superconference talk, Creating with the Machine, humans and machines have a lot to learn from each other.

To a human musician, a machine’s speed and accuracy are enviable. So is its ability to make instant transitions between notes and chords. Humans are slow to learn these transitions and have to practice going back and forth repeatedly to build muscle memory. If the machine were capable of such feelings, it would likely envy the human’s passionate performance and musical expression.

Continue reading “Sara Adkins Is Jamming Out With Machines”