Analog Synth, But In Cello Form

For one reason or another, electronic synthesizers are mostly built around the keyboard. Sure, you’ve got the theremin and other oddities, but VCAs and VCFs have been the domain of keyboard-style instruments for decades. That’s a shame, because the user interface of an instrument has a great deal to do with its repertoire. Case in point: [jaromir]’s entry for the Hackaday Prize. It’s an electronic analog synth in cello form. There’s no reason something like this couldn’t have been built in the 60s, and we’re shocked it wasn’t.

Instead of an electrified cello with a piezo on the bridge or some sort of magnetic pickup, this cello is a purely electronic instrument. The fingerboard is metal, and the strings are made of kanthal wire, the same wire that goes into wire-wound resistors. As a note is fingered, the length of the string is ‘measured’ as a value of resistance and used to control an oscillator. Yes, it’s weird, but we’re wondering why we haven’t seen anything like this before.
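The actual instrument does this in the analog domain, but the relationship is easy to see in microcontroller form. Here is a minimal, purely illustrative Arduino-style sketch, assuming the string forms the lower leg of a voltage divider read by an ADC; the pin numbers, resistor value, and frequency scaling are all made up, not [jaromir]’s circuit.

```cpp
// Illustration only: one resistive string read as a voltage divider,
// mapped to a square wave on a speaker pin with tone().
// All pins and component values here are assumptions.
const int STRING_PIN  = A0;      // junction of fixed resistor and kanthal string
const int SPEAKER_PIN = 9;       // piezo or amplifier input
const float R_FIXED   = 1000.0;  // fixed divider resistor to Vcc, ohms (assumed)

void setup() {
  pinMode(SPEAKER_PIN, OUTPUT);
}

void loop() {
  int adc = analogRead(STRING_PIN);                // 0..1023
  float v = adc / 1023.0;                          // fraction of Vcc at the junction
  float rString = R_FIXED * v / (1.0 - v + 1e-6);  // solve the divider for the string
  // Shorter fingered length = lower resistance = higher pitch, roughly cello-like
  float freq = constrain(2000.0 / (rString / 100.0 + 1.0), 65.0, 1000.0);
  tone(SPEAKER_PIN, (unsigned int)freq);
  delay(5);
}
```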

How does this cello sound? Remarkably like a cello. [jaromir] admits there are a few problems with the build — the fingerboard is too wide, and the fingerboard should probably be curved. That’s really an issue with the cellist, not the instrument itself, though. Seeing as how [jaromir] has never even held a cello, we’re calling this one a success. You can check out a video of this instrument playing Cello Suite No. 1 below. It actually does sound good, and there’s a lot of promise here.

Continue reading “Analog Synth, But In Cello Form”

The Portable, Digital, Visual Theremin

The theremin is, for some reason, what people think of first when they think of electronic musical instruments. Maybe that’s because it was arguably the first purely electronic musical instrument, or because there’s no mechanical analog to something that makes sound simply by waving your hand over it. This project takes that idea and cranks it up to eleven. It’s a portable synthesizer controlled by IR reflection: just wave your hand in front of it, and where your hand sits sets the pitch you hear.

The audio hardware for this synth is, like so many winners of the Musical Instrument Challenge in this year’s Hackaday Prize, based on the Teensy and its incredible Audio library. The code consists of two oscillators and a pink noise generator. Pressing button one activates the oscillators, with the frequency determined by the IR sensor. Button two cycles through the various waveforms, while the third and fourth buttons shift the octave up and down. The output is I2S, and from there the signal heads out to an amplifier and speaker.
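A rough sketch of that signal chain using the Teensy Audio library is below. The patching follows the description above, but the pin assignments, scaling, and the use of the Audio Shield codec are assumptions rather than the project’s actual code.

```cpp
// Sketch of the described chain: two oscillators plus pink noise into I2S.
// Pin choices and scaling are assumptions, not the project's real values.
#include <Audio.h>

AudioSynthWaveform   osc1;
AudioSynthWaveform   osc2;
AudioSynthNoisePink  pink;
AudioMixer4          mix;
AudioOutputI2S       i2sOut;
AudioConnection      c1(osc1, 0, mix, 0);
AudioConnection      c2(osc2, 0, mix, 1);
AudioConnection      c3(pink, 0, mix, 2);
AudioConnection      c4(mix, 0, i2sOut, 0);
AudioConnection      c5(mix, 0, i2sOut, 1);
AudioControlSGTL5000 codec;            // only needed if using the Audio Shield

const int IR_PIN   = A2;               // IR distance sensor (assumed pin)
const int GATE_PIN = 2;                // "button one" gate (assumed pin)

void setup() {
  AudioMemory(16);
  codec.enable();
  codec.volume(0.5);
  pinMode(GATE_PIN, INPUT_PULLUP);
  osc1.begin(0.0, 220.0, WAVEFORM_SAWTOOTH);
  osc2.begin(0.0, 220.0, WAVEFORM_SQUARE);
  pink.amplitude(0.05);
}

void loop() {
  bool gate  = (digitalRead(GATE_PIN) == LOW);
  float freq = map(analogRead(IR_PIN), 0, 1023, 110, 880);  // hand distance -> pitch
  osc1.frequency(freq);
  osc2.frequency(freq * 1.01);          // slight detune to thicken the sound
  osc1.amplitude(gate ? 0.4 : 0.0);
  osc2.amplitude(gate ? 0.4 : 0.0);
}
```

Waveform cycling and octave shifting from the other buttons would just swap the waveform argument to begin() and scale the frequency by powers of two.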

Of course, it’s really not a musical instrument unless it looks cool, and that’s where this project is really great. It’s a fully 3D printed enclosure that actually looks good. There’s an 8×8 LED array to display the current waveform, and this is something that could actually be a product instead of a project. It’s a great synth, and we’re happy to have it in the running for the Hackaday Prize.

Continue reading “The Portable, Digital, Visual Theremin”

The Swiss Army Knife Of Audio Synthesis

Thirty years ago, we would be lucky if a computer could play audio. Take a computer from twenty years ago, and you’ll be lucky if it can play an MP3 in real-time. Now, computers can handle hundreds of tracks of CD-quality audio, and microcontrollers are several times more powerful than a desktop computer of the mid-90s. This means, of course, that microcontrollers can do audio very, very well. For his entry to the Hackaday Prize, [Fabien] is capitalizing on this power to create a Swiss Army knife of audio synthesis. It’s called the Noise Nugget, and it’s just what you need when you want to put audio in anything.

The microcontroller in question is an ARM Cortex-M4 running at 180 MHz, paired with a quality DAC. For connectivity, there’s USB, two audio outputs, one audio input, I2C, UART, and GPIOs. With this, you’ve got a digital synthesizer with a MIDI interface, audio effects for guitar pedal tomfoolery, a trigger board for playing pre-recorded sounds, a digital recorder, and a USB sound interface.

So, with all that processing power, what can the Noise Nugget actually do? Well, first of all, it’s a sampler. [Fabien] has a video demo of the Noise Nugget set up in sampler mode, where it can play a lute-ish sample and a cat sound. All of this is controlled over MIDI and played through a cheap speaker. The results — except for the cat sample — sound great. You can check that video out below.

Continue reading “The Swiss Army Knife Of Audio Synthesis”

Wavetable General MIDI For Everyone

There are only so many ways to generate music with a computer, and by far the most popular method is MIDI. It’s been around for thirty-five years, and you don’t get to be a decades-old standard for no reason. That said, turning MIDI into audio is a pain, but this project in the Musical Instrument Challenge for the Hackaday Prize makes it easy. It’s a Fluxamasynth Module that turns MIDI into something you can hear.

The key to this build is a single chip that takes MIDI data in and spits out audio according to the 128 General MIDI sounds. This might not sound like much, but if you’ve ever tried to turn MIDI into sound, you’ll find your options are limited. There is exactly one chip that can do this and is easily obtainable: the SAM2695 from Dream Sound Synthesis. This chip was originally designed for cheap toy keyboards, but if you have a chip, you can do anything with it.

The Fluxamasynth Modules are inspired by the original Fluxamasynth, an Arduino shield that is basically a breakout board for the SAM chip. There’s a MIDI input, a 1/8″ jack for output, and not much else. The Fluxamasynth Modules extend that capability by adding stereo output, reverb, chorus, flange, and delay effects, and by digging deep into the chip’s configurable parameters for tuning.
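Since the chip simply listens for standard MIDI bytes on a serial input, driving it from a host microcontroller is straightforward. Here’s a hedged sketch of that idea using raw MIDI messages; which serial port and pins apply depends on the board and module, and the real Fluxamasynth library presumably wraps all of this in friendlier calls.

```cpp
// Illustrative only: raw MIDI bytes to a SAM2695-style synth over a
// hardware serial port at the standard MIDI baud rate of 31250.
// The choice of Serial1 and the wiring are assumptions.
const uint8_t MIDI_CH = 0;             // MIDI channel 1

void midiProgramChange(uint8_t ch, uint8_t program) {
  Serial1.write(0xC0 | (ch & 0x0F));   // program change status byte
  Serial1.write(program & 0x7F);
}

void midiNoteOn(uint8_t ch, uint8_t note, uint8_t vel) {
  Serial1.write(0x90 | (ch & 0x0F));   // note-on status byte
  Serial1.write(note & 0x7F);
  Serial1.write(vel & 0x7F);
}

void midiNoteOff(uint8_t ch, uint8_t note) {
  Serial1.write(0x80 | (ch & 0x0F));   // note-off status byte
  Serial1.write(note & 0x7F);
  Serial1.write((uint8_t)0);
}

void setup() {
  Serial1.begin(31250);                // MIDI baud rate
  midiProgramChange(MIDI_CH, 48);      // GM program 49: String Ensemble 1
}

void loop() {
  // Arpeggiate a C major triad forever
  const uint8_t notes[] = {60, 64, 67};
  for (uint8_t n : notes) {
    midiNoteOn(MIDI_CH, n, 100);
    delay(250);
    midiNoteOff(MIDI_CH, n);
  }
}
```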

The hardware is basically an audio appliance for the Arduino, Raspberry Pi, and the ESP32, and allows for generative music through code. You can see an example of this project in the video below.

Continue reading “Wavetable General MIDI For Everyone”

The Ultimate MIDI Wind Controller Is The Human Voice

When it comes to music, the human voice is the most incredible instrument. From Tuvan throat singing to sopranos belting out an aria, the human vocal tract has evolved over millions of years to be the greatest musical instrument. We haven’t quite gotten to the point where we can implant autotune in our vocal cords, but this project for the Hackaday Prize aims to be a bridge between singers and instrumentalists. It’s a hands-free instrument that relies on vocal gesture sensing to drive electronic musical instruments.

The act of speaking requires dozens of muscles, and of course no device that measures how the human vocal tract is shaped will be able to capture all of them. Still, the Multiwind manages to measure breathing in, breathing out, the shape of the lower and upper lips, and its own tilt, giving it far more feedback than any traditional wind instrument. It does this with IMUs and a mouthpiece held on a frame seemingly inspired by one of those hands-free harmonica neck holders.

The output for this device is MIDI, although the team behind this build already has data streaming to an instance of Max, and once you have that, you have every musical instrument imaginable. It’s an innovative musical instrument, and something we’re really excited to see the results of.
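The project’s actual sensor fusion isn’t spelled out here, but the general pattern of turning a continuous breath reading into MIDI looks something like the hypothetical sketch below. CC 2 really is the standard MIDI breath controller; the sensor, pins, thresholds, and fixed note are all assumptions for illustration.

```cpp
// Hypothetical mapping of a breath-pressure reading to MIDI messages.
// Everything here except the MIDI byte format is made up for illustration.
const int     BREATH_PIN = A0;   // analog pressure sensor (assumed)
const uint8_t MIDI_CH    = 0;
const uint8_t NOTE       = 62;   // D4, arbitrary
bool noteIsOn = false;

void sendCC(uint8_t cc, uint8_t value) {
  Serial1.write(0xB0 | MIDI_CH);               // control change status byte
  Serial1.write(cc & 0x7F);
  Serial1.write(value & 0x7F);
}

void setup() {
  Serial1.begin(31250);                        // standard MIDI baud rate
}

void loop() {
  int raw = analogRead(BREATH_PIN);            // 0..1023
  uint8_t breath = map(raw, 0, 1023, 0, 127);  // scale to 7-bit MIDI
  sendCC(2, breath);                           // CC 2 = breath controller

  if (!noteIsOn && breath > 10) {              // crude note gate on breath
    Serial1.write(0x90 | MIDI_CH); Serial1.write(NOTE); Serial1.write((uint8_t)100);
    noteIsOn = true;
  } else if (noteIsOn && breath < 5) {
    Serial1.write(0x80 | MIDI_CH); Serial1.write(NOTE); Serial1.write((uint8_t)0);
    noteIsOn = false;
  }
  delay(10);
}
```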

With Grinning Keyboard And Sleek Design, This Synth Shows It All

Stylish! is a wearable music synthesizer that combines slick design with stylus-based operation to yield a giant trucker-style belt buckle that can pump out electronic tunes. With a PCB keyboard and an LED-ringed inset speaker that resembles an eyeball over a wide grin, Stylish! certainly has a unique look to it. Other synthesizer designs may have more functions, but certainly not more style.

The unit’s stylus and PCB key interface resemble a Stylophone, but [Tim Trzepacz] has added many sound synthesis features as well as a smooth design and LED feedback, all tied together with battery power and integrated speaker and headphone outputs. It may have been originally conceived as a belt buckle, but Stylish! certainly could give conference badge designs a run for their money.
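We don’t know exactly how [Tim] scans his keys, but the classic Stylophone-style trick is a resistor ladder: each PCB pad taps a different point on a voltage divider, and the stylus carries that voltage to a single ADC pin. A rough sketch of that idea, with entirely assumed resistor spacing, thresholds, and a pull-down on the ADC pin so “no touch” reads near zero:

```cpp
// Rough sketch of a one-pin, resistor-ladder stylus keyboard.
// Assumes evenly spaced ladder taps and a pull-down on the ADC pin.
const int STYLUS_PIN = A1;
const int NUM_KEYS   = 13;           // one octave of pads (assumed)

int keyFromReading(int adc) {
  if (adc < 20) return -1;           // stylus not touching anything
  return (adc * NUM_KEYS) / 1024;    // bucket the reading into a key index
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  int key = keyFromReading(analogRead(STYLUS_PIN));
  if (key >= 0) {
    float freq = 220.0 * pow(2.0, key / 12.0);  // equal-tempered pitch
    Serial.print("key ");  Serial.print(key);
    Serial.print(" -> ");  Serial.print(freq);  Serial.println(" Hz");
  }
  delay(20);
}
```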

The photo shown is a render, but a prototype is underway using a milled PCB and 3D printed case. [Tim]’s Google photo gallery has some good in-progress pictures showing the prototyping process along with some testing, and his GitHub repository holds all the design files, should anyone want a closer look under the hood. Stylish! was one of the twenty finalists selected for the Musical Instrument Challenge portion of the 2018 Hackaday Prize and is therefore one of the many projects in the running for the grand prize!


Piano Genie Trained A Neural Net To Play 88-Key Piano With 8 Arcade Buttons

Want to sound great on a piano using only your coding skills? Enter Piano Genie, the result of a research project from Google AI and DeepMind. You press any of eight buttons, and a neural network makes sure the piano plays something cool, compensating in real time for what’s already been played.

Almost anyone new to playing music who sits down at a piano will produce a sound similar to that of a cat chasing a mouse through a tangle of kitchen pots. Who can blame them, given the sea of 88 inexplicable keys sitting before them? But they’ll quickly realize that playing keys in succession in one direction produces sounds with consistently increasing or decreasing pitch. They’ll also learn that pressing keys for different lengths of time can improve the melody. But there are still 88 of them and plenty more to learn, such as which keys will sound harmonious when played together.

Piano Genie training architecture

With Piano Genie, gone are the daunting 88 keys, replaced with a 3D-printed box of eight arcade-style buttons which they made by following this Adafruit tutorial. A neural network maps those eight buttons to something meaningful on the 88-key piano keyboard. Being a neural network, the mapping isn’t a fixed one-to-one or even one-to-many. Instead, it’s trained to play something which should sound good, taking into account what was played previously, and it won’t necessarily be the same each time.

To train it, they used data from approximately 1,400 performances from the International Piano e-Competition. The result can be quite good, as you can see and hear in the video below. The buttons feed into a computer, but the computer plays the result on an actual piano.

For training, the neural network really consists of two networks. One is an encoder, in this case a recurrent neural network (RNN), which takes piano sequences and learns to output a vector. In the diagram, the vector is in the middle, with one element for each of the eight buttons. The second network is the decoder, also an RNN. It’s trained to turn that eight-element vector back into the same music that was fed into the encoder.

Once trained, only the decoder is used. The eight-button keyboard feeds into the vector, and the decoder outputs suitable notes. The fact that they’re RNNs means that rather than learning a fixed one-to-many mapping, the network takes into account what was previously played in order to come up with something which hopefully sounds pleasing. To give the user a little more creative control, they also trained it to recognize when the user is playing a rising or falling melody and to output the same. See their paper for how they turned polyphonic sound into monophonic and back again.
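To make the stateful idea concrete, here is a tiny stand-in for the decoder’s job. This is emphatically not the RNN itself, just a hard-coded toy heuristic: it remembers the last key it chose and walks up or down the 88 keys depending on which of the eight buttons was pressed.

```cpp
// Toy stand-in (NOT Piano Genie's actual model): a stateful mapping from
// 8 buttons onto 88 keys that keeps direction consistent with the past.
#include <cstdio>

struct ToyDecoder {
    int lastKey = 44;                  // start near the middle of the keyboard
    int step(int button) {             // button index 0..7
        int delta = (button - 3) * 2;  // low buttons move down, high buttons up
        lastKey += delta;
        if (lastKey < 0)  lastKey = 0;
        if (lastKey > 87) lastKey = 87;
        return lastKey;                // index into the 88 piano keys
    }
};

int main() {
    ToyDecoder dec;
    int phrase[] = {4, 5, 6, 7, 3, 2, 1, 0};   // pretend button presses
    for (int b : phrase)
        printf("button %d -> key %d\n", b, dec.step(b));
}
```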

If you prefer a different style of music you can train it on a MIDI collection of your own choosing using their open-sourced model. Or you can try it out as is right now through their web interface. I’ll admit, I started out just banging on it, producing the same noise I would get if I just hammered away randomly on a piano. Then I switched to thinking of making melodies and the result started sounding better. So some music background and practice still helps. For the video below, the researcher admits to having already played for a few hours.

This isn’t the first project we’ve covered by these Google researchers. Another was this music synthesizer, again using neural networks, but this time with a Raspberry Pi. And if our discussion of recurrent neural networks went a bit over your head, check out our overview of neural networks.

Continue reading “Piano Genie Trained A Neural Net To Play 88-Key Piano With 8 Arcade Buttons”