Dittytoy recreation of Jean-Michel Jarre's Oxygene Part IV

Generative Music Created In Minimalistic JavaScript Code

Dittytoy user [srtuss] has recreated one of the most influential works of electronic music in an elegant nineteen kilobytes of JavaScript code. The recreation of Jean-Michel Jarre’s Oxygene Part IV on the Dittytoy platform, currently in beta, plays live right in your browser. Dittytoy lets users create generative music online using a simple JavaScript API whose syntax is loosely based on that of Sonic Pi, a code-based music creation and performance tool.

“Oxygene (Part IV)” was recorded by Jean-Michel Jarre in 1976. It was Jarre’s most successful single, charting in the top ten in several countries, and was more recently featured in the video game Grand Theft Auto IV. In the 1990s, famed electronic music innovator Brian Eno used the term “generative music” to describe music generated by an electronic system comprising ever-changing elements that may be algorithmic or random.

Recreating Jarre’s work required modeling the Korg Minipops 7 drum machine, one of the instruments featured in our slew of open-source synthesizer coverage.

Sweet Streams Are Made Of These: Creating Music On The Command Line

There are countless ways to create music. In its simplest form, it doesn’t even require any equipment, as evidenced by beatboxing or a cappella singing. If we move to the computer, it’s pretty much the same situation: audio programming languages have been around for as long as general-purpose high-level languages, and sound synthesis software along with them. And just as with physical equipment, none of that is strictly necessary, thanks to sed. Yes, sed, the good old stream editor, as [laserbat] shows in her music-generating script.

Providing both a minified and a fully commented version of Bach’s Prelude No. 1 in C major as examples, [laserbat] uses a string representation of the sheet music as the script’s starting point, along with a look-up table mapping each note to its wavelength. From there, she generates fixed-length PCM square-wave signals for each note, to be piped as-is to the sound card via ALSA’s aplay or SoX’s play. To keep things simple, she stays within the range of printable characters, using space and tilde as the low and high values respectively, which also yields the highest possible volume.
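If you’d rather prototype the idea in a friendlier language before attempting it in sed, here is a minimal Python sketch of the same approach (not [laserbat]’s script): fixed-length square waves built from printable characters, with space and tilde as the low and high levels, piped straight into aplay, whose defaults happen to be unsigned 8-bit mono at 8000 Hz. The note table and the closing arpeggio are placeholders, not the Prelude.

    #!/usr/bin/env python3
    # Minimal sketch: fixed-length 8-bit PCM square waves on stdout.
    # Usage (aplay defaults to unsigned 8-bit, 8000 Hz, mono):
    #   python3 squares.py | aplay
    import sys

    RATE = 8000                      # aplay's default sample rate in Hz
    NOTE_LEN = RATE // 2             # fixed note duration: half a second of samples
    LOW, HIGH = ord(' '), ord('~')   # printable low/high levels, as in the sed script

    # Placeholder note table (frequencies in Hz), not the Prelude's score
    NOTES = {'C4': 261.63, 'E4': 329.63, 'G4': 392.00, 'C5': 523.25}

    def square(freq):
        """Yield NOTE_LEN samples of a square wave at freq Hz."""
        half_period = max(1, round(RATE / (2 * freq)))  # samples per half cycle
        for i in range(NOTE_LEN):
            yield HIGH if (i // half_period) % 2 == 0 else LOW

    out = sys.stdout.buffer
    for name in ['C4', 'E4', 'G4', 'C5'] * 4:  # a stand-in arpeggio
        out.write(bytes(square(NOTES[name])))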

The concept itself is of course nothing new; it’s how .au and .wav files work, and it’s also behind these little C lines. And while the fixed note duration takes away some of the smoothness in [laserbat]’s version, adding variable note durations might be a step too far for a sed implementation, although we’ve certainly seen some more complex scripts in the past.

[via r/programming]

Facebook’s Universal Music Translator

Star Trek has its universal language translator, and now researchers from Facebook Artificial Intelligence Research (FAIR) have developed a universal music translator. Much of it is based on Google’s WaveNet, a version of which was also used in the recently announced Google Duplex AI.

[Figure: universal music translator architecture]

The inspiration for it came from the human ability to hear music played by any instrument and then whistle or hum it, thereby translating it from one instrument to another. This is something computers have had trouble doing well, until now. The researchers fed their translator a string quartet playing Haydn and had it translate the music to a chorus and orchestra singing and playing in the style of Bach. They’ve even fed it someone whistling the theme from Indiana Jones and had it translate the tune to a symphony in the style of Mozart.

Shown here is the architecture of their network. Note that all of the different music is fed into the same encoder network, but each instrument the music can be translated into has its own decoder network. It was implemented in PyTorch and trained using eight Tesla V100 GPUs over a total of six days. Efforts were made during training to ensure that the encoder extracted high-level semantic features from the music fed into it rather than just memorizing it. More details can be found in their paper.
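As a rough structural sketch only, and not the authors’ code, the shared-encoder, per-domain-decoder arrangement might look like the following in PyTorch. The real system uses WaveNet-style autoencoders plus a domain-confusion term during training; the plain convolutional stacks, channel sizes, and domain count below are stand-ins meant only to show how a single encoder feeds whichever decoder matches the target instrument.

    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        """One encoder for all incoming music (stand-in for the WaveNet-like encoder)."""
        def __init__(self, channels=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, channels, kernel_size=9, stride=4, padding=4),
                nn.ReLU(),
                nn.Conv1d(channels, channels, kernel_size=9, stride=4, padding=4),
            )

        def forward(self, audio):        # audio: (batch, 1, samples)
            return self.net(audio)       # latent: (batch, channels, frames)

    class DomainDecoder(nn.Module):
        """One decoder per target instrument (stand-in for a WaveNet decoder)."""
        def __init__(self, channels=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose1d(channels, channels, kernel_size=8, stride=4, padding=2),
                nn.ReLU(),
                nn.ConvTranspose1d(channels, 1, kernel_size=8, stride=4, padding=2),
            )

        def forward(self, latent):
            return self.net(latent)

    class MusicTranslator(nn.Module):
        def __init__(self, num_domains=6, channels=64):
            super().__init__()
            self.encoder = SharedEncoder(channels)      # shared by every domain
            self.decoders = nn.ModuleList(              # one per instrument
                DomainDecoder(channels) for _ in range(num_domains))

        def forward(self, audio, target_domain):
            return self.decoders[target_domain](self.encoder(audio))

    model = MusicTranslator()
    clip = torch.randn(1, 1, 16000)          # one second of audio at 16 kHz
    as_piano = model(clip, target_domain=2)  # decode with, say, the piano decoder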

So if you want to hear how an electric guitar played in the style of Metallica might sound when translated to piano in the style of Beethoven, listen to the samples in the video below.

Synthesizing Strings On A Cyclone V

Cornell students [Erissa Irani], [Albert Xu], and [Sophia Yan] built an FPGA wave equation music synth as the final project for [Bruce Land]’s ECE 5760 class.

The team used the Karplus-Strong string synthesis method to design a trio of four-stringed instruments to be played by the Cyclone V FPGA. A C program running on the development board’s ARM Cortex-A9 HPS serves as the music sequencer, controlling tempo and telling the FPGA which note to play.
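Karplus-Strong is simple enough to sketch in a few lines of software, which helps in understanding what the FPGA does in hardware. The Python below is illustrative only, not the students’ design, and the decay constant and sample rate are assumptions: a delay line seeded with noise is repeatedly averaged, a crude low-pass filter that damps the upper harmonics the way a decaying string does, while the length of the delay line sets the pitch.

    import random
    import struct
    import wave

    RATE = 44100

    def pluck(freq, seconds=1.0, decay=0.996):
        """Karplus-Strong: a noise burst through an averaging feedback delay line."""
        n = int(RATE / freq)                              # delay length sets the pitch
        line = [random.uniform(-1, 1) for _ in range(n)]  # noise burst = the pluck
        out = []
        for _ in range(int(RATE * seconds)):
            sample = line.pop(0)
            # Average with the next sample and feed back, slowly losing energy
            line.append(decay * 0.5 * (sample + line[0]))
            out.append(sample)
        return out

    # Write an A3 pluck (220 Hz) to a mono 16-bit WAV file
    samples = pluck(220.0)
    with wave.open('pluck.wav', 'wb') as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(b''.join(
            struct.pack('<h', int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))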

The students created versions of four songs, including “Colors of the Wind” from the Pocahontas soundtrack, “Far Above Cayuga’s Waters” (Cornell’s alma mater), and John Legend’s “All of Me”. A simple GUI lets the user select a song and choose which instrument or instruments to play, providing multiple variations of each song.

Thanks, [Bruce]!

The 35 Year Music Synthesizer That Spawned Chiptune

If you are of a certain age, MOS6581 either means nothing to you, or it is a track by Carbon Based Lifeforms. However, if you were a Commodore computer fan 35 years ago, it was a MOS Technology SID (Sound Interface Device), the sound “card” of its day and the power behind the Commodore 64’s sound system. Compared to its contemporaries, the chip had more in common with high-end electronic keyboards.

The Conversation has a great write-up about how the chip was different, how it came to be, the bug in the silicon that allowed it to generate an extra voice, and how it spawned the chiptune genre of music. The post might not be as technical as what we’d write here at Hackaday, but it does have oscilloscope videos (see below) and a good discussion of what it took to create music on the device.
