This little LED rig fades in time to music. The hardware itself is quite simple: some LEDs connected to the PWM pins of an Arduino. But the signal processing happens on a computer, using a Python script.
Many of the projects we see that pulse lights to music use the MSGEQ7 chip to process the audio signal in hardware. But since [Zolmeister] is using a computer to play his tunes, he took a different route. His Linux box uses PulseAudio to handle sound, which lets him record from the audio playback and provides an internal source for the pyAudio package. His Python script saves snippets of the streaming audio to .wav files. It then normalizes the volume level and uses the amplitude to set a PWM value before deleting the sample and moving on to the next. These values are pushed to the Arduino at 115200 baud to achieve the results seen in the video after the break.
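For the curious, here is a minimal sketch of that loop. This is not [Zolmeister]'s actual code; the device settings, scaling, and serial port name are all assumptions. It grabs a chunk from the PulseAudio monitor source with pyAudio, round-trips it through a .wav file as described, turns the chunk's amplitude into a single PWM byte, and ships it to the Arduino:

import wave, audioop, pyaudio, serial

CHUNK, RATE = 1024, 44100
pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)
arduino = serial.Serial('/dev/ttyUSB0', 115200)  # port name is a guess

while True:
    frames = stream.read(CHUNK)
    with wave.open('snippet.wav', 'wb') as w:  # the .wav round-trip described above
        w.setnchannels(1)
        w.setsampwidth(2)                      # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(frames)
    rms = audioop.rms(frames, 2)               # stand-in for the normalization step
    pwm = min(255, rms * 255 // 32768)         # scale amplitude to one PWM byte
    arduino.write(bytes([pwm]))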
Why write a .wav to disk? Couldn't the sample processing be done entirely in memory, saving you some disk thrashing?
Well, the video does it no justice, unless it's my computer that has the audio and video way out of sync… and the viewing angle is poor. Nice effort, though, if they do actually match up and you can see something other than his computer screens.
Indeed, the video was recorded and the audio overlaid afterward, as I have no speakers and only my USB headphones. The LEDs do match up well in real life, though.
Yeah, I expected a lot of lag when I clicked to watch the video, based on the description of how it works.
At least I wasn’t disappointed because I had low expectations. Whatever happened to hardware (instant analog) versions of this, which actually work well? I’d like to see more of those.
I think it was intended to be seen from farther away. They were utilizing the difference in speed of light and sound to sync up the music to the lights. There was about a 500ms mismatch which can be properly synced by stepping back about 700 feet. ;-)
only if you had your ears at the original location and your eyes over 700 feet away, otherwise it’d be even farther off.
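For anyone checking the math, a quick back-of-the-envelope (assuming the speed of sound in air is ~1126 ft/s, i.e. ~343 m/s at room temperature):

lag_s = 0.5                         # the ~500 ms audio/video mismatch
speed_of_sound_ft_s = 1126          # ~343 m/s in air at 20 °C
print(lag_s * speed_of_sound_ft_s)  # ~563 feet, so "about 700" is a generous round-up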
I think it’s pretty good for such a ‘basic’ setup.
Incidentally I saw this in the related video section and that’s sort of interesting too:
https://www.youtube.com/watch?v=_WYmzph21EM&feature=player_embedded#!&hd=1
Ok, so these are all connected together and not individually controlled LEDs? Next step would be to cut out the dev board and just go with a small uC. Maybe utilize the Micronucleus bootloader posted earlier this month?
Still, that delay looks like it’d drive me nuts.
I don't really understand why you have to save every sample into a wave file after you read it with data = stream.read(chunk). Using this data directly would prevent a lot of lag!
I use fromstring from the numpy package to convert it to an array:
from numpy import fromstring, short
data = fromstring(stream.read(NUM_SAMPLES), dtype=short)
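Fleshing that out a bit, here is a sketch of the all-in-memory idea. The chunk size, scaling, and port name are placeholders, not the original script:

import numpy as np
import pyaudio, serial

NUM_SAMPLES, RATE = 1024, 44100
pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=NUM_SAMPLES)
arduino = serial.Serial('/dev/ttyUSB0', 115200)

while True:
    # np.frombuffer is the modern spelling of fromstring
    data = np.frombuffer(stream.read(NUM_SAMPLES), dtype=np.short)
    rms = np.sqrt(np.mean(data.astype(np.float64) ** 2))  # amplitude, no disk involved
    arduino.write(bytes([min(255, int(rms * 255 / 32768))]))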
So why not just connect an audio feed into the Arduino and let it work it out? Seems like quite a convoluted process / overuse of CPU cycles to me.
I remember the days when this would be done using a handful of passives and maybe a couple of op-amps.
Actually, re-reading the software side, it’s just plain awful.
Put the Arduino in the bin and go buy a book on electronics; "Getting Started in Electronics" by Forrest Mims contains several very basic circuits that do this stuff without a micro.
I made mine with individual transistors: one for bass, one for mid, one for treble, and one for the master preamp.
No microcontrollers or desktop computer needed, and the 5V supply doesn't even have to be 5V, or regulated.
Still, he learned valuable computer interfacing skills, so I guess that makes it okay :)
I run audio visualization effects through a computer and then out to an Arduino, using Processing.org. At no point is the audio ever saved to a file, and I get realtime effects across a 128-pixel RGB array.
http://www.youtube.com/watch?v=D8PRIf9joyQ
His beat detection is very weak. He should try using a sliding cross-correlation window instead. Low-pass filtering is a bit simpler, but the results aren't as good.
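Something along these lines, as a toy sketch: interpreting the suggestion as correlating a sliding window of the energy envelope against itself to find the beat period. The constants are made up, not tuned values:

import numpy as np

CHUNKS_PER_SEC = 43  # e.g. 44100 Hz audio in 1024-sample chunks

def beat_period(energies):
    """Estimate the beat period from a few seconds of per-chunk energies."""
    e = energies - energies.mean()
    ac = np.correlate(e, e, mode='full')[len(e) - 1:]  # autocorrelation, lags >= 0
    lo, hi = CHUNKS_PER_SEC // 4, CHUNKS_PER_SEC * 2   # restrict to roughly 30-240 BPM
    lag = lo + int(np.argmax(ac[lo:hi]))               # strongest periodicity in range
    return lag / CHUNKS_PER_SEC                        # beat period in seconds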