Don’t watch [Jason Hotchkiss]’s video if flashing lights or bleepy-bloopy synthesizer noises give you seizures. Do watch, however, if you’re interested in a big honeycomb-shaped LED matrix being driven at audio frequencies through a dedicated square-wave synthesizer that’s built in.
The LED panel in question is housed in a snazzy laser-cut, honeycomb-shaped bezel: a nice change from the standard square in our opinion. The lights are 1/2 watt (whoa!) whites, and the rows and columns are driven by transistor drivers that are in turn controlled by shift registers. We’re not entirely sure how the matrix is driven — we’d love to see a circuit diagram — but it looks like it’s some kind of strange, non-scanning mode where all of the column and row drives are on at once. Whatever, it’s art.
And it’s driven by logic chips making audio-frequency square waves. Two of these clock an LFSR, whose output runs through an R-2R DAC and then into the shift registers. The output is chaos, but the audio and the visuals do seem to influence each other. It’s an audio-visual embodiment of some of my wildest Logic Noise fantasies. Pretty cool. Enjoy the video.
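If you want to play along at home without a breadboard, here’s a minimal software sketch of the idea: clock a maximal-length 16-bit LFSR at an audio rate and treat a few of its bits as the input to a crude DAC. The tap choices and the bit-to-sample mapping are guesses on our part, since we haven’t seen [Jason]’s schematic.

```python
import struct
import wave

def lfsr_bleeps(path="lfsr_noise.wav", seconds=2, rate=8000):
    """Clock a 16-bit maximal-length Fibonacci LFSR once per audio
    sample and feed its low four bits to a pretend 4-bit DAC."""
    state = 0xACE1                     # any nonzero seed works
    frames = []
    for _ in range(seconds * rate):
        # feedback from taps 16, 14, 13, 11 (a maximal-length polynomial)
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        sample = (state & 0x0F) * 4096 - 32768   # scale 0..15 into int16 range
        frames.append(struct.pack("<h", sample))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"".join(frames))

lfsr_bleeps()   # writes two seconds of chaotic bleepy-bloopy noise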
Don’t throw those old VGA monitors away. Turn them into works of art with [danjovic] and VGA Blinking Lights. This circuit uses a PIC16F688 to generate VGA video. Not just a random spray of monochrome dots, either: VGA Blinking Lights puts up an ever-changing display of 48 colored squares.
Originally created for the square inch contest, VGA Blinking Lights could hide behind a quarter. [Danjovic] dusted his project off and entered it in The 1 kB Challenge. The code is written in PIC assembly. The final hex used to generate the squares clocks in at 471 words. Since the PIC uses a 14-bit word, that works out to just over 824 bytes (471 × 14 = 6,594 bits). Plenty of space for feature creep!
Video is generated with a twist on the R-2R DAC. [Danjovic] tweaked the resistor values a bit to obtain the correct voltage levels for the VGA standard. The colors of the squares themselves are random, generated using a Galois linear feedback shift register (LFSR).
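A Galois LFSR is a natural fit for a tiny PIC, since each update is just a right shift plus a conditional XOR of a tap mask. Here’s a quick Python sketch of the technique; the seed, the tap mask, and the mapping of three state bits onto eight colors are our assumptions, not [Danjovic]’s actual implementation.

```python
def galois_lfsr_colors(seed=0xBEEF, n=48):
    """Generate n pseudo-random 3-bit colors (one of 8 RGB on/off
    combinations) from a 16-bit maximal-length Galois LFSR."""
    state = seed
    colors = []
    for _ in range(n):
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400       # tap mask for a maximal 16-bit Galois LFSR
        colors.append(state & 0b111)   # low 3 bits -> R, G, B bits
    return colors

print(galois_lfsr_colors())   # one color index per on-screen square
```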
With only a handful of components, and a BOM cost under $5, this would be a fun evening project for any hardware hacker.
If you have a cool project in mind, there is still plenty of time to enter the 1 kB Challenge! Deadline is January 5, so check it out and fire up your assemblers!
How many integrated circuits do you need to build up a power supply that’ll convert mains AC into a stable DC voltage? Would you believe, none? We just watched this video by [The Current Source] (embedded below), where he builds exactly that. If you’re in the mood for a very well done review of diode bridges as well as half- and full-wave rectifiers, you should check it out.
First off, [TCS] goes through the basics of rectification, and demonstrates very nicely on the oscilloscope how increasing capacitance on the output smooths out the ripple. (Hint: more is better.) And then it’s off to build. The end result is a very simple unregulated power supply — just a diode bridge with some capacitors on the output — but by using really big capacitors he gets down into the few-millivolt range for ripple into a constant load.
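For the back-of-the-envelope version: with a full-wave bridge feeding a constant load, the ripple is roughly V_ripple ≈ I / (f × C), where f is twice the mains frequency. The numbers below are our own illustrative picks, not [TCS]’s, but they show why gigantic capacitors get you into millivolt territory.

```python
# Rough ripple estimate for a full-wave bridge into a constant load.
# All values are illustrative -- swap in your own load and capacitor.
I_load = 0.05          # amps of (roughly) constant load current
f_ripple = 2 * 60      # Hz: full-wave rectified 60 Hz mains
C = 0.047              # farads: a whopping 47,000 uF electrolytic
v_ripple = I_load / (f_ripple * C)
print(f"~{v_ripple * 1000:.1f} mV of ripple")   # about 8.9 mV
```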
The output voltage of this circuit will depend on the average current drawn, but for basically static loads this circuit should work well enough, and the simplicity of just tossing gigantic capacitors at the problem is alluring. (We would toss in a linear regulator somewhere.)
Quibbling over circuit designs isn’t why you’re watching this video, though. It’s because you want to learn something. Check out the rest of his videos as well. [TCS] has only been at it a little while, but it looks like this is going to be a channel to watch.
It’s that time of year again. The nights are getting longer and the leaves are turning. The crisp fall air makes one’s thoughts turn to BattleBots: pumpkin-skinned BattleBots.
If you’re asking yourself, “could a laser-cut plywood bot, sheathed in a pumpkin, stand up against an all-metal monster”, you haven’t seen BattleBots before. Besides the hilarious footage (see video embedded below), a lot of the build is documented, from making a CAD model of a pumpkin to laser-cutting the frame, to “testing” the bot just minutes before the competition. (That has to be a good idea!)
The footage of the pumpkinbot’s rival, Chomp, is equally cool. We love that the hammer weapon is accelerated so quickly that Chomp actually lifts into the air, just as Newton would have predicted. We’re not sure the fire weapon is good for anything besides show, except perhaps against plywood pumpkinbots, but we love the effect.
As with all our meetups, we’ve gathered some of the neatest technologists to spill the beans on what they’re doing and how they’re doing it. Madison Maxey, founder of Loomia and designer of soft, blinky circuits, will be there. Dr. Ellen Jorgensen, co-founder and executive director of Genspace, the citizen science biotech ‘hackerspace’ in the heart of New York, will be there too. Kari Love & Matthew Borgatti of Super-Releaser, most famous for their super-cute pneumatic soft robots, will also be there. It’s still up in the air whether we’ll be racing these robots. Of course there will also be opportunities for you to present a lightning talk at the meetup.
Piano rolls are the world’s longest-lasting recording medium, and arguably the first digital one. They were mass-produced from 1896 to 2008, and you can still get them made today, although they’re a specialty item. The technology behind them, on both the player and the recorder sides, is simply wonderful.
[lwalkera] sent in this marvelous video (embedded below) that provides a late-80s peek inside the works of QRS Records, and the presenter seems to be loving every minute of it.
Player pianos are cool enough, with their “draw bar” pulling air through the holes in the paper roll as it goes by, and pneumatically activating the keys. But did you ever think of how the rolls are made?
If a picture is worth a thousand words, a video must be worth millions. However, computers still aren’t very good at analyzing video. Machine vision software like OpenCV can do certain tasks like facial recognition quite well. But current software isn’t good at determining the physical nature of the objects being filmed. [Abe Davis, Justin G. Chen, and Fredo Durand] are members of the MIT Computer Science and Artificial Intelligence Laboratory. They’re working toward a method of determining the structure of an object based upon the object’s motion in a video.
The technique relies on vibrations that can be captured by a typical 30 or 60 frames-per-second (fps) camera. Here’s how it works: a locked-down camera is used to image an object. The object is moved by wind, by someone banging on it, or by any other mechanical means. This movement is captured on video. The team’s software then analyzes the video to see exactly where the object moved, and how much it moved. Complex objects can have many vibration modes. The wire-frame figure used in the video is a great example: the hands of the figure vibrate more than the figure’s feet. The software uses this information to construct a rudimentary model of the object being filmed. It then allows the user to interact with the object by clicking and dragging with a mouse. Dragging the hands will produce more movement than dragging the feet.
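The team’s method tracks tiny sub-pixel motions in the frequency domain, which is well beyond a blog-post snippet. But you can get a feel for the first step, figuring out which parts of a frame move and by how much, with off-the-shelf tools. Here’s a rough sketch using OpenCV’s dense optical flow; the grid size and accumulation scheme are our own stand-ins, not the team’s pipeline.

```python
import cv2
import numpy as np

def motion_amplitude(video_path, grid=8):
    """Accumulate a per-region motion-energy map from a locked-down
    camera: dense optical flow magnitude, averaged over an 8x8 grid."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("couldn't read video")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = prev.shape
    energy = np.zeros((grid, grid))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)     # per-pixel motion magnitude
        for i in range(grid):                  # average into grid cells
            for j in range(grid):
                cell = mag[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
                energy[i, j] += cell.mean()
        prev = gray
    cap.release()
    return energy   # bigger numbers = livelier regions (hands vs. feet)
```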
The results aren’t perfect: they remind us of computer-animated objects from just a few years ago. However, this is very promising. These aren’t textured wire frames created in 3D modeling software. The models and skeletons were created automatically by software analysis. The team’s research paper (PDF link) contains all the details of their research. Check it out, and check out the video after the break.