Bit Banging VGA From An SD Card Slot

If you’ve got some favorite electronic device that includes an SD card slot but doesn’t have a video out port, you may be able to push VGA signals through the card reader’s conductors. That’s exactly what’s going on above with the Ben NanoNote, a sub-$100 Linux device which we’ve seen use its SD card slot for general I/O before.

The hardware to capture the signals includes a breakout board for the card slot. Free-formed on the other end of that connector card is a gaggle of resistors that handles level conversion for the VGA color signals, with a VGA cable taking it from there to the monitor. The software that makes this happen is a dirty hack, blocking all other functions while it displays a still image, but we’re sure it can be cleaned up somewhat. Just don’t hold out hope for full-motion video; this little guy just doesn’t have it in him.
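The resistors are doing the classic logic-level-to-VGA trick: a VGA input is terminated with 75 Ω and expects about 0.7 V at full brightness, so a series resistor forms a divider with that termination. A quick sketch of the math (the 3.3 V logic level is our assumption about the NanoNote’s I/O, not a documented spec):

```python
# Series resistor needed to drop a logic-level color signal to VGA's
# nominal 0.7 V full-scale into the monitor's 75 ohm termination.
V_LOGIC = 3.3      # I/O high level (assumed for this sketch)
V_VGA   = 0.7      # VGA full-scale video level
R_TERM  = 75.0     # termination resistance inside the monitor

def series_resistor(v_logic=V_LOGIC, v_vga=V_VGA, r_term=R_TERM):
    """Solve v_vga = v_logic * r_term / (r_term + R) for R."""
    return r_term * (v_logic / v_vga - 1.0)

if __name__ == "__main__":
    # comes out just shy of 280 ohms; a standard 270 ohm part is close enough
    print(series_resistor())
```

With 3.3 V logic this lands around 279 Ω per color channel, which is why you so often see 270 Ω resistors in homebrew VGA hacks.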

[via Dangerous Prototypes via Slashdot]

Synkie: The Modular Synth For Video

The folks at [anyma] have been working on an analog video processor called Synkie for a while now, and we’re amazed a project this awesome has passed us by for so long.

Like a Moog or Doepfer synth, the Synkie was developed with modularity in mind. So far, [anyma] has built modules to split and combine the sync and video signals, and modules to invert, add, subtract, mix, filter and amplify those signals. The end result of all this video processing is an output that can look like a glitched Atari, an art installation, and a scrambled cable station all at the same time.
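The Synkie does all of this in the analog domain, but the modules are easy to picture numerically. A toy illustration (our own, not [anyma]’s circuitry), treating one scan line as samples where 0.3–1.0 is active video:

```python
# Toy numeric model of two Synkie-style modules. Levels are assumptions
# for illustration; the real hardware works on continuous analog video.
BLACK, WHITE = 0.3, 1.0  # assumed black and white levels for active video

def invert(line):
    """Invert module: flip active video about its mid-level."""
    return [BLACK + WHITE - s for s in line]

def mix(a, b, amount=0.5):
    """Mix module: crossfade two lines, like a two-input video mixer."""
    return [x * (1 - amount) + y * amount for x, y in zip(a, b)]
```

Chain a few of these (invert, then mix the result back with the original, say) and you already get the ghostly, solarized look in the videos.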

The Synkie’s output reminds us of the original Doctor Who title sequence, and actually this idea isn’t far off the mark – both use video feedback that will produce anything from a phantasmagoric ‘flying through space’ aesthetic to a fractal Droste effect visualization. We’re impressed with Synkie’s capabilities, but we’re astounded by the [anyma] crew’s ability to control a video signal in real time to get what they want.

Check out a video of the Synkie after the jump. There’s also more footage of the Synkie in action on the Synkie Vimeo channel.

Continue reading “Synkie: The Modular Synth For Video”

USB Minecraft Portal

[Sprite_tm] had heard some time ago that middle-of-the-road Nokia phones had some really interesting LCDs: 2.4-inch TFTs with 320×240 resolution. Since they are pretty low cost as well, he immediately got three and started working with them. Apparently these LCDs are self-contained, meaning they have all the driver chips and memory on board; you just need to know the pins and commands. This too is fairly easy, as they are somewhat standard setups and datasheets for similar models work in a pinch.
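Self-contained controllers like these commonly take commands over a simple serial bus as a data/command flag followed by eight data bits. Here’s a rough sketch of that framing — the 9-bit scheme and the bit order are assumptions for illustration, not the documented protocol of [Sprite_tm]’s exact panel:

```python
# Hypothetical serializer for a 9-bit LCD command bus: one D/C flag bit
# followed by 8 data bits, MSB first. Framing details are assumed.

def frame_bits(byte, is_command):
    """Serialize one byte into a 9-bit frame: D/C flag + 8 data bits."""
    bits = [0 if is_command else 1]                      # 0 = command, 1 = data
    bits += [(byte >> i) & 1 for i in range(7, -1, -1)]  # MSB first
    return bits

def write_sequence(items):
    """Flatten (byte, is_command) pairs into one bit stream to clock out."""
    stream = []
    for byte, is_command in items:
        stream.extend(frame_bits(byte, is_command))
    return stream
```

On a microcontroller, each bit in the stream becomes one toggle of the data pin plus a clock pulse — which is exactly the sort of thing a datasheet for a similar controller will pin down.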

Once the things were working, what do you do with them? [Sprite_tm] decided to make a desktop Minecraft portal. Since the animation is a bit complex in microcontroller terms, he grabbed an STM32F101CB for its beefy CPU and got to work. Getting the textures from Minecraft proved to be a bit of a bear, as they are not static images but are calculated on startup. A bit of C code on the PC quickly generates an appropriate pattern, which is exported to the microcontroller for display in its final home.
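The PC-side step is the easy part to sketch: generate a plausibly stone-like noisy tile with a seeded RNG and dump it as a C array the firmware can include. The palette and noise rule below are our assumptions, not Minecraft’s actual texture code or [Sprite_tm]’s tool:

```python
import random

# Generate a reproducible 16x16 "stone" tile and emit it as a C array.
# Palette values are assumed RGB565 greys, purely for illustration.

def make_tile(size=16, seed=42):
    rng = random.Random(seed)          # fixed seed -> same texture every run
    palette = [0x7BEF, 0x6B4D, 0x5ACB] # assumed RGB565 grey/stone shades
    return [rng.choice(palette) for _ in range(size * size)]

def to_c_array(pixels, name="tile"):
    """Format the pixel list as a const array for the firmware build."""
    body = ", ".join("0x%04X" % p for p in pixels)
    return "const uint16_t %s[%d] = { %s };" % (name, len(pixels), body)
```

Paste the output into the firmware and the microcontroller never has to compute a texture itself.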

The case is made out of wood, and once finished it looks just like the “real” thing — a presentation any “blockhead” would love to have on their desk.

Real-time Digital Puppetry


If it sometimes seems that there are only a finite number of things you can do with your kids, have you ever considered making movies? We don’t mean taking home videos – we’re talking about making actual movies where your kids can orchestrate the action and be the indirect stars of the show.

Maker [Friedrich Kirchner] has been working on an application called MovieSandbox, an open-source real-time animation tool. A couple of years in the making, the project runs on both Windows and Apple computers (with Linux in the works), making it accessible to just about everyone.

His most recent example of the software’s power is a simple digital puppet show, which is sure to please young and old alike. Using sock puppets fitted with special flex sensors, he is able to control his on-screen cartoon characters by simply moving his puppets’ “mouths”. An Arduino is used to pass the sensor data to his software, while also allowing him to dynamically switch camera angles with a series of buttons.
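On the host side, turning a raw flex reading into an on-screen mouth position is a simple linear mapping. A sketch of that plumbing — the calibration endpoints here are assumptions you would tune per sensor and per sock, not values from [Kirchner]’s setup:

```python
# Map raw 10-bit flex-sensor readings (as streamed from an Arduino's ADC)
# onto a puppet-mouth opening angle. Calibration constants are assumed.

FLEX_CLOSED, FLEX_OPEN = 300, 700   # raw ADC readings at rest / fully bent
MOUTH_MAX_DEG = 45.0                # widest on-screen mouth opening

def mouth_angle(raw):
    """Linearly map a raw reading onto 0..MOUTH_MAX_DEG, clamped."""
    t = (raw - FLEX_CLOSED) / (FLEX_OPEN - FLEX_CLOSED)
    return max(0.0, min(1.0, t)) * MOUTH_MAX_DEG
```

The clamping matters: flex sensors drift, and without it a noisy reading would snap the character’s jaw past its rig limits.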

Obviously something like this requires a bit of configuration in advance, but given a bit of time we imagine it would be pretty easy to set up a digital puppet stage that will keep your kids happily occupied for hours on end.

Continue reading to see a quick video of his sock puppet theater in action.

[via Make]

Continue reading “Real-time Digital Puppetry”

Tandy Color Computer (CoCo3) Color Video Playback

[John W. Linville] wrote a digital video player for the Tandy Color Computer (aka the TRS-80 Color Computer). The decades-old hardware performs quite well considering the limited resources he had to work with. This is the second iteration of his player, and can be seen after the break playing a promo video for CoCoFEST 2011, where he’ll show it off in person.

In the most recent thread post (at the time of writing), [John] shares the methods used to get this running. FFmpeg is used on a modern computer to process the source video, separating the audio into an 8-bit 11040Hz file and generating several PPM files at the proper video frame rate. ImageMagick takes it from there, converting the PPM files to a bitmap format. It also processes each frame for differential changes, reducing the size to fall within the available bandwidth. The frames are then interleaved with the audio to produce the final format. Video is 128×192 with rectangular pixels. [John’s] already used it to watch such classics as War Games on the antiquated hardware.
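The differential step is the key to fitting within the CoCo’s bandwidth: instead of storing every frame whole, store only the runs of bytes that changed since the previous frame. A minimal sketch of the idea — the (offset, bytes) run format is our assumption for illustration, not [John]’s actual on-disk layout:

```python
# Delta-encode a frame against its predecessor as (offset, changed-bytes)
# runs, and reconstruct it on the other end. Run format is assumed.

def diff_frame(prev, cur):
    """Encode cur against prev as a list of (offset, changed-bytes) runs."""
    runs, i, n = [], 0, len(cur)
    while i < n:
        if cur[i] != prev[i]:
            start = i
            while i < n and cur[i] != prev[i]:
                i += 1
            runs.append((start, cur[start:i]))
        else:
            i += 1
    return runs

def apply_diff(prev, runs):
    """Rebuild the current frame from the previous one plus the runs."""
    out = list(prev)
    for offset, data in runs:
        out[offset:offset + len(data)] = data
    return out
```

For mostly static scenes the runs are tiny, which is exactly what makes real-time playback from decades-old storage plausible.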

Continue reading “Tandy Color Computer (CoCo3) Color Video Playback”

Camera Software Learns To Pick You Out Of A Crowd


While the Kinect is great at tracking gross body movements and discerning what part of a person’s skeleton is moving in front of the camera, the device most definitely has its shortcomings. For instance, facial recognition is quite limited, and we’re guessing that it couldn’t easily track an individual’s eyes across a room.

No, for tracking like that, you would need something far more robust. Under the guidance of [Krystian Mikolajczyk and Jiri Matas], PhD student [Zdenek Kalal] has been working on a piece of software called TLD, which has some pretty amazing capabilities. The software uses almost any computer-connected camera to simultaneously Track an object, Learn its appearance, and Detect the object whenever it appears in the video stream. The software is so effective, as you can see in the video below, that it has been dubbed “Predator”.

Once he has chosen an object within the camera’s field of vision, the software monitors that object, learning more and more about how it looks under different conditions. These learning abilities allow it to pick out individual facial features, follow moving objects in video, and recognize an individual’s face amid a collection of others.
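The track-learn-detect loop is easiest to see in miniature. Here’s a toy version in the spirit of the description above — vastly simplified from TLD itself: detect a patch by best template match, then blend what was found back into the template so appearance changes are absorbed over time:

```python
# Toy track-learn-detect loop on 1D "frames" (lists of intensities).
# Real TLD uses far more sophisticated detection and learning.

def detect(frame, template):
    """Return the offset where the template matches best (min squared diff)."""
    w = len(template)
    scores = [(sum((frame[i + j] - template[j]) ** 2 for j in range(w)), i)
              for i in range(len(frame) - w + 1)]
    return min(scores)[1]

def learn(template, patch, rate=0.5):
    """Blend the matched patch into the template (the 'L' in TLD, crudely)."""
    return [t + rate * (p - t) for t, p in zip(template, patch)]
```

Each frame, you detect, grab the patch at the winning offset, and call `learn` on it — so a face that slowly turns or changes lighting keeps matching even though it no longer looks like the original selection.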

While the software can currently only track one object at a time, we imagine that with some additional development and computing horsepower, this technology will become even more amazing.

Continue reading “Camera Software Learns To Pick You Out Of A Crowd”

Super Refined Kinect Physics Demo


Since the Kinect has become so popular among hackers, [Brad Simpson] over at IDEO Labs finally purchased one for their office and immediately got to tinkering. In about 2-3 hours’ time, he put together a pretty cool physics demo showing off some of the Kinect’s abilities.

Rather than using rough skeleton measurements like most hacks we have seen, he paid careful attention to the software side of things. Starting off at the Kinect’s full resolution (something not everybody does), [Brad] took the data and manipulated it quite a bit before creating the video embedded below. Skeleton data was collected and run through several iterations of a smoothing algorithm to substantially reduce the noise around the resulting outline.
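The iterated smoothing is a simple idea with a big visual payoff. A sketch of it — the 3-point kernel and iteration count are our assumptions; [Brad]’s exact filter isn’t specified:

```python
# Repeatedly run a 3-point moving average over an outline (one coordinate
# shown); each pass knocks down more of the depth-sensor noise.

def smooth(points, iterations=3):
    """Average each point with its neighbors, keeping the endpoints fixed."""
    pts = list(points)
    for _ in range(iterations):
        pts = [pts[0]] + [
            (pts[i - 1] + pts[i] + pts[i + 1]) / 3.0
            for i in range(1, len(pts) - 1)
        ] + [pts[-1]]
    return pts
```

Running a cheap filter several times like this behaves much like one wider, more expensive filter, which is likely why a couple of iterations were enough to make the outline usable for collision detection.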

The final product is quite a bit different than the Kinect videos we are used to seeing, and it drastically improves how the user is able to interact with virtual objects added to the environment. As you may have noticed, the blocks that were added to the video rarely penetrate the outline of the individual in the movie. This isn’t due to some sort of digital trickery – [Brad] was able to prevent the intersection of different objects via his tweaking of the Kinect data feed.

We’re not sure how much computing power this whole setup requires, but the code is available from their Google Code repository, so we hope to see other projects refined by utilizing the techniques shown off here.

[via KinectHacks]

Continue reading “Super Refined Kinect Physics Demo”