Interactive Visual Programming With Vvvv

Did you ever feel the urge to turn the power of image processing and OCR into music? Maybe you wanted to use motion capture to illustrate the dynamic movement of a kung-fu master in stunning images like the one above? Both projects were created with the same software.

vvvv (pronounced ‘four vee’, ‘vee four’, and sometimes even ‘veeveeveevee’) calls itself ‘a multi purpose framework’, which is as vague and correct as calling a computer ‘a device that performs calculations’. What can it do, and what does the framework look like? I’d like to show you.

Since its first release in 1998, the project has never officially left beta. This doesn’t mean the recent beta releases are unstable; it’s just that the people behind vvvv refrain from declaring their software ‘finished’. It also provides an excuse for some quirks, such as requiring 7-Zip to unpack the binaries and a UI that takes some getting used to. vvvv requires DirectX, and as such is limited to Windows.

With the bad stuff out of the way, let’s take a look at what vvvv can do. First, as implied by the close relationship with DirectX, it’s really good at producing graphics. An example of interactive video is embedded below the break. With its dataflow/visual programming approach, it also lends itself to rapid prototyping and live coding. Modifications to a patch, as programs are called in this context, immediately affect the output.

The name ‘patch’ harkens back to the days of analog synthesizers, and working with vvvv indeed has some similarities with signal processing that will make the DSP nerds among you feel right at home.

Continue reading “Interactive Visual Programming With Vvvv”

Solving Mazes With Graphics Cards

What if we told you that you likely have more computers than you think? And we’re not talking about things that are computers while not looking like one, like most modern cars or certain lightbulbs. We’re talking about the powerful machine hiding in your desktop computer called the graphics card. In the ordinary gaming rig, a graphics card that is much more powerful than the machine it’s built into is a common occurrence. In his tutorial, [Viktor Chlumský] demonstrates how to harness your GPU’s power to solve a maze.

Software that runs on a GPU is called a shader. This example shows a shader that finds its way through a maze. We also get a glimpse of the limitations that make this field of software special: [Viktor]’s solution has to work with only four variables, because all information is stored in the red, green, blue, and alpha channels of an image. The alpha channel represents the boundaries of the maze. The red and green channels are used to broadcast waves from the start and end points of the maze. Where the two waves meet marks the shortest solution, which is captured in the blue channel.
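To make the trick concrete, here’s a minimal CPU sketch of the wave-broadcast idea in Python/NumPy. The toy maze and names are ours, not [Viktor]’s shader code; each call to spread() plays the role of one shader pass over every pixel:

```python
import numpy as np

# Toy maze: True = wall (the alpha channel), False = open.
walls = np.array([
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
], dtype=bool)

h, w = walls.shape
red = np.zeros((h, w), dtype=bool)    # wave from the start point
green = np.zeros((h, w), dtype=bool)  # wave from the end point
red[0, 0] = True
green[h - 1, w - 1] = True

def spread(wave):
    """One pass: each open pixel picks up the wave from any neighbor."""
    grown = wave.copy()
    grown[1:, :] |= wave[:-1, :]   # from above
    grown[:-1, :] |= wave[1:, :]   # from below
    grown[:, 1:] |= wave[:, :-1]   # from the left
    grown[:, :-1] |= wave[:, 1:]   # from the right
    return grown & ~walls          # walls stop the wave

blue = np.zeros((h, w), dtype=bool)
while not blue.any():              # maze must be solvable, or this spins forever
    red, green = spread(red), spread(green)
    blue = red & green             # meeting front lies on the shortest path

print(blue.astype(int))
```

On the GPU, the same update runs for every pixel in parallel, with the four grids living in the four channels of a single texture.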

Despite the tons of cores and generous memory a GPU brings to the table, programming shaders feels a lot like working on microcontrollers. See for yourself in the maze-solving walkthrough below.

Continue reading “Solving Mazes With Graphics Cards”

Neural Network Gimbal Is Always Watching

[Gabriel] picked up a GoPro to document his adventures on the slopes and trails of Montreal, but quickly found he was better in front of the camera than behind it. Turns out he’s even better seated behind his workbench, as the completely custom auto-tracking gimbal he came up with is nothing short of a work of art.

There’s quite a bit going on here, and as you might expect, it took several iterations before [Gabriel] got all the parts working together. The rather GLaDOS-looking body of the gimbal is entirely 3D printed, and holds the motors, camera, and a collection of ultrasonic receivers. The Nvidia Jetson TX1 that does the computational heavy lifting is riding shotgun in its own swanky-looking 3D printed enclosure, but [Gabriel] notes a future revision of the hardware should be able to reunite them.

In the current version of the system, the target wears an ultrasonic emitter that is picked up by the sensors in the gimbal. The rough position information provided by the ultrasonics is then refined by the neural network running on the Jetson TX1 so that the camera is always focused on the moving object. Right now the Jetson TX1 gets the video feed from the camera over WiFi, and commands the gimbal hardware over Bluetooth. Once the Jetson is inside the gimbal, however, some of the hardware can likely be directly connected, and [Gabriel] says the ultrasonics may be deleted from the design completely in favor of tracking purely in software. He plans on open sourcing the project, but says he’s got some internal housekeeping to do before he takes the wraps off it.
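To give a feel for the math involved, here’s a hypothetical Python sketch of one tracking step: a detector’s bounding box yields a fine angular error for the pan and tilt axes, with the coarse ultrasonic bearing as a fallback when the detector comes up empty. The function name, frame size, and fields of view are our assumptions, not anything from [Gabriel]’s code:

```python
def aim_error(box, ultrasonic_bearing_deg, frame_w=1280, frame_h=720,
              fov_h_deg=78.0, fov_v_deg=49.0):
    """Pan/tilt error in degrees from a detection box (x, y, w, h).

    Falls back on the coarse ultrasonic bearing when no box is available.
    """
    if box is None:
        return ultrasonic_bearing_deg, 0.0      # rough estimate only
    x, y, w, h = box
    # Offset of the box center from the image center, scaled to degrees.
    err_pan = ((x + w / 2) / frame_w - 0.5) * fov_h_deg
    err_tilt = ((y + h / 2) / frame_h - 0.5) * fov_v_deg
    return err_pan, err_tilt

# Target detected right of and below the center of a 1280x720 frame:
print(aim_error((700, 400, 100, 120), ultrasonic_bearing_deg=12.0))
```

Feeding those errors into the gimbal’s motor controllers at each frame is what keeps the camera glued to the target.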

From bare bones to cushy luxury, scratch-built camera gimbals have become something of a rite of passage for the photography hacker. But with this project, it looks like the bar got set just a bit higher.

Continue reading “Neural Network Gimbal Is Always Watching”

Video Streaming Like Your Raspberry Pi Depended On It

The Raspberry Pi is an incredibly versatile computing platform, particularly when it comes to embedded applications. These little boards are used in all kinds of security and monitoring projects to take still shots over time or record video footage for later review. It’s remarkably easy to do, and there’s a wide variety of tools available to get the job done.

However, if you need live video with as little latency as possible, things get more difficult. I was building a remotely controlled vehicle that uses the cellular data network for communication. Minimizing latency was key to making the vehicle easy to drive. Thus I set sail for the nearest search engine and began researching my problem.

My first approach to the challenge was the venerable VLC Media Player. Initial experiments were sadly fraught with issues. Getting the software to recognize the webcam plugged into my Pi Zero took forever, and when I eventually did get the stream up and running, it was far too laggy to be useful. Streaming over WiFi and waving my hands in front of the camera showed a delay of at least two or three seconds. While I could have possibly optimized it further, I decided to move on and try to find something a little more lightweight.
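As a point of reference, a basic webcam-to-HTTP stream along these lines can be launched from Python (or directly from a shell). The options below are an illustrative guess at a minimal VLC setup, not the exact pipeline used in this project; bitrate and port are placeholder values:

```python
import subprocess

# Have VLC capture the V4L2 webcam, transcode to H.264, and serve an
# MPEG-TS stream over HTTP on port 8160.
cmd = [
    "cvlc", "v4l2:///dev/video0",
    "--sout",
    "#transcode{vcodec=h264,vb=800}:standard{access=http,mux=ts,dst=:8160}",
]
subprocess.run(cmd, check=True)
```

Opening the stream from another machine then shows exactly the kind of multi-second lag described above, much of it likely down to buffering defaults on both ends of the connection.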

Continue reading “Video Streaming Like Your Raspberry Pi Depended On It”

Handheld Gimbal With Off-The-Shelf Parts

For anything involving video capture while moving, most videographers, cinematographers, and camera operators turn to a gimbal. In theory it is a simple machine, needing only three sets of bearings to allow the camera to maintain a constant position despite a shifting, moving platform. In practice it’s much more complicated, and gimbals can easily run into the thousands of dollars. While it’s possible to build one to reduce the extravagant cost, few builds use 100% off-the-shelf parts like [Matt]’s handheld gimbal.

[Matt]’s build was far more involved than bolting some brackets and bearings together, though. Most gimbals for filming are powered, so motors and electronics are required. Not only that, but the entire rig needs to be as balanced as possible to reduce stress on those motors. [Matt] used fishing weights to get everything calibrated, along with an interesting PID setup.
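For the uninitiated: the control loop at the heart of nearly every powered gimbal is a PID controller, with each axis measuring how far the camera has drifted from its setpoint and feeding a correction to the motor. The snippet below is the textbook form of that loop in Python, with placeholder gains; it illustrates the general technique, not [Matt]’s actual firmware:

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        """Return a motor correction from the current angle error."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Keep the camera level: setpoint 0 degrees, IMU reports 5 degrees of tilt,
# control loop running at 100 Hz. Gains here would need tuning on hardware.
pid = PID(kp=1.2, ki=0.1, kd=0.05)
print(pid.update(setpoint=0.0, measured=5.0, dt=0.01))
```

Tuning those three gains against a well-balanced rig is where the fishing weights earn their keep: the better the balance, the less violent the corrections need to be.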

Be sure to check out the video below to see the gimbal in action. After a lot of trial and error, it’s hard to tell the difference between this and a consumer-grade gimbal, and all without the use of a CNC machine or a 3D printer. Of course, if you have access to those kinds of tools, there’s no limit to the types of gimbals you can build.

Continue reading “Handheld Gimbal With Off-The-Shelf Parts”

I Am An Iconoscope

We’d never seen an iconoscope before. And that’s reason enough to watch the quirky, first-person Japanese video of a retired broadcast engineer’s loving restoration. (Embedded below.)

A quick iconoscope primer: it was the first video camera tube, invented in the mid-1920s and used from the mid-1930s to the mid-1940s. It worked by charging up a plate with an array of photosensitive capacitors, taking an exposure by allowing the capacitors to discharge according to the light hitting them, and then reading out the values with a scanning electron beam.
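If it helps to see that charge-and-scan cycle as numbers, here’s a crude cartoon of it in Python. The array size, charge levels, and perfectly linear response are all made up for illustration; the real tube was far messier:

```python
import numpy as np

# Each mosaic element is a tiny capacitor: charged up before the exposure,
# partially discharged by incoming light, then read out in raster order.
rng = np.random.default_rng(0)
scene = rng.random((4, 4))            # incoming light, 0 = dark, 1 = bright

charge = np.ones_like(scene)          # plate charged up before exposure
charge -= scene * 0.8                 # exposure: brighter light, more discharge

for row in charge:                    # readout: the beam scans line by line
    signal = [f"{1.0 - c:.2f}" for c in row]   # video signal ~ lost charge
    print(" ".join(signal))
```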

The video chronicles [Ozaki Yoshio]’s epic rebuild in what looks like the most amazingly well-equipped basement lab we’ve ever seen. As mentioned above, it’s quirky: the iconoscope tube itself does the narrating, ‘my father’ is [Ozaki-san], and ‘my brother’ is another tube, one that [Ozaki] found wrapped up in paper in a hibachi grill! But you don’t even have to speak Japanese to enjoy the frame build and calibration of what is probably the only working iconoscope camera in existence. You’re literally watching an old master at work, and it shows.

Continue reading “I Am An Iconoscope”

Movie Encoded In DNA Is The First Step Toward Datalogging With Living Cells

While DNA is a reasonably good storage medium, it’s not particularly fast, cheap, or convenient to read from or write to.

What if living cells could simplify that by recording useful data into their own DNA for later analysis? At Harvard Medical School, scientists are working towards this goal by using CRISPR to encode and retrieve a short video in bacterial cells.

CRISPR is part of the immune system of many bacteria, and works by storing sequences of viral DNA in a specific location to identify and eliminate viral infections. As a tool for genetic engineering, it’s cheaper and has fewer drawbacks than previous techniques.
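To make ‘a movie in DNA’ concrete: any byte stream can be packed into the four bases at two bits per base. The toy codec below shows that bare-bones idea in Python; the Harvard team’s actual scheme is considerably more elaborate, spreading pixel values across many short synthetic sequences, so treat this as a back-of-the-envelope illustration only:

```python
BASES = "ACGT"  # two bits per nucleotide: 00, 01, 10, 11

def encode(data: bytes) -> str:
    """Pack bytes into a DNA string, four bases per byte."""
    return "".join(
        BASES[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)
    )

def decode(dna: str) -> bytes:
    """Recover the original bytes, four bases per byte."""
    bits = [BASES.index(b) for b in dna]
    return bytes(
        (bits[i] << 6) | (bits[i + 1] << 4) | (bits[i + 2] << 2) | bits[i + 3]
        for i in range(0, len(bits), 4)
    )

frame = b"\x00\xff\x10"          # three pixels of a tiny grayscale frame
dna = encode(frame)
print(dna)                        # AAAATTTTACAA
assert decode(dna) == frame
```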

Besides generating living rickrolls and DMCA violations, what is this good for? Cheap, self-replicating sensors. [Seth Shipman], part of the team of scientists at Harvard, explains a number of possible applications in the interview below. His focus is engineering cells to act as a noninvasive data acquisition tool for studying neurobiology, for example by using engineered neurons to record their own developmental history.

It’s possible to see how this technique could be used more broadly, outside an academic context. Presently, biosensors generally use electrical or fluorescent transducers to relay a detection event. By recording data over time in the DNA of living cells, biosensors could become much cheaper and gain intrinsic datalogging. Possible applications include long-term metabolite (e.g. glucose) monitors, chemical detectors, and quality control.

It’s worth noting that this technique is only at the proof-of-concept stage. The data was written into the bacterial genome and read back out manually by the scientists, with 90% accuracy, demonstrating that if cells can be engineered to record data themselves, accuracy and capacity are already high enough for practical applications.

That being said, if anyone is working on a MEncoder or ffmpeg command line option for this, let us know in the comments.

Continue reading “Movie Encoded In DNA Is The First Step Toward Datalogging With Living Cells”