3D magnetometer mouse in Processing


[etgalim] works extensively in SolidWorks and wanted a more intuitive way of rotating objects onscreen. To do this, he created a mouse that responds to rotation. He put a 3D compass module inside an old mouse and wired it up to an Arduino, which relays the I2C sensor data to the computer. So far, he has a Processing script that uses the mouse to rotate a cube, but eventually he wants to write a SolidWorks plugin. It's a bit shaky, and we think it would be smoother (and cheaper) if he used gyros like the jedipad. Video after the jump.
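For a rough idea of what the Processing side of a setup like this can look like, here is a minimal sketch that assumes the Arduino streams heading, pitch, and roll as comma-separated degrees over serial; the port index, baud rate, and data format are assumptions, not details from [etgalim]'s code.

// Minimal sketch assuming the Arduino prints "heading,pitch,roll\n" in degrees.
// The serial port index and baud rate below are assumptions.
import processing.serial.*;

Serial port;
float heading, pitch, roll;

void setup() {
  size(400, 400, P3D);
  port = new Serial(this, Serial.list()[0], 115200);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] vals = split(trim(line), ',');
  if (vals.length == 3) {
    heading = float(vals[0]);
    pitch   = float(vals[1]);
    roll    = float(vals[2]);
  }
}

void draw() {
  background(0);
  lights();
  translate(width/2, height/2);
  // apply the sensor orientation to the cube
  rotateY(radians(heading));
  rotateX(radians(pitch));
  rotateZ(radians(roll));
  fill(180);
  box(120);
}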


Barcode scanner in Processing


Reader [Nikolaus] decided that instead of using an existing image-based barcode decoder, he would write his own. Using the Processing language, he created a scanner that reads the black and white pattern when a barcode is centered in the image. His code then parses that data, comparing it with the initializing character to establish a reference. Currently his scanner supports the three character sets of Code 128 encoding, and he has provided his complete code so that others can add to it as they see fit. He admits that the code is a bit messy due to the lengthy character tables, but it is otherwise very straightforward.
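The core scan step, sampling a single row of pixels and turning runs of dark and light into widths that the Code 128 tables can then be matched against, might look roughly like this; it's a simplified illustration (hypothetical test image name included), not [Nikolaus]'s actual code.

// Simplified illustration of the scan step: walk the middle row of an image,
// threshold each pixel, and record the width of each black/white run.
// Code 128 decoding then maps groups of these widths back to characters.
void setup() {
  PImage img = loadImage("barcode.png");  // hypothetical test image
  println(scanRow(img));
}

int[] scanRow(PImage img) {
  img.loadPixels();
  int y = img.height / 2;                 // assume the barcode crosses the center row
  IntList runs = new IntList();
  boolean dark = brightness(img.pixels[y * img.width]) < 128;
  int runLength = 0;
  for (int x = 0; x < img.width; x++) {
    boolean d = brightness(img.pixels[y * img.width + x]) < 128;
    if (d == dark) {
      runLength++;
    } else {
      runs.append(runLength);
      dark = d;
      runLength = 1;
    }
  }
  runs.append(runLength);
  return runs.array();
}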

Remote image processing in JavaScript

[Tom] wrote in to tell us about his JavaScript project for motion detection. It ties together two ideas we've talked about recently. The first is doing image processing in-browser using the canvas element, which we've seen employed in captcha breaking. The second is offloading heavy processing to browsers, which we saw recently in the MapReduce implementation. [Tom] is using JavaScript to compare consecutive images and determine whether there's any motion. He did this as part of MJPG-Streamer, a program for streaming images from webcams. MJPG-Streamer can run on very limited hardware, but image processing can be very intensive; doing it in-browser works around that limitation and means a custom client program doesn't have to be written. You can find the code here and a PDF about the proof of concept.
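[Tom]'s implementation is JavaScript running in the browser, but the frame-differencing idea itself is simple; here is the same concept sketched in Processing, comparing each webcam frame to the previous one and counting pixels that changed. The thresholds below are arbitrary.

// Frame-differencing sketch: compare each frame to the previous one and
// count pixels whose brightness changed noticeably.
import processing.video.*;

Capture cam;
PImage prev;
int threshold = 40;      // per-pixel brightness change that counts as "motion" (arbitrary)

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  prev = createImage(640, 480, RGB);
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  prev.loadPixels();
  int changed = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    if (abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i])) > threshold) changed++;
  }
  image(cam, 0, 0);
  if (changed > cam.pixels.length / 100) {   // more than 1% of pixels changed
    text("motion", 10, 20);
  }
  // remember this frame for the next comparison
  prev.copy(cam, 0, 0, width, height, 0, 0, width, height);
}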

Laughing Man in Processing


The Laughing Man is the antagonist of the anime series Ghost in the Shell: Stand Alone Complex. During each of his public appearances in the series, he manages to hack all video feeds and cyborg eyes in the vicinity, obscuring his face with his signature logo.

[Ben Kurtz] had been watching the series recently and realized he could put together a similar effect using Processing. The interesting bit, and what makes this more fun than a simple demo, is that he's using OpenCV, an open source computer vision library. [Ben] uses it to handle the face detection in Processing and then overlays the logo on each face it finds.
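A minimal sketch of the same trick, assuming the gab.opencv Processing library and a standard webcam ([Ben]'s original may well use a different OpenCV wrapper, and the logo file name is a placeholder), could look like this:

// Detect faces with OpenCV and cover each one with the Laughing Man logo.
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
PImage logo;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  logo = loadImage("laughing_man.png");   // placeholder logo image
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  opencv.loadImage(cam);
  for (Rectangle face : opencv.detect()) {
    // draw the logo slightly larger than the detected face so it fully covers it
    float s = face.width * 1.4;
    image(logo, face.x + face.width/2 - s/2, face.y + face.height/2 - s/2, s, s);
  }
}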

It’s only 100 lines and we wonder what other fun tricks could be employed. Here’s a Hack a Day skull you can swap in for the logo.

[thanks dakami]

Processing 1.0

Processing, the open source programming language designed for artists and other creative types, has finally reached 1.0. Processing has inspired numerous outpourings of creativity and beauty, from interactive art installations to sound sculptures. Improvements include OpenGL anti-aliasing, an extensible Tools menu, and the XML library being included by default. You can read up on the changes or download Processing and start playing with it yourself.

[via Create Digital Motion]

Wiimote head tracking in Processing

[Manuel] has been playing around with [Johnny Lee]'s Wiimote head tracking code. He's posted a preliminary port outlining the code in the Processing environment. It relies on darwiinremoteOSC, so you won't see this outside of OS X, but it should help you out if you're trying to do this in Processing on another platform.
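For anyone curious what the receiving end looks like, here is a skeletal OSC listener using the oscP5 library; the port number and the /wii/irdata address pattern are assumptions about darwiinremoteOSC's output, not taken from [Manuel]'s code.

// Skeletal OSC receiver. The listening port and the "/wii/irdata" address
// pattern below are assumptions about what darwiinremoteOSC sends.
import oscP5.*;

OscP5 osc;

void setup() {
  size(400, 400);
  osc = new OscP5(this, 5600);   // assumed port
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/wii/irdata")) {
    // first IR point's coordinates, which head tracking uses to place the camera
    float x = msg.get(0).floatValue();
    float y = msg.get(1).floatValue();
    println(x + ", " + y);
  }
}

void draw() {
  background(0);
}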

[via Create Digital Motion]

[photo: nicolasnova]

Wearable flames with fur and LED strips


[Finchronicity] over on Hackaday Projects has made a pretty awesome furry LED vest to keep him warm and well lit at this year's Burning Man. He is using a Teensy 3.0 that drives strips of 470 WS2811 LEDs.

The vertically aligned strips run a continuous sequence that reaches up to 31 frames per second using precompiled animations. The effects, rendered in Processing or video mapped, are captured frame by frame and stored as raw color data on an SD card. Playback uses the NeoPixel library to control the strips. The high-resolution LEDs, the video-mapped fire, and the long-pile fur combine to create one of the nicest flame effects we have seen on clothing.
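The capture step, grabbing each rendered frame and appending its pixel colors as raw bytes the Teensy can later stream straight to the strips, might look something like this in Processing; the LED count, pixel mapping, and file name here are placeholders rather than details from [Finchronicity]'s build.

// Hypothetical frame exporter: render an effect, then append each frame's LED
// colors as raw R,G,B bytes to a file destined for the SD card.
import java.io.FileOutputStream;
import java.io.IOException;

int numLeds = 470;        // placeholder LED count
FileOutputStream out;

void setup() {
  size(470, 100);
  colorMode(HSB, 255);
  try {
    out = new FileOutputStream(sketchPath("animation.raw"), true);  // append mode
  } catch (IOException e) {
    exit();
  }
}

void draw() {
  // placeholder effect: a scrolling color gradient
  for (int x = 0; x < width; x++) {
    stroke((x + frameCount) % 255, 255, 255);
    line(x, 0, x, height);
  }
  loadPixels();
  byte[] frame = new byte[numLeds * 3];
  for (int i = 0; i < numLeds; i++) {
    color c = pixels[i];               // naive mapping: LED i = pixel i of the top row
    frame[i*3]     = (byte) red(c);
    frame[i*3 + 1] = (byte) green(c);
    frame[i*3 + 2] = (byte) blue(c);
  }
  try {
    out.write(frame);                  // append this frame's raw color data
  } catch (IOException e) {
    exit();
  }
}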

We’ve also seen the Teensy 3.0 and WS2811 LEDs used as a popular combination for building huge displays, a 23ft tall pyramid, and more recently in the RFID jacket at Make Fashion 2014. Have you made or seen a great Teensy/WS2811 project you would like to share with us? If so, let us know the details in the comments below.
