3D Magnetometer mouse in Processing


[etgalim] works in Solidworks extensively and wanted a more intuitive way of rotating objects onscreen. To do this, he created a mouse that responds to rotation. He put a 3D compass module inside an old mouse and wired it up to an Arduino, which relays the I2C sensor data to the computer. So far, he has a Processing script that uses the mouse to rotate a cube, but eventually he wants to write a Solidworks plugin. The motion is a bit shaky, and we think it would be smoother (and cheaper) if he used gyros like the jedipad. Video after the jump.
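For the curious, here's a minimal sketch of what the Processing side might look like, assuming the Arduino streams comma-separated heading, pitch, and roll values in degrees over serial. The wire format and port settings here are our guesses, not [etgalim]'s:

```processing
import processing.serial.*;

Serial port;
float heading, pitch, roll;

void setup() {
  size(400, 400, P3D);
  // Assumes the Arduino shows up as the first serial port at 9600 baud
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  // Expected format (our assumption): "heading,pitch,roll\n" in degrees
  String[] vals = split(trim(line), ',');
  if (vals.length == 3) {
    heading = radians(float(vals[0]));
    pitch   = radians(float(vals[1]));
    roll    = radians(float(vals[2]));
  }
}

void draw() {
  background(0);
  lights();
  translate(width/2, height/2);
  rotateX(pitch);
  rotateY(heading);
  rotateZ(roll);
  box(100);
}
```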


Barcode scanner in Processing


Reader [Nikolaus] decided that instead of using an existing image-based barcode decoder, he would write his own. Using the Processing language, he created a scanner that parses the black and white pattern when a barcode is centered in the image. His code then decodes that data, checking it against the start character to establish a reference. Currently his scanner supports the three character sets of Code 128 encoding, and he has provided his complete code so that others can extend it as they see fit. He admits that the code is a bit messy due to the lengthy character tables, but it's very straightforward.
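The first step in any scanner like this is turning a row of pixels into bar widths. Here's a rough sketch of that run-length stage in Processing; the function name and threshold are ours for illustration, and mapping the runs onto Code 128's 11-module characters is left out:

```processing
// Scan one pixel row of an image and record the width of each
// alternating black/white run. Decoding the runs into Code 128
// characters would happen in a later step.
int[] measureRuns(PImage img, int y) {
  img.loadPixels();
  IntList runs = new IntList();
  boolean dark = brightness(img.pixels[y * img.width]) < 128;  // crude threshold
  int len = 0;
  for (int x = 0; x < img.width; x++) {
    boolean d = brightness(img.pixels[y * img.width + x]) < 128;
    if (d == dark) {
      len++;
    } else {
      runs.append(len);  // run ended; store its width
      dark = d;
      len = 1;
    }
  }
  runs.append(len);  // final run
  return runs.array();
}
```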

Remote image processing in JavaScript

[Tom] wrote in to tell us about his JavaScript project for motion detection. It ties together two ideas we've talked about recently. The first is doing image processing in-browser using the canvas element, which we've seen employed in captcha breaking. The second is offloading heavy processing to browsers, which we saw recently in the MapReduce implementation. [Tom] is using JavaScript to compare consecutive images to determine if there's any motion. He did this as part of MJPG-Streamer, a program for streaming images from webcams. MJPG-Streamer can run on very limited hardware, but image processing is processor-intensive. Doing the image processing in-browser makes up for this limitation and means that a custom client program doesn't have to be written. You can find the code here and a PDF about the proof of concept.
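The core trick, comparing consecutive frames pixel by pixel against a threshold, is the same in any language. Here's a hedged sketch of the idea in Processing (not [Tom]'s JavaScript), with the threshold values picked arbitrarily:

```processing
import processing.video.*;

Capture cam;
PImage prev;
float threshold = 40;  // per-pixel brightness difference that counts as change

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  if (prev != null) {
    cam.loadPixels();
    prev.loadPixels();
    int changed = 0;
    for (int i = 0; i < cam.pixels.length; i++) {
      if (abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i])) > threshold) {
        changed++;
      }
    }
    // Flag motion if more than 1% of pixels changed (arbitrary cutoff)
    if (changed > cam.pixels.length / 100) {
      fill(255, 0, 0);
      text("motion", 10, 20);
    }
  }
  prev = cam.get();  // keep a copy for the next frame
}
```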

Laughing Man in Processing


The Laughing Man is the antagonist from the anime series Ghost in the Shell: Stand Alone Complex. During each of his public appearances in the series, he manages to hack all video feeds and cyborg eyes in the vicinity, obscuring his face with his signature logo.

[Ben Kurtz] had been watching the series recently and realized he could put together a similar effect using Processing. The interesting bit, and what makes this more fun than a simple demo, is that he's using OpenCV, an open source computer vision library. [Ben] uses it to handle the face detection in Processing and then overlays the logo.
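The general shape of such a sketch looks something like this. Note this is our own re-sketch using the gab.opencv Processing library rather than [Ben]'s code, and the image filename is a placeholder:

```processing
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
PImage logo;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  // stock Haar face detector
  logo = loadImage("laughing_man.png");  // any overlay image in the data folder
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  opencv.loadImage(cam);
  // Paste the logo over every detected face, scaled up a bit to cover it
  for (Rectangle face : opencv.detect()) {
    image(logo, face.x - face.width/4, face.y - face.height/4,
          face.width * 1.5, face.height * 1.5);
  }
}
```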

[Ben]'s full sketch is only 100 lines, and we wonder what other fun tricks could be employed. Here's a Hack a Day skull you can swap in for the logo.

[thanks dakami]

Processing 1.0

Processing, the open source programming language designed for artists and other creative types, finally went 1.0. Processing has inspired numerous outpourings of creativity and beauty, from interactive art installations to sound sculptures. Improvements in this release include OpenGL anti-aliasing, an extensible Tools menu, and the XML library being included by default. You can read up on the changes or download Processing and start playing with it yourself.

[via Create Digital Motion]

Wiimote head tracking in Processing


[Manuel] has been playing around with [Johnny Lee]'s Wiimote head tracking code. He's posted a preliminary port outlining the code in the Processing environment. It relies on darwiinremoteOSC, so you won't see this working outside of OS X, but it should help you out if you're trying to do this in Processing on another platform.
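On the Processing side, receiving the Wiimote data comes down to listening for OSC messages, typically with the oscP5 library. The sketch below is our illustration only; the port number and the /wii/irdata address pattern are assumptions, so check what darwiinremoteOSC actually sends and adjust to match:

```processing
import oscP5.*;

OscP5 osc;
float irX, irY;  // first IR dot position, assumed normalized 0..1

void setup() {
  size(640, 480);
  // Port 5600 is an assumption; match it to darwiinremoteOSC's settings
  osc = new OscP5(this, 5600);
}

void oscEvent(OscMessage msg) {
  // Address pattern and argument layout are our guesses
  if (msg.checkAddrPattern("/wii/irdata")) {
    irX = msg.get(0).floatValue();
    irY = msg.get(1).floatValue();
  }
}

void draw() {
  background(0);
  // Shift the marker opposite the tracked dot for a crude head-coupled effect
  ellipse((1 - irX) * width, irY * height, 20, 20);
}
```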

[via Create Digital Motion]

[photo: nicolasnova]

Audience Pong and RC Trash bins: An intro to TEI

This past weekend, I had the chance to visit this year's Tangible, Embedded, and Embodied Interaction Conference (TEI) and catch up with a number of designers in the human-computer interaction space. The conference brings together a unique collection of artists, computer scientists, industrial designers, and grad students to discuss computer interactivity in today's world. Over the span of five days (two for workshops and three for paper presentations), I saw not only a number of today's current models for computer interactivity (haptics, physical computing with sensors) but also a number of excellent projects: some developed just to prove a concept, others to present a well-refined system or workflow. It's hard to believe, but the computer mouse has sat beneath our fingertips since 1963; this conference is the first place I would start looking for new ways of "mousing" with tomorrow's technology.

Over the next few days, I’ll be shedding more light on a few projects from TEI. (Some have already seen the light of day.) For this first post, though, I decided to highlight two projects tied directly to the conference culture itself.

Before each lunch break, the audience was invited to take part in a game of audience-driven “Collective” Pong. With some image processing running in the background, players held up pink cards to increase the height of their paddle, albeit by a minuscule amount each. Which paddle an audience member drove was determined by where their marker appeared on the screen (left or right). This trick is a respectful nod back to its original performance by [Loren Carpenter] at SIGGRAPH in 1991. With each audience member performing their own visual servoing to bring the paddle to the right height, we were able to give the ball a good whack for 15 minutes while lunch was being prepared.
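A crude version of the card-counting side is easy to sketch in Processing: threshold the camera image for pink-ish pixels, tally them per half of the frame, and map the tallies to paddle heights. The color test and scaling below are arbitrary placeholders, not the conference's actual code:

```processing
import processing.video.*;

Capture cam;
float leftPaddle, rightPaddle;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  int leftCount = 0, rightCount = 0;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      color c = cam.pixels[y * width + x];
      // Crude "pink card" test: strong red and blue, weaker green
      if (red(c) > 180 && blue(c) > 120 && green(c) < 140) {
        if (x < width / 2) leftCount++;
        else rightCount++;
      }
    }
  }
  // Map pixel tallies to paddle heights (scale factor is arbitrary)
  leftPaddle  = constrain(map(leftCount,  0, width * height / 8, 0, height), 0, height);
  rightPaddle = constrain(map(rightCount, 0, width * height / 8, 0, height), 0, height);
  image(cam, 0, 0);
  fill(255);
  rect(10, height - leftPaddle, 10, leftPaddle);
  rect(width - 20, height - rightPaddle, 10, rightPaddle);
}
```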


The conference's interactivity also spread far beyond the main conference room. During our lunch breaks, we had the pleasure of discarding our scraps in a remotely operated trash bin. Happily accepting our refuse, the bin did a quick jiggle whenever someone placed an item inside. Upon closer inspection, a Roomba base and a Logitech camera gave its master a way of navigating the environment from some remote secret lair.

Overall, the conference was an excellent opportunity to explore the design space of tinkerers constantly re-imagining how we interact with today's computers and data. Stay tuned for more projects on their way. If you're curious about the papers presented or the layout of the conference, have a look at this year's website.
