Objectifier: Director Of Domestic Technology

[Bjørn Karmann]’s Objectifier is a device that lets you control domestic objects by teaching them to respond to unique actions or behaviours, using machine learning and computer vision. The Objectifier can turn on a table lamp when you open a book, and turn it off when you close the book. Switch on the coffee maker when you place the mug next to the pot, and switch it off when the mug is removed. Turn on the belt sander when you put on the safety glasses, and stop it when you remove the glasses. Charge the phone when you put a banana in front of it, and stop charging it when you place an apple in front of it. You get the drift — the possibilities are endless. Hopefully, sometime in the (near) future, we will be able to interact with inanimate objects in this fashion, getting them to learn from our actions rather than us having to learn how to program them.

The device uses computer vision and a neural network to learn complex behaviours associated with your trigger commands. A training mode, using a phone app, lets you teach it the On and Off actions. Some actions, such as telling an open book from a closed one, take more training effort, but eventually the neural network does a fairly good job.

The current version is the sixth prototype in the series and [Bjørn] has put in quite a lot of work refining the project at each stage. In its latest avatar, the device hardware consists of a Pi Zero, a Raspberry Pi camera module, an SMPS power brick, a relay block to switch the output, a 230 V plug for input power and a 230 V socket outlet for the final output. All the parts are put together rather neatly using laser-cut acrylic support pieces, and then further enclosed in a nice wooden enclosure.

On the software side, all of the machine learning is handled by “Wekinator” — free, open-source software for building musical instruments, gestural game controllers, computer vision or computer listening systems using machine learning. The computer vision is handled via Processing. All the code is wrapped up with openFrameworks, with ml4a providing apps for working with machine learning.
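[Bjørn] hasn’t published his code, but Wekinator’s usual workflow is to receive its inputs as OSC messages, by default a list of floats sent to /wek/inputs on port 6448. A minimal openFrameworks-style sketch of that input side might look like the following; the two feature values are placeholders, not the Objectifier’s actual vision features.

```cpp
// Hypothetical sketch: send a two-value feature vector to Wekinator over OSC.
// Uses the ofxOsc addon and Wekinator's default input address and port.
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscSender wekinator;

    void setup() {
        wekinator.setup("localhost", 6448);   // Wekinator listens here by default
    }

    void update() {
        // Placeholder features; the real project derives these from the camera image.
        float feature1 = ofRandom(0, 1);
        float feature2 = ofRandom(0, 1);

        ofxOscMessage m;
        m.setAddress("/wek/inputs");
        m.addFloatArg(feature1);
        m.addFloatArg(feature2);
        wekinator.sendMessage(m, false);
    }
};

int main() {
    ofSetupOpenGL(320, 240, OF_WINDOW);
    ofRunApp(new ofApp());
}
```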

The hardware and software details above are what we could deduce by looking at the pictures and information in his blog post. There isn’t much detail about the hardware, but the pictures tell us most of what we need to know. The software isn’t made available, but maybe this could spur some of you hackers into action to build another version of the Objectifier. Check out the video after the break, showing humans teaching the Objectifier its tricks.

Continue reading “Objectifier: Director Of Domestic Technology”

World’s Tiniest Violin Uses Radar And Machine Learning

The folks at [Design I/O] have come up with a way for you to play the world’s tiniest violin by rubbing your fingers together and actually have it play a violin sound. For those who don’t know: when you want to express mock sympathy for someone’s complaints, you can rub your thumb and index finger together and say “You hear that? It’s the world’s smallest violin and it’s playing just for you.” Except that now they can actually hear the violin, while your gestures control the volume and playback.

[Design I/O] combined a few technologies to accomplish this. The first is Google’s Project Soli, a tiny radar on a chip. Project Soli’s goal is to do away with physical controls by using a miniature radar for doing touchless gesture interactions. Sliding your thumb across the side of your outstretched index finger, for example, can be interpreted as moving a slider to change the numerical value of something, perhaps turning up the air conditioner in your car. Check out Google’s cool demo video of their radar and gestures below.

Project Soli’s radar is the input side for the other intriguing technology here: the Wekinator, free, open-source machine-learning software intended for artists and musicians. The examples on their website paint an exciting picture. You give Wekinator example inputs and outputs, and then tell it to train its model.

The output side in this case is violin music. The input is whatever the radar detects. Wekinator does the heavy lifting for you: just give it inputs such as radar-monitored finger movements, and it’ll learn your chosen gestures and produce the appropriately trained output.

[Design I/O] is likely doing more than just using Wekinator’s front end, as they’re also using openFrameworks, an open-source C++ toolkit. Also interesting is Wekinator’s use of the Open Sound Control (OSC) protocol to receive its inputs and send its outputs over the network. You can see [Design I/O]’s end result demonstrated in the video below.
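On the output side, Wekinator sends its trained values back as OSC messages too, by default to /wek/outputs on port 12000. A bare-bones openFrameworks receiver might look something like this; the incoming value is simply treated as a volume here, and none of [Design I/O]’s actual violin synthesis is shown.

```cpp
// Hypothetical sketch: read Wekinator's trained output over OSC (ofxOsc addon)
// and stash it as a "volume" value. Address and port are Wekinator's defaults.
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    float volume = 0;

    void setup() {
        receiver.setup(12000);   // Wekinator sends outputs here by default
    }

    void update() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/wek/outputs" && m.getNumArgs() > 0) {
                volume = m.getArgAsFloat(0);   // first trained output
            }
        }
    }

    void draw() {
        ofDrawBitmapString("violin volume: " + ofToString(volume), 20, 20);
    }
};

int main() {
    ofSetupOpenGL(320, 240, OF_WINDOW);
    ofRunApp(new ofApp());
}
```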

Continue reading “World’s Tiniest Violin Uses Radar And Machine Learning”

Texel: Art Tracks You, Tracks Time

French robot-artist [Lyes Hammadouche] tipped us off to one of his latest works: a collaboration with [Ianis Lallemand] called Texel. A “texel” is apparently a time-pixel, and the piece consists of eight servo-controlled hourglasses that can tip themselves over in response to viewers walking in front of them. Besides making graceful wavelike patterns when people walk by, they also roughly record the amount of time that people have spent looking at the piece — the hourglasses sit straight up when nobody’s around, resulting in a discrete spatial representation of people’s attention to the piece: texels.

We get jealous when we see artists playing around with toys like these. Texel uses LIDAR scanners (Kalman-filtered, naturally) to track the viewers, along with openFrameworks, OpenCV, and ROS. In short, everything you’d need to build a complex, human-interactive piece like this using completely open-source tools from beginning to end. Respect!
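The installation’s source isn’t shared in the post, but the kind of smoothing a Kalman filter gives you over jittery LIDAR returns is easy to sketch with OpenCV’s built-in cv::KalmanFilter. The constant-velocity model and noise values below are purely illustrative, not Texel’s actual tuning.

```cpp
// Illustrative sketch: smooth noisy 2D position measurements (e.g. a viewer
// seen by a LIDAR scanner) with OpenCV's cv::KalmanFilter.
// State is [x, y, vx, vy]; measurements are [x, y].
#include <opencv2/core.hpp>
#include <opencv2/video/tracking.hpp>
#include <iostream>

int main() {
    cv::KalmanFilter kf(4, 2, 0);
    const float dt = 0.1f;   // assumed time between scans, in seconds

    // Constant-velocity motion model.
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, dt, 0,
        0, 1, 0, dt,
        0, 0, 1,  0,
        0, 0, 0,  1);
    cv::setIdentity(kf.measurementMatrix);                         // we observe x and y
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));

    // Fake, noisy positions of someone walking past the piece.
    const float raw[][2] = {{0.0f, 2.1f}, {0.2f, 1.9f}, {0.5f, 2.2f}, {0.7f, 2.0f}};

    for (const auto &p : raw) {
        kf.predict();                                    // project the state forward
        cv::Mat meas = (cv::Mat_<float>(2, 1) << p[0], p[1]);
        cv::Mat est = kf.correct(meas);                  // blend in the measurement
        std::cout << "smoothed: " << est.at<float>(0) << ", "
                  << est.at<float>(1) << "\n";
    }
    return 0;
}
```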

Continue reading “Texel: Art Tracks You, Tracks Time”

7-Segment Display Matrix Visualizes More Than Numbers

You can pretty much tell that this is an outstretched hand shown on a large grid of 7-segment displays. But the only reason you have to look twice is that it’s a still photo. When you see the video below it’s more than obvious what you’re looking at… partly because the device is being used as an electronic mirror.

In total there are 192 digits in the display. To make things easier, four-digit modules were used, although we still couldn’t resist showing you the well-organized nightmare that is the wiring scheme. Each module is driven by its own discrete Arduino (driving 28 LEDs, since the decimal points apparently aren’t connected). All 48 Arduino boards receive commands from a Raspberry Pi running openFrameworks to generate the animations.
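The post doesn’t spell out how each Arduino actually drives its 28 segments, but a common approach for a four-digit module is to multiplex: seven segment lines plus four digit selects. Here’s a hypothetical sketch along those lines, with a made-up 4-byte serial frame standing in for whatever protocol the Raspberry Pi really uses.

```cpp
// Hypothetical Arduino sketch: accept a 4-byte frame over serial (one segment
// bitmask per digit) and multiplex it onto a common-cathode four-digit module.
// Pin numbers and the frame format are assumptions, not the build's actual scheme.

const uint8_t SEG_PINS[7]   = {2, 3, 4, 5, 6, 7, 8};   // segments a..g
const uint8_t DIGIT_PINS[4] = {9, 10, 11, 12};         // digit-select lines

uint8_t frame[4] = {0, 0, 0, 0};   // bit 0 = segment a, bit 6 = segment g

void setup() {
  Serial.begin(115200);
  for (uint8_t i = 0; i < 7; i++) pinMode(SEG_PINS[i], OUTPUT);
  for (uint8_t i = 0; i < 4; i++) pinMode(DIGIT_PINS[i], OUTPUT);
}

void loop() {
  // Latch a new frame whenever the Pi has sent a full one.
  if (Serial.available() >= 4) {
    Serial.readBytes((char *)frame, 4);
  }

  // Scan the four digits fast enough that the eye sees a steady image.
  for (uint8_t d = 0; d < 4; d++) {
    for (uint8_t s = 0; s < 7; s++) {
      digitalWrite(SEG_PINS[s], (frame[d] >> s) & 1 ? HIGH : LOW);
    }
    digitalWrite(DIGIT_PINS[d], HIGH);   // enable this digit
    delayMicroseconds(2000);
    digitalWrite(DIGIT_PINS[d], LOW);    // blank before moving on
  }
}
```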

Now of course the project was well under way before [Peter] discovered a similar display from more than a year ago. But we’re glad that didn’t stop them from forging ahead and even building on the idea. They added a camera to the display’s frame which lets it mirror back whatever is in front of it.

What popped into our minds was one of the recent entries for the Trinket contest.

Continue reading “7-Segment Display Matrix Visualizes More Than Numbers”

Glockentar: A Guitar + Glockenspiel Mashup

This unique electronic instrument combines a chopped-up guitar and a hacked-apart glockenspiel with an Arduino. [Aaron]’s Glockentar consists of guitar hardware and glockenspiel keys mounted to a wood body. Solenoids placed above the keys actuate metal rods to play a note.

Under the hood, an Arduino connects the pieces. The conductive pick closes a circuit, which the Arduino reads as a digital input. That actuates the corresponding solenoid to strike the glockenspiel key, and sends a character to a computer over serial.
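A hypothetical sketch of that glue logic might look like the following; the pin assignments, characters, and strike timing are guesses, and [Aaron]’s real source is in the write-up linked below.

```cpp
// Hypothetical Glockentar glue logic: read string contacts closed by the
// conductive pick, fire the matching solenoid, and report the note over serial.
// Pin numbers, characters, and timings are illustrative only.

const uint8_t STRING_PINS[4]   = {2, 3, 4, 5};     // pick pulls these to ground
const uint8_t SOLENOID_PINS[4] = {8, 9, 10, 11};   // one striker per glockenspiel key
const char    NOTE_CHARS[4]    = {'a', 'b', 'c', 'd'};   // sent to the computer for visuals

void setup() {
  Serial.begin(9600);
  for (uint8_t i = 0; i < 4; i++) {
    pinMode(STRING_PINS[i], INPUT_PULLUP);   // contact closes the circuit to ground
    pinMode(SOLENOID_PINS[i], OUTPUT);
  }
}

void loop() {
  for (uint8_t i = 0; i < 4; i++) {
    if (digitalRead(STRING_PINS[i]) == LOW) {   // string touched by the pick
      digitalWrite(SOLENOID_PINS[i], HIGH);     // fire the striker...
      Serial.write(NOTE_CHARS[i]);              // ...and tell the computer which string
      delay(30);                                // long enough for a clean strike
      digitalWrite(SOLENOID_PINS[i], LOW);
      delay(50);                                // crude re-trigger guard
    }
  }
}
```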

On the computer, an openFrameworks-based program creates lighting that is projected onto each string. MadMapper handles the projection mapping, aligning the openFrameworks output with each string, and video is passed between the applications using the Syphon framework.

[Aaron] has provided a write-up that goes into detail, including the Arduino and openFrameworks source for the project. There’s also a video overview and demo of the Glockentar after the break.

Continue reading “Glockentar: A Guitar + Glockenspiel Mashup”

Hackaday Links: February 12, 2012

This is why digital picture frames were invented

[Petros] sent in this video of his visualization of Van Gogh’s Starry Night. He did this with openFrameworks and also made a version that reacts to sound. Is anyone else reminded of that one scene in Vincent and the Doctor?

A boat’s a boat, but a mystery box can be anything

[Rick] wanted to build a lock pick training station for the Eugene Maker Space, but he needed a way to make it interesting. What could be better than a mystery box? When you pick the deadbolt, open the box up and you’ll get a prize. Just make sure you put something of yours in the box for the next person.

3D printer prints its own case

Because the 3D printer community isn’t segmented enough, [Sublime] decided to design a new one. Here’s where it gets cool: the Tantillus can print its own case, and can ‘daisy chain’ to another Tantillus so only one set of electronics is needed. Interesting ideas afoot.

A diamond says I love you, but a duct tape rose says I’ll fix that for you

Valentine’s Day is coming up, so if you haven’t already made dinner reservations, you’re probably up the creek. How about making a duct tape rose for that special person in your life? Bonus: a dozen costs $3, and they won’t die in a week.

Using keypads over serial or SPI

[Leniwiec] sent in a tutorial on connecting keypads to a microcontroller with a serial or SPI interface. If you want to build a calculator, this is your chance. We’d use this for an Apollo Guidance Computer, though.

Get Digital Plastic Surgery Thanks To OpenFrameworks And Some Addons

[Kyle McDonald] is trying out a new look, at least in the digital world, with the help of some openFrameworks video plugins. He’s working with [Arturo Castro] to make real-time facial substitution as realistic as possible. [Arturo’s] own video takes a different approach to the shading and color of the facial alterations, which makes them a bit less realistic than what [Kyle] was able to accomplish (see that clip after the break).

The setup depends on some facial tracking software developed by [Jason Saragih]. That package is wrapped in ofxFaceTracker (already linked at the top of this article), which makes it play nicely with openFrameworks. From there, it’s just a matter of image processing. If you think you’re up to the challenge, grab your own copies of the source code and get to work. We’re shocked by how real this looks, even when [Kyle] grabs his cheeks and stretches them out. If someone can fix some of the artifacts around the edges of the sampled faces, this would be ready to use for video conferencing.
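To get a feel for the tracking step everything else builds on, here’s a minimal sketch using ofxFaceTracker (with ofxCv handling the pixel conversion). It only finds and draws the face mesh; all the substitution and shading magic from the actual project is left out, and the calls are based on the addon’s bundled examples.

```cpp
// Minimal face-tracking sketch with ofxFaceTracker + ofxCv: draw the live
// camera image and overlay the tracked face mesh. No face substitution here.
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;

    void setup() {
        cam.setup(640, 480);
        tracker.setup();
    }

    void update() {
        cam.update();
        if (cam.isFrameNew()) {
            tracker.update(ofxCv::toCv(cam));   // feed the new frame to the tracker
        }
    }

    void draw() {
        cam.draw(0, 0);
        if (tracker.getFound()) {
            tracker.getImageMesh().drawWireframe();   // facial landmarks as a mesh
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```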

It kind of makes us think of technology seen in The Running Man.

Continue reading “Get Digital Plastic Surgery Thanks To OpenFrameworks And Some Addons”