If there’s one thing that makes laser cutters a little difficult to use, it’s that it’s hard for a person to interact with one directly without a clunky computer in the middle of everything. Granted, that laser is a little dangerous, but it would be nice if there were a way to use a laser cutter without having to deal with a computer. Luckily, [Anirudh] and team have been working on solving this problem, creating a laser cutter that can interact directly with its user.
The laser cutter is tied to a vision system that watches for a number of cues. As we’ve featured before, this particular laser cutter can “see” pen strokes and will cut along them (once all fingers are away from the cutting area, of course). The update to this system is that a user can now import a drawing from a smartphone and manipulate it with a set of physical tokens that the camera watches. One token changes the location of the cut, and the other changes the scale. This extends the laser cutter from simply cutting along pen strokes to cutting around any user-manipulated image, all without interacting directly with a computer. Be sure to check out the video after the break for a demonstration of how this works.
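The token logic described above amounts to an affine transform on the imported drawing’s cut path. Here’s a minimal sketch of that idea; the function name `transform_path` and the token-to-parameter mapping are our own illustration, not the team’s actual code:

```python
# Hypothetical sketch: the camera reports the location token's position and a
# scale factor derived from the scale token; the drawing's cut path is mapped
# accordingly before being sent to the laser.

def transform_path(path, location_token, scale_factor):
    """Translate a drawing's cut path to the location token and scale it.

    path: list of (x, y) points in the drawing's own coordinates.
    location_token: (x, y) position of the location token on the work surface.
    scale_factor: multiplier derived from the scale token.
    """
    lx, ly = location_token
    return [(lx + x * scale_factor, ly + y * scale_factor) for x, y in path]

# A square outline, moved to (50, 30) and doubled in size:
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(transform_path(square, (50, 30), 2.0))
# [(50.0, 30.0), (70.0, 30.0), (70.0, 50.0), (50.0, 50.0)]
```

Moving either token just changes `location_token` or `scale_factor` on the next camera frame, so the cut preview follows the tokens in real time.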
Making computers interact with physical objects is a favorite of the HCI gurus out there, but these builds usually take the form of image recognition of barcodes or colors. Of course there are new near field communication builds coming down the pipe, but [Andrea Bianchi] has figured out an easier way to provide a physical bridge between computer and user. He’s using magnets to interact with a tablet, and his idea opens up a lot of ideas for future tangible interfaces.
Many tablets currently on the market have a very high-resolution, low-latency magnetometer meant for geomagnetic field detection. Yes, it’s basically a compass, but Android allows for the detection of magnets and conveniently provides the orientation and magnitude of magnets around the tablet.
[Andrea] came up with a few different interfaces using magnets. The first is just magnets of varying strengths embedded in some polymer clay. When these colorful magnetic cubes are placed on the tablet, [Andrea]’s app is able to differentiate between small, medium, and large magnets.
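Differentiating magnet sizes can come down to bucketing the field magnitude reported by the magnetometer. A rough sketch of that classification follows; the threshold values are made up for illustration, and a real app would calibrate them against its actual magnets:

```python
import math

# Hypothetical calibration thresholds, in microtesla.
SMALL_MAX = 150.0
MEDIUM_MAX = 400.0

def classify_magnet(x, y, z):
    """Bucket one magnetometer sample (field components in microtesla)."""
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude < SMALL_MAX:
        return "small"
    elif magnitude < MEDIUM_MAX:
        return "medium"
    return "large"

print(classify_magnet(30.0, 40.0, 0.0))      # magnitude 50 -> "small"
print(classify_magnet(200.0, 200.0, 100.0))  # magnitude 300 -> "medium"
```

In practice you’d also want to subtract the ambient geomagnetic field before thresholding, since the sensor always sees Earth’s field on top of the token’s.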
There are a few more things [Andrea]’s app can do; by placing two magnets on an ‘arrow’ token, the app can detect the direction in which the arrow is pointing. It’s a very cool project that borders on genius with its simplicity.
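With two magnets on the token, the combined field measured by the tablet has a dominant in-plane component along the arrow’s axis, so one way to recover a heading is an `atan2` on the x/y field components. This is our own illustration of the idea, not [Andrea]’s actual algorithm:

```python
import math

def arrow_heading_degrees(bx, by):
    """Heading of the arrow token, from the in-plane field components."""
    return math.degrees(math.atan2(by, bx)) % 360

print(arrow_heading_degrees(0.0, 120.0))  # field along +y -> 90.0
print(arrow_heading_degrees(100.0, 0.0))  # field along +x -> 0.0
```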
You can check out [Andrea]’s demo video after the break.
Our fascination with multitouch is fairly well known, but it extends even further to cover all sorts of man-machine interaction. Embedded above is a tech demo of g-speak, a spatial operating environment. The user combines gestures and spatial location to interact with on-screen objects. If it seems familiar, it’s because one of the company’s founders advised on Minority Report. We doubt all this hand waving is going to catch on very quickly, though. Our bet is on someone developing a multitouch Cintiq-style device for people to use as a secondary monitor. It would bridge the gap between our standard 2D interactions and gestures without making a full leap to 3D metaphors.
[Alpay Kasal] of Lit Studios and [Sam Ewen] created this patent-pending interactive mirror after being inspired by dielectric glass mirrors with built-in LCD panels, and wanting to add a human touch. The end results look like a lot of fun. You can draw on the mirror and play games. According to [Kasal], mouse emulation is essential. The installation features proximity sensors and gesturing. Any game can be set up on it, which makes the possibilities endless… except these are the same people who built LaserGames, so expect no further documentation about how it works.
Opto-Isolator is an interesting art installation that was on display at the Bitforms Gallery in NYC. This single movement-tracking eye creates a statement about how we view art and is a response to the question “what if art could view us?”. The somewhat creepy display not only follows the person viewing it, but mimics blinks a second later and averts its gaze if eye contact is kept up for too long. Its creators [Golan Levin] and [Greg Baltus] have done a great job mimicking human behavior with such a simple element and the social implications of it are truly fascinating.
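The behaviors described above can be framed as a tiny state machine: blink about a second after the viewer does, and avert the gaze once eye contact has lasted too long. The following sketch is our own reading of that logic; the timing constants and class names are assumptions, not details from the installation:

```python
BLINK_DELAY = 1.0   # seconds after the viewer blinks (per the article)
GAZE_LIMIT = 5.0    # hypothetical "too long" eye-contact threshold

class OptoEye:
    """Minimal model of the eye's blink-mimic and gaze-aversion behavior."""

    def __init__(self):
        self.pending_blink_at = None
        self.contact_start = None

    def update(self, now, viewer_blinked, eye_contact):
        """Advance one frame; returns the actions the eye should perform."""
        actions = []
        if viewer_blinked:
            self.pending_blink_at = now + BLINK_DELAY
        if self.pending_blink_at is not None and now >= self.pending_blink_at:
            actions.append("blink")
            self.pending_blink_at = None
        if eye_contact:
            if self.contact_start is None:
                self.contact_start = now
            elif now - self.contact_start > GAZE_LIMIT:
                actions.append("avert_gaze")
                self.contact_start = None
        else:
            self.contact_start = None
        return actions

eye = OptoEye()
eye.update(0.0, viewer_blinked=True, eye_contact=True)          # schedules a blink
print(eye.update(1.0, viewer_blinked=False, eye_contact=True))  # ['blink']
print(eye.update(7.0, viewer_blinked=False, eye_contact=True))  # ['avert_gaze']
```

The real installation presumably drives this from a vision pipeline; here the blink and eye-contact flags stand in for whatever the tracker reports each frame.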
If they wanted to, [Levin] and [Baltus] could crank up the spook factor by adding facial recognition and programming it to remember how certain people interact with it, then tailor its behavior to wink at different rates or become more shy or bold, depending on the personality of the person watching it. Of course, that would require someone to go back to it more than once…