Human-Machine Interface Projects at TEI 2016

For many of us, interacting with computers is about as glamorous as punching keys and smearing touch screens with sweaty fingers and really bad posture. That arrangement works, but it's worth imagining a world where our conversation with technology is far more intuitive, ergonomic, and engaging. Enter TEI, an annual conference devoted to human-computer interaction and a landmark for novel projects that reinvent the conventional ways we engage with our computers. TEI isn't just another sit-down conference where you soak in a wealth of paper talks. It's an interactive weekend that combines those talks with a host of workshops run by the speakers themselves.

Last year’s TEI brought us projects like SPATA, digital calipers that sped up our CAD modeling by eliminating the need for a third hand, and TorqueScreen, a force-feedback mechanism for tablets and other handhelds.

Next February's conference will be no exception, promising more new ways to interact with technology. To get a sense of what's to come, here's a quick peek back at a few of last year's projects:

HuddleLamp Turns Multiple Tablets into a Single Desktop

Imagine you’ve got a bunch of people sitting around a table with their various mobile display devices, and you want these devices to act together. Maybe you’d like them to be peepholes into a single larger display, revealing different sections of the display as you move them around the table. Or maybe you want to be able to drag and drop across these devices with finger gestures. HuddleLamp lets you do all this.

How does it work? Basically, a 3D camera sits above the tabletop and watches for your mobile displays and your hands. Through the magic of machine vision, the system tracks each device's position and orientation, and a server sends the right slice of the shared display to each screen in the group. (The "lamp" in HuddleLamp is a table lamp positioned above the space with a 3D camera built into it.)
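
To make the peephole idea concrete, here's a rough sketch of the core mapping: a tracked device's pose on the table becomes a crop of one large virtual canvas. This is not HuddleLamp's actual code; the function and field names are all made up for illustration.

```javascript
// Hypothetical sketch: map a tracked device's tabletop pose to the
// region of the shared virtual desktop it should display.
// `device` stands in for whatever the camera's tracking pipeline reports.
function viewportFor(device, pixelsPerMm) {
  return {
    x: device.tableX * pixelsPerMm,  // tabletop position -> canvas offset
    y: device.tableY * pixelsPerMm,
    width: device.screenWidthPx,     // crop matches the device's own screen
    height: device.screenHeightPx,
    rotation: device.tableRotation,  // keep the image upright as the device turns
  };
}

// Example: a tablet 120 mm right and 80 mm down from the table's origin
// gets the matching window into the shared desktop.
const crop = viewportFor(
  { tableX: 120, tableY: 80, screenWidthPx: 1280, screenHeightPx: 800, tableRotation: 0 },
  4 // assumed canvas resolution of 4 px per mm
);
console.log(crop);
```

Slide the tablet across the table and its crop slides across the larger desktop, which is exactly the peephole effect described above.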

A really nice touch is that the authors also provide JavaScript objects that you can embed into web apps to enable devices to join the group without downloading special software. A new device will flash an identifying pattern that the computer vision routine will recognize. Once that’s done, the server starts sending the correct parts of the overall display to the new device.
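
The join handshake might look something like the following. To be clear, this is a guess at the shape of the idea, not the actual HuddleLamp JavaScript API; the server address and message format are placeholders.

```javascript
// Hypothetical sketch of the join handshake: the page flashes an
// identifying pattern until the overhead camera spots it, then the
// server streams this device's slice of the shared display.
const socket = new WebSocket('wss://example.local/huddle'); // placeholder address

socket.onopen = () => {
  // Flash a pattern the computer vision routine can recognize.
  let on = false;
  const flasher = setInterval(() => {
    document.body.style.background = on ? '#fff' : '#000';
    on = !on;
  }, 100);

  socket.onmessage = (event) => {
    clearInterval(flasher);               // identified: stop flashing
    const view = JSON.parse(event.data);  // assumed shape: { x, y, width, height }
    render(view);
  };
};

function render(view) {
  // Stand-in for real drawing: report the region this device now owns.
  document.body.textContent =
    `Showing ${view.width}x${view.height} region at (${view.x}, ${view.y})`;
}
```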

The video, below the break, demonstrates the possible interactions.

Pen-Based Input Improvements

Lately we’ve been focusing on multitouch technologies, but that doesn’t mean there isn’t interesting research going on in other areas of human-computer interaction. [Johnny Lee] posted a roundup of some of the work that [Gonzalo Ramos] and others have done with pen-based input. The video embedded above shows how pressure can be used to increase control precision. Have a look at his post to see how pen gestures can be used for seamless workspace sharing and how pen rolling can provide additional control.
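
Pressure-to-precision mapping is easy to experiment with in a browser today, since pointer events expose a pressure reading for pens. Here's a minimal sketch; the gain curve is our own arbitrary choice, not the mapping from [Ramos]'s research.

```javascript
// Minimal sketch: use pen pressure to trade speed for precision.
// PointerEvent.pressure runs 0..1 on supporting pens.
let x = 0, y = 0; // virtual cursor position

window.addEventListener('pointermove', (e) => {
  if (e.pointerType !== 'pen') return;
  // Press harder -> smaller gain -> finer control (arbitrary curve).
  const gain = 1.0 - 0.8 * e.pressure;
  x += e.movementX * gain;
  y += e.movementY * gain;
  console.log(`cursor at (${x.toFixed(1)}, ${y.toFixed(1)})`);
});
```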