Imagine you’ve got a bunch of people sitting around a table with their various mobile display devices, and you want these devices to act together. Maybe you’d like them to be peepholes into a single larger display, revealing different sections of the display as you move them around the table. Or maybe you want to be able to drag and drop across these devices with finger gestures. HuddleLamp lets you do all this.
How does it work? Basically, a 3D camera sits above the tabletop and watches for your mobile displays and your hands. Through the magic of machine vision, a server then sends the right image region to each screen in the group. (The “lamp” in HuddleLamp is a table lamp positioned above the workspace with the 3D camera built into it.)
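To make the “peephole” idea concrete, here's a minimal sketch (not HuddleLamp's actual code, and all names here are hypothetical): once the overhead camera reports where a device sits on the table, the server just maps that rectangle onto the shared virtual canvas and sends the device the matching crop.

```python
# Hypothetical sketch of the peephole mapping: each tracked device's
# position on the table determines which region of one large shared
# canvas it should display.

from dataclasses import dataclass

@dataclass
class Device:
    # Position and size of the screen on the tabletop, in millimetres,
    # as reported by the overhead camera's screen-detection step.
    x_mm: float
    y_mm: float
    w_mm: float
    h_mm: float

def viewport(device: Device, table_w_mm: float, table_h_mm: float,
             canvas_w_px: int, canvas_h_px: int) -> tuple[int, int, int, int]:
    """Return the (left, top, width, height) crop of the shared canvas
    this device should show, so the table acts as one big display."""
    sx = canvas_w_px / table_w_mm  # canvas pixels per mm of table, horizontally
    sy = canvas_h_px / table_h_mm  # and vertically
    left = round(device.x_mm * sx)
    top = round(device.y_mm * sy)
    width = round(device.w_mm * sx)
    height = round(device.h_mm * sy)
    return left, top, width, height

# A phone sitting 100 mm in and 50 mm down on a 1000x600 mm table,
# with the shared canvas rendered at 2000x1200 px:
print(viewport(Device(100, 50, 150, 80), 1000, 600, 2000, 1200))
# → (200, 100, 300, 160)
```

As each device moves, the camera updates its rectangle and the server re-crops, which is what produces the peephole effect; dragging content between devices falls out of the same shared coordinate space.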
The video below the break demonstrates the possible interactions.
If you want to dig deeper into how it all works, download their paper (PDF) and give it a read. It details some of the design choices behind screen detection and how the depth data from the 3D camera is integrated with the normal image stream.