Turning anything into a touch sensor

This year at the CHI conference in Austin, [Munehiko Sato], [Ivan Poupyrev], and [Chris Harrison] out of the Disney research lab in Pittsburgh demonstrated a way to make a touch sensor out of just about anything. Not only do they suggest using the surface of your skin to control cell phones and MP3 players, they’re also able to recognize touch gestures, like poking or grasping an object. That sounds a little heady, so check out the video of the Touché tech in action.

Like the capacitive touch sensors in our phones and tablets, Touché measures the rise and fall of a capacitor’s charge over time. Unlike other touch sensors, though, Touché scans the capacitor at a range of frequencies, building a ‘capacitive profile’ that is used to recognize touch gestures.
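
To get a feel for how a swept-frequency measurement works, here’s a minimal Arduino-style sketch of the general idea. It is not Touché’s implementation; the wiring, frequency list, and crude peak detector are all our own assumptions.

```cpp
// Hypothetical wiring: DRIVE_PIN excites the electrode through a resistor,
// and SENSE_PIN reads the return signal from the touched object.
const int DRIVE_PIN = 9;
const int SENSE_PIN = A0;

// Frequencies to sweep, in Hz. Touché sweeps a much finer, wider range.
const unsigned int FREQS[] = {1000, 5000, 10000, 25000, 50000};
const int NUM_FREQS = sizeof(FREQS) / sizeof(FREQS[0]);

int profile[NUM_FREQS];  // one 'capacitive profile' per scan

void setup() {
  Serial.begin(115200);
}

void loop() {
  for (int i = 0; i < NUM_FREQS; i++) {
    tone(DRIVE_PIN, FREQS[i]);   // excite the electrode at this frequency
    delay(5);                    // let the signal settle
    int peak = 0;                // crude peak detector on the return signal
    for (int s = 0; s < 200; s++) {
      int v = analogRead(SENSE_PIN);
      if (v > peak) peak = v;
    }
    noTone(DRIVE_PIN);
    profile[i] = peak;
  }
  // Touché feeds the resulting profile to a trained classifier to name the
  // gesture; here we just print it so you can watch it change as you touch.
  for (int i = 0; i < NUM_FREQS; i++) {
    Serial.print(profile[i]);
    Serial.print(i < NUM_FREQS - 1 ? ',' : '\n');
  }
}
```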

The applications for this tech are nearly innumerable; the team demonstrated scolding someone for eating cereal with chopsticks (yeah, we know…), an on-body music player interface, and gestures for an office doorknob that notifies passersby if you’ve stepped out for a minute or are gone for the day.

It’s a very interesting project, and we give it two weeks until someone replicates it. We’ll be sure to post it then.

Continue reading “Turning anything into a touch sensor”

Reaching out to a touch screen with a microcontroller

We love capacitive touch screens. They’re much more robust than resistive touch screens and, if the UI is programmed well, they produce a great user experience. But getting your electronics project to interact with one is a bit tough. [RobB] has been experimenting in that area, and managed to build a simple touchscreen actuator for microcontroller use.

In the video after the break you can see his proof of concept. He’s using an Arduino to enter the number 2 on a calculator app once every second. It doesn’t take much to pull off this trick: [RobB] just taped a piece of tin foil to the screen and connected it to the Arduino with a jumper wire. The pin is left floating until a screen tap is needed, at which point it is pulled to ground.

A custom app operating at slow speeds could use this as an input technique. Two pieces of foil (one acting as clock, the other data) would provide a rudimentary serial transfer system.
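
Here’s a minimal Arduino sketch of the trick, with the pin number and timing as assumptions. Note that a capacitive screen may need a reasonably large foil pad before it registers the fake touch.

```cpp
// Hypothetical wiring: one jumper from Arduino pin 2 to a foil pad taped
// over the on-screen button you want to press.
const int FOIL_PIN = 2;

void setup() {
  pinMode(FOIL_PIN, INPUT);  // high impedance: the foil floats, no touch seen
}

// Grounding the foil gives the screen's drive signal a path to ground,
// which the touch controller reads as a fingertip.
void tap(unsigned long ms) {
  pinMode(FOIL_PIN, OUTPUT);
  digitalWrite(FOIL_PIN, LOW);  // pull the foil to ground -> 'touch'
  delay(ms);
  pinMode(FOIL_PIN, INPUT);     // float it again -> 'release'
}

void loop() {
  tap(100);    // press the key under the foil
  delay(900);  // once per second, as in the demo
  // A second foil pad on its own pin could act as the clock line for the
  // two-foil serial scheme described above.
}
```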

Continue reading “Reaching out to a touch screen with a microcontroller”

Multitouch table uses a Kinect for a 3D display

[Bastian] sent in a coffee table he built. This isn’t a place to set your drinks and copies of Make, though: it’s a multitouch table with a 3D display. Since no description can do this table justice, take a look at the video.

The build was inspired by the subject of this Hackaday post where [programming4fun] was able to build a ‘holographic display’ using a regular 2D projector and a Kinect. Both builds work on the principle of redrawing the 3D space in relation to the user’s head – as [Bastian] moves his head around the coffee table, the Kinect tracks his location and moves the three-dimensional grid of boxes in the opposite direction. It’s extremely clever, and looks to be a promising user interface.
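
The underlying math is a standard off-axis (asymmetric-frustum) projection: the virtual camera sits at the viewer’s eye, and the frustum is skewed so its near plane stays locked to the physical display. We don’t have [Bastian]’s source, but a sketch of the calculation looks like this (names and units are ours):

```cpp
#include <cstdio>

// Compute an off-axis viewing frustum from the viewer's head position, with
// the display centered at the origin in the z=0 plane. halfW/halfH are the
// display's half-width and half-height, in the same units as the head
// coordinates. Output matches glFrustum(l, r, b, t, nearZ, farZ).
void offAxisFrustum(float headX, float headY, float headZ,
                    float halfW, float halfH, float nearZ,
                    float &l, float &r, float &b, float &t) {
  float s = nearZ / headZ;  // project the screen edges onto the near plane
  l = (-halfW - headX) * s;
  r = ( halfW - headX) * s;
  b = (-halfH - headY) * s;
  t = ( halfH - headY) * s;
}

int main() {
  float l, r, b, t;
  // Viewer 0.8 m above a 0.6 m x 0.4 m tabletop, leaning 0.2 m to the right:
  offAxisFrustum(0.2f, 0.0f, 0.8f, 0.3f, 0.2f, 0.1f, l, r, b, t);
  std::printf("glFrustum(%.3f, %.3f, %.3f, %.3f, 0.1, ...)\n", l, r, b, t);
}
```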

In addition to the Kinect, the coffee table uses a Microsoft Surface-like display; four infrared lasers are placed at the corners and detected with a camera next to the projector in the base.

After the break you can see the demo video and a gallery of the images [Bastian] put up on the NUI group forum.

Continue reading “Multitouch table uses a Kinect for a 3D display”

Glove-based touch screen from a CRT monitor

Here’s a bulky old CRT monitor used as a touch-screen without any alterations. It doesn’t use an overlay, but instead detects position using phototransistors in the fingertips of a glove.

Most LCD-based touch screens use some type of overlay, like these resistive sensors. But cathode-ray-tube monitors function in a fundamentally different way from LCD screens, using an electron gun and deflection coils to sweep a beam across the screen. The inside of the screen is coated with phosphors which glow when excited by the electrons. This project harnesses that property, using a phototransistor in both the pointer and middle fingers of the glove. An FPGA drives the monitor and reads from the sensors. It can extrapolate the position of the phototransistors on the display from the timing of the passing electron beam, and use that as cursor data.
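
We haven’t seen the FPGA internals, but the position recovery boils down to timekeeping: since the FPGA generates the video timing itself, the moment the phototransistor fires tells it exactly which pixel the beam was painting. Here’s a sketch of that conversion in C++, assuming standard 640x480@60Hz timing; a real design would do this in hardware with counters.

```cpp
#include <cstdint>
#include <cstdio>

// Assumed video mode: standard 640x480@60Hz timing. The real numbers depend
// on whatever mode the FPGA is generating.
const uint32_t CLOCKS_PER_LINE = 800;  // pixel clocks per line, blanking included
const uint32_t X_VISIBLE_START = 144;  // h-sync + back porch, in pixel clocks
const uint32_t Y_VISIBLE_START = 35;   // v-sync + back porch, in lines

// Convert a pixel-clock count latched at the phototransistor's pulse
// (counted from the last vertical sync) into screen coordinates.
void beamPosition(uint32_t clocksSinceVsync, int &x, int &y) {
  y = clocksSinceVsync / CLOCKS_PER_LINE - Y_VISIBLE_START;
  x = clocksSinceVsync % CLOCKS_PER_LINE - X_VISIBLE_START;
}

int main() {
  int x, y;
  beamPosition(200UL * CLOCKS_PER_LINE + 400, x, y);  // a made-up latch value
  std::printf("cursor at (%d, %d)\n", x, y);
}
```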

Check out the video after the break to see this in action. It’s fairly accurate, but we’re sure the system can be tightened up a bit from this first prototype. The developers also mention that the system has a bit of trouble with darker shades.

Continue reading “Glove-based touch screen from a CRT monitor”

Single hand keyboard for tablets

To us it makes a lot of sense to hold the tablet in one hand and type with the other. That’s exactly how [Adam Kumpf] has implemented this one-handed typing interface, which was originally conceived by [Doug Engelbart].

As you can see, there’s a large contextual area for each finger of your right hand. Letters and navigational keystrokes are input through this interface using single touches, or chords of up to all five digits. This offers 32 possible combinations (including all-on and all-off), which is enough to cover the modern English alphabet.
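
In other words, a chord is just a 5-bit number, and decoding it is a table lookup. The mapping below is purely illustrative (it is neither [Engelbart]’s nor [Adam]’s actual layout):

```cpp
#include <cstdint>
#include <cstdio>

// Each finger is one bit (thumb = bit 0 ... pinky = bit 4), so a chord is a
// 5-bit number and decoding is a table lookup.
const char CHORD_TO_CHAR[32] = {
    0,   'a', 'b', 'c', 'd', 'e', 'f', 'g',   // 0 = all fingers up
    'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o',
    'p', 'q', 'r', 's', 't', 'u', 'v', 'w',
    'x', 'y', 'z', ' ', '.', ',', '\b', '\n', // 31 = all five fingers down
};

char decodeChord(uint8_t fingersDown) {
  return CHORD_TO_CHAR[fingersDown & 0x1F];
}

int main() {
  // Index and middle fingers down: bits 1 and 2, i.e. 0b00110 = 6 -> 'f'
  std::printf("chord 0b00110 -> '%c'\n", decodeChord(0b00110));
}
```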

[Adam’s] demo page works for most tablets, so give it a whirl. Yes, it works with iDevices too, which is a surprise, as we would have thought this was using Flash. If you’re not near a touch-sensitive device you can get the gist of the operation from the demo video embedded after the break.

Now, who’s going to be the first to make this into a replacement keyboard on iOS 5?

Continue reading “Single hand keyboard for tablets”

Cloud Mirror adds Internet to your morning ritual

This mirror has a large monitor behind it which can be operated using hand gestures. It’s the result of a team effort from [Daniel Burnham], [Anuj Patel], and [Sam Bell] to build a web-enabled mirror for their ECE 4180 class at the Georgia Institute of Technology.

So far they’ve implemented four widgets for the system. You can see the icons that activate each one in the column to the right of the mirror. From top to bottom they are Calendar, News, Traffic, and Weather. The video after the break shows the gestures used to control the display. First, select a widget by holding your hand over the appropriate icon. Next, bring that widget to the main display area by swiping from right to left along the top of the mirror.

Hardware details are shared more freely in their presentation slides (PDF). A sonar distance sensor activates the device when a user is close enough to the screen. Seven IR reflectance sensors detect a hand placed in front of them. We like this input method, as it keeps the ‘display’ area fingerprint-free. But we wonder: could the IR sensors be placed behind the glass instead of beside it?
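
The glue logic is simple enough to sketch. Assuming an analog-output sonar and seven analog IR reflectance sensors on an Arduino-style board with enough analog inputs (the team’s actual hardware differs), the gating-and-selection loop might look like this; the pins and thresholds are our guesses, not values from the slides.

```cpp
// Illustrative Arduino-style sketch; pins, thresholds, and even the exact
// sensor types are assumptions. A board with eight analog inputs (e.g. a
// Mega) is assumed.
const int NUM_ICONS = 7;
const int IR_PINS[NUM_ICONS] = {A0, A1, A2, A3, A4, A5, A6};
const int IR_THRESHOLD = 600;   // reflectance reading that means 'hand present'
const int SONAR_PIN = A7;       // analog-output sonar, e.g. a MaxBotix module
const int WAKE_DISTANCE = 300;  // lower reading = closer to the mirror

void setup() {
  Serial.begin(9600);
}

void loop() {
  // The sonar gates the whole interface: ignore the IR sensors until
  // someone is actually standing at the mirror.
  if (analogRead(SONAR_PIN) < WAKE_DISTANCE) {
    for (int i = 0; i < NUM_ICONS; i++) {
      if (analogRead(IR_PINS[i]) > IR_THRESHOLD) {
        Serial.print("SELECT ");  // tell the display which widget was chosen
        Serial.println(i);
        delay(250);               // crude debounce
      }
    }
  }
}
```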

Continue reading “Cloud Mirror adds Internet to your morning ritual”

Multitouch tower defense uses physical towers

If you’re tired of playing Flash games with a mouse, perhaps you’ll draw inspiration from this project. [Arthur] built a multitouch interface that uses physical objects as part of the control scheme. In the image above you can see that the game board for a tower defense game is shown on the display. There is a frustum-shaped game piece resting on the surface. Just place that piece where you want to build your next tower, and then select the tower type from the list.

The controller itself is pretty straightforward. The surface is a piece of acrylic topped with some light-diffusing material. A projector shines through another acrylic window on the side of the unit, reflecting off a mirror positioned at a 45-degree angle. As for the multitouch detection, the hardware uses a series of infrared LEDs along with a modified PS3 Eye camera. [Arthur] chose the reacTIVision software package to process the input from the camera. Check out a couple of videos after the break to see the hardware and some gameplay.
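
reacTIVision tracks fingers as well as the fiducial markers printed on the bottom of each game piece, and reports everything over the TUIO protocol. A minimal client using the reference TUIO 1.1 C++ library might look like the sketch below; it is our illustration, not [Arthur]’s code.

```cpp
#include "TuioClient.h"
#include "TuioListener.h"
#include <cstdio>

using namespace TUIO;

class TowerListener : public TuioListener {
public:
  // Called when a fiducial-tagged object (the tower game piece) is set down.
  void addTuioObject(TuioObject *tobj) {
    std::printf("piece %d placed at (%.2f, %.2f)\n",
                tobj->getSymbolID(), tobj->getX(), tobj->getY());
  }
  void updateTuioObject(TuioObject *tobj) {}  // piece moved or rotated
  void removeTuioObject(TuioObject *tobj) {}  // piece lifted off the table
  void addTuioCursor(TuioCursor *tcur) {}     // finger touches for the menus
  void updateTuioCursor(TuioCursor *tcur) {}
  void removeTuioCursor(TuioCursor *tcur) {}
  void addTuioBlob(TuioBlob *tblb) {}
  void updateTuioBlob(TuioBlob *tblb) {}
  void removeTuioBlob(TuioBlob *tblb) {}
  void refresh(TuioTime frameTime) {}
};

int main() {
  TowerListener listener;
  TuioClient client(3333);       // reacTIVision's default TUIO/UDP port
  client.addTuioListener(&listener);
  client.connect(true);          // block and process incoming messages
  return 0;
}
```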

Continue reading “Multitouch tower defense uses physical towers”