[youtube=http://www.youtube.com/watch?v=sBgg33J695I]
It looks like Toshiba has a webcam-based multi-touch display on the way. The video shows an iPod-esque photo album interface where the user stands in front of the display and manipulates it with both hands. The difference between this and some of the other multi-touch displays we’ve seen is that no touching is necessary (goodbye, fingerprints!). The user’s image is superimposed on the screen in a way that reminds us of the original PlayStation Eye. Obviously this is much more refined, which makes us wonder whether it’s a better camera, better processing, or both.
[Thanks Risingsun]
Anyone think about how stupid it sounds saying ‘multitouch’ as a buzzword when it’s obvious this is not a touch input at all?
Cool tech & a cool demo, but I’m not sure it’s something I want. If I’m standing in front of a display, I’d rather have a mouse or stylus.
Really, I wish my laptop had a trackball like the old PowerBooks, but that probably won’t ever happen…
That just seems awkward… personally I would need something tactile.
Doesn’t really look that accurate to be honest
Just hack a PlayStation Eye or a normal webcam if you want this; it’s just a bit of code. Looks pretty crap.
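For anyone curious what “a bit of code” might look like: below is a minimal frame-differencing sketch using OpenCV and an ordinary webcam (a PS Eye shows up the same way). It only finds moving blobs, so it is nowhere near what Toshiba is demoing; the camera index, blur size, and thresholds are guesses you would need to tune.

```python
# Rough frame-differencing sketch with OpenCV (cv2) and any standard webcam.
# Purely illustrative; this is not Toshiba's method, just basic motion detection.
import cv2

cap = cv2.VideoCapture(0)          # default webcam; index 0 is an assumption
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)     # mirror so the image behaves like a reflection
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    if prev_gray is not None:
        # Pixels that changed between frames = moving hands (or anything else).
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
        for c in contours:
            if cv2.contourArea(c) > 1500:      # ignore small noise
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    prev_gray = gray
    cv2.imshow("touchless", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```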
@dc2
+1
@Brad
yeah… multi-gesture… multi-point… etc
no touchy.
Would feel right at home with those guys who signal aircraft while they taxi to the gate.
“Commercial webcam multi-touch coming soon
filed under: multitouch hacks”
–>
“Commercial webcam gesture input makes tasks that were previously achieved quite easily less responsive and more awkward
filed under: site filler”
Have you thought that this might be made by the same people who are making the technology behind Project Natal for the Xbox 360?
talk about SLOW and laggy
I just got an HTC HD2 and it is the first time I had multitouch.
It’s amazing, I love it.
I also keep telling my girlfriend that I am going to multitouch her. She finds this decidedly less funny than I do.
That’s the same thing PlayStation did with the EyeToy years ago, just with new functions.
Definitely too slow and laggy, and it seems inaccurate too. It could be much more impressively accurate if it worked with colour reference dots, maybe, rather than needing to pick between hands and faces etc. Or maybe if you set up a lens with an incredibly short depth of field so everything was a blur except the 6″ plane in which the hands should move. But that’s a tad restrictive. Thinking out loud fail.
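The colour-dot idea is easy to sketch: threshold the webcam image in HSV and keep the two biggest blobs of the marker colour, one per hand. The green range below is an assumption and would need tuning for your markers and lighting; again, this is only an illustration of the idea, not anything Toshiba is doing.

```python
# Sketch of colour-reference-dot tracking with OpenCV: find up to two green
# markers per frame by HSV thresholding and circle them on screen.
import cv2
import numpy as np

LOWER = np.array([40, 80, 80])     # rough HSV bounds for a green marker (assumed)
UPPER = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.erode(mask, None, iterations=2)   # clean up speckle
    mask = cv2.dilate(mask, None, iterations=2)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep the two biggest blobs: one per hand if each wears a green dot.
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:2]:
        if cv2.contourArea(c) > 300:
            (x, y), r = cv2.minEnclosingCircle(c)
            cv2.circle(frame, (int(x), int(y)), int(r), (0, 0, 255), 2)

    cv2.imshow("colour dots", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```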
@Pete I like that, I might see if the gf finds it as funny as I did tonight ;)
Totally agree with the people pointing out that there’s actually no touching in computer-vision-tracked gestures; I’ve been playing a little with this kind of stuff and normally I refer to it as “touchless” interfaces: you can see some of my sketching here. It’s an interesting interaction model, especially if you don’t limit yourself to mimicking mouse/touch gestures.