MIT is debuting its latest advancement: a multitouch screen that also functions as a gestural interface. The multitouch aspect is nothing new; the team explains that traditional interfaces built on LEDs or camera systems work fine, but fail to recognize gestures made off-screen.
Gesture recognition is a relatively recent highlight, with the introduction of projects like Project Natal and perspective tracking, but those systems fail at close range to the screen. MIT has done what seemed impossible by combining and modifying the two to produce the first ever multitouch, close-proximity gestural display.
And to think, just a couple of months ago the same school was playing with pop-up books.
[via Engadget]
That looks pretty cool!
I wouldn’t be able to use it until they eliminated the lag, but cool nonetheless.
Gotta love the models they used.
It looks like they’re planning to take this to SIGGRAPH (look at the textures on the wall in the “walk-through” demo).
@RoboGuy
It seems like the lag would be fixed by actually manufacturing the screen the way they suggest in the video, rather than using their ‘demo’ screen. The reason I say this is that the current lag comes from the video capture (using those cameras) and the processing of the captured images. If you were to use all built-in technology with optimized code, it seems like you could make it pretty lag-free.
We have seen similar technology for a while, http://hackaday.com/2006/02/21/low-cost-sensing-and-communication-with-an-led/2/
(notice it measures placement and distance, the same idea would be used on the screen, using different sensors)
I agree on the models being awesome, and once again I have to hand it to MIT.
It’s a cool design but looks kind of huge with the distance required. I could see this being very cool if it was done in huge scope like a wall-sized image. Then you could put people in the scene entirely.
That voiceover sounds like Toby from The Office.
I love this project. It’s a great application of coded aperture imaging, basically demonstrating that in many situations where a lensless (pinhole) design is appropriate, it is possible to use a coded aperture to get more light… and depth information as a byproduct.
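To illustrate the idea, here is a minimal numpy sketch of coded-aperture imaging using a random binary mask. (The actual BiDi prototype uses a specific tiled mask and recovers depth as well; this toy example only shows the "more light, still decodable" part.)

import numpy as np

rng = np.random.default_rng(0)

# "Scene": a dark field with one bright rectangular patch.
scene = np.zeros((64, 64))
scene[20:30, 25:40] = 1.0

# Coded aperture: a ~50%-open random binary mask passes far more light
# than a single pinhole, at the cost of many overlapping shadow copies.
mask = (rng.random((64, 64)) < 0.5).astype(float)

# The sensor records the scene convolved with the mask pattern
# (circular convolution via FFT, purely for simplicity).
F = np.fft.fft2
sensor = np.real(np.fft.ifft2(F(scene) * F(mask)))

# Decoding: correlate with the same mask. A random mask's autocorrelation
# is roughly a spike plus a flat pedestal, so subtracting the mean leaves
# an (approximate, rescaled) copy of the scene.
decoded = np.real(np.fft.ifft2(F(sensor) * np.conj(F(mask))))
decoded -= decoded.mean()

# The brightest decoded pixel should land inside the original patch.
print(np.unravel_index(decoded.argmax(), decoded.shape))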
Very clever.
That is so cool.
Other possible uses include 1984-style creeper televisions. lol.
Combine this with the ultrasonic pressure system that lets you feel objects in 3D, plus holographic tech. That way we can punch TV and movie characters in the face right from the comfort of our own homes!
Also, BiDi makes me think of that South Park episode with Bono and the biggest crap.
This is very cool. I wouldn’t be surprised if high-end screens started implementing these features in less than 5 years, or if, in 10, they eliminated the need for a mouse or keyboard.
It kinda reminds me of the screen in Minority Report.
It will never become popular, for the same reason as touch screens: the arm pain is unbearable after 30 minutes of that kind of input.
Cool stuff, this sorta reminds me of a patent Apple filed for a long time ago:
http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220060007222%22.PGNR.&OS=DN/20060007222&RS=DN/20060007222
@Poncho-
Funny… that was my thought too.
The day televisions come standard with built-in screen-cameras is the day all of my TVs go to the curb. (Particularly if they are fed with an inherently bi-directional conduit, like cable.)
Jesus, therian.
I just read some old posts, going back a few years.
Try being positive. Just once.
Amazing. A *screen* that *sees*. Wow (not being sarcastic). There are many other applications as well, like videoconferencing where it doesn’t seem like you’re looking at someone else (assuming an improvement in picture quality). And that joke about someone sticking a sheet of paper to his monitor to “photocopy” it might become a reality :D
I immediately thought of 1984 as well, but it’s a computer monitor application, not a TV application. TV is not an interactive medium, so there’s no point in implementing this there. Also, laptops have all had embedded webcams for a while, and that seems not to have been abused.
In Soviet Russia, television watches you!
I guess no more waving my arms around, gesturing wildly, while cursing out my computer programs.
This is very impressive. It sounds simple, but there are a lot of hurdles solved by this project. For one, they’ve done a really good job of getting those video cameras working well: after the diffuser there must be very little light hitting the camera, and with the coded aperture, linearity in the camera’s pixels is required, something very few cameras have and which is hard to approximate since sensitivity and noise vary across the scale. Also, with a fine pinhole array like that, the camera must be very high-res, making data processing hard. It wouldn’t surprise me if they were using an FPGA even for that demo.
If this tech becomes sought-after, it will push manufacturers to come up with ways to make large CCDs or similar tech (i.e. the full size of the screen) at a still-high resolution (the higher the res, the more cool features can be implemented, like 3D scanning or webcams from any viewpoint). The amount of data this would generate could be huge: at about 500 MB per frame uncompressed and 30 fps, that’s roughly 15 gigabytes per second. Obviously there is a lot of redundancy in this data, and much of it can be eliminated without reducing the device’s usefulness. Nevertheless, it still looks like displays would need special-purpose hardware to process this data and return a summary to the PC/host device (rather like an optical mouse summarises its CCD data as X/Y movement data).
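A quick back-of-the-envelope check of that rate in Python; the sensor dimensions and bytes-per-pixel are made-up round numbers chosen to land near a 500 MB frame, not values from the paper:

width, height = 12800, 10240   # hypothetical screen-sized sensor, ~131 MP
bytes_per_pixel = 4            # e.g. raw samples stored in 32-bit words
fps = 30

frame_bytes = width * height * bytes_per_pixel
print(frame_bytes / 1e6, "MB per frame")         # ~524 MB
print(frame_bytes * fps / 1e9, "GB per second")  # ~15.7 GB/s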
One very nice benefit of this tech is that the sensors required don’t need to be flawless – using coded apertures, a large number of the pixels could be dead and the device could still obtain good data after a bit of post-processing. This makes the large, high-res sensor arrays required for this tech far more plausible.
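Continuing the same toy setup as the sketch above, you can check the dead-pixel claim directly: zero out a random 20% of the sensor pixels before decoding and the reconstruction peak still lands in the right place. Again, this is just a hedged illustration, not the paper’s method:

import numpy as np

rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20:30, 25:40] = 1.0
mask = (rng.random((64, 64)) < 0.5).astype(float)

F = np.fft.fft2
sensor = np.real(np.fft.ifft2(F(scene) * F(mask)))

# Simulate a cheap, flawed sensor array: kill 20% of the pixels.
sensor[rng.random(sensor.shape) < 0.2] = 0.0

decoded = np.real(np.fft.ifft2(F(sensor) * np.conj(F(mask))))
decoded -= decoded.mean()
print(np.unravel_index(decoded.argmax(), decoded.shape))  # still inside the patch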
The output of this could be made “colour” by setting each aperture to a different colour. Lots of tough post-processing would be needed to fit the images together, though, since the perspectives would be different. Alternatively, the sensors could be made R/G/B, but that decreases resolution.
Maybe it would be better if they used infrared cameras in the LCD;
then they wouldn’t have to put so much light on the hands, just some infrared light.