It looks like the world of Kinect hacks is about to get a bit more interesting.
While many of the Kinect-based projects we see use one or two units, this 3D telepresence system developed by UNC Chapel Hill student [Andrew Maimone] under the guidance of [Henry Fuchs] has them all beat.
The setup uses up to four Kinect sensors at a single endpoint, capturing images from various angles that are then run through GPU-accelerated filters. The video captured by the cameras is processed in a series of steps, filling holes and adjusting colors to create a mesh image. Once the video streams have been processed, they are overlaid with one another to form a complete 3D image.
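To get a feel for what that hole-filling stage is doing, here's a minimal sketch of the idea in Python/NumPy (not the project's actual GPU shader code, just a toy neighborhood fill with made-up parameters):

```python
import numpy as np

def fill_depth_holes(depth, passes=3):
    """Fill zero-valued (missing) pixels in a Kinect-style depth map
    with the median of their valid 3x3 neighbours.

    A toy stand-in for the project's GPU-accelerated hole-filling filter.
    """
    filled = depth.astype(np.float32).copy()
    for _ in range(passes):
        holes = np.argwhere(filled == 0)           # pixels with no depth reading
        for y, x in holes:
            window = filled[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            valid = window[window > 0]
            if valid.size:                         # only fill if neighbours have data
                filled[y, x] = np.median(valid)
    return filled

# Example: a fake 480x640 depth frame with random dropouts
depth = np.random.randint(500, 4000, (480, 640)).astype(np.float32)
depth[np.random.rand(480, 640) < 0.05] = 0         # simulate missing samples
print(np.count_nonzero(fill_depth_holes(depth) == 0))
```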
The result is an awesome real-time 3D rendering of the subject and surrounding room that reminds us of this papercraft costume. The 3D video can be viewed at a remote station which uses a Kinect sensor to track your eye movements, altering the video feed’s perspective accordingly. The telepresence system also offers the ability to add in non-existent objects, making it a great tool for remote technology demonstrations and the like.
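The head-tracked perspective trick is conceptually simple: each frame, move the virtual camera to wherever the viewer's head is and re-render the merged scene. Here's a rough sketch of that idea, assuming a hypothetical get_head_position() fed by the viewer-side Kinect:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed view matrix from a camera (eye) position."""
    f = target - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def update_view(head_position_m, scene_center=np.zeros(3)):
    """Place the virtual camera at the viewer's head position so the remote
    scene's perspective shifts as the viewer moves (the 'window' effect)."""
    return look_at(np.asarray(head_position_m, dtype=float), scene_center)

# Hypothetical per-frame loop:
# head = get_head_position()          # e.g. from viewer-side Kinect tracking
# view_matrix = update_view(head)
# renderer.set_view(view_matrix)      # hand off to whatever draws the merged mesh
```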
Check out the video below to see a thorough walkthrough of this 3D telepresence system.
[youtube=http://www.youtube.com/watch?v=OOy-Dnr3xyU&w=470]
Holodeck 0.1 program ready?
That’s cool!
As of right now the picture quality looks like crap, but damn that idea is fucking awesome! Keep up the good work
Wow awesome!! The effect reminds me of a real-life Portal opening..
They should consider coupling the depth sensors with some better cameras. Their efforts in improving the depth map look pretty good (compared to the other projects out there), but the texture quality is very disappointing with the Kinect webcam.
2:08 and the credits are the real money shots. The point-of-view shot with the cardboard cut-out head says a lot.
This reminds me of the living room with virtual walls made of LCD TVs, and human tracking. http://hackaday.com/2010/04/16/virtual-windows-that-track-a-viewers-position/
When someone does THAT with a Kinect and a big-screen TV (or projector), then I'll go out and buy a Kinect. I wanna feel like I'm in orbit around Saturn!!!
It really is time for a high resolution Kinect or something that works like that.
I think it will eventually come, but we'll have to wait somewhere in the range of 5 to 7 more years (unless a competitor puts some pressure on Microsoft).
@h3p0 That's a great idea. Kinect is good at depth mapping but is not a great camera. Marry the two together and you create a better third.
I’m impressed by how far they’ve come in a short time, so I’m relatively confident they’ll do more soon.
Drop the price of the Kinect by 50% and soon we can play Halo without a controller. Very cool, guys, keep it up.
Apparently you can use Kinect to play video games with too. Imagine that. I guess MS were a bit off the mark with their target audience.
While this is cool and all, the quality is low, the hardware cost is high, and the tech for it is patented. Stereoscopic cameras have significantly higher quality and are _designed_ to measure depth without having to compensate for missing parts of the image. In short, it's an interesting idea but not practical at all.
If it doesn't have any actuators, it hardly counts as telepresence. Although technically impressive, this is better termed "3D-rendered video chat", not telepresence.
I could see this having quite a bit of utility in AR applications, but you would expect AR applications to have much of the hardware that this project duplicates with software (head tracking with accelerometers on head-mounted stereo display hardware, for instance). To be fair, people using a system like this will not look like Geordi La Forge whenever they vid.
A lot of pain, minimal gain.
You've tripled your video bandwidth requirements and added computational overhead.
Probably easier just to ask the other party to turn their head.
If live TV broadcasting used that technology, that would be rockin' awesome.
How long before this is used for porn/webcam sites?
Tropica expressed my sentiments almost exactly: the picture quality is like PlayStation 1 graphics, but the working concept is fricking amazing!
Admirable achievement. Once it becomes mainstream, I’m sure it will be at least as popular as today’s video calling, which nobody seems to use, according to mobile operators.
How did they simultaneously use four Kinects? Is it a shutter-based system?
“stereoscopic cameras have significantly higher quality and are _designed_ to measure depth ”
That's not true at all. Stereoscopic cameras are just meant to reproduce a 3D image for a human. All you get is two 2D images from known viewpoints. You can extract depth by matching corresponding points and triangulating, but there are many cases where you have to completely guess. Surfaces the same colour as the background, for example.
The Kinect, or lidar/time-of-flight, or other "true" depth systems deal with this fine; two colour cameras would be useless, however, as there isn't enough information to work with (see the quick triangulation sketch below).
Also, the hardware cost of four Kinects is actually very small compared to any other hardware that could do the same thing right now (which would probably involve lasers). That's why you're seeing the Kinect used for so many projects: it's pretty cheap, relatively speaking.
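For the curious, the triangulation mentioned above boils down to Z = f * B / d (focal length times baseline over disparity), and it simply has no answer wherever no disparity can be measured. A toy sketch with assumed rig numbers (illustrative only):

```python
import numpy as np

# Assumed stereo rig parameters (illustrative, not from any real camera)
FOCAL_PX = 600.0     # focal length in pixels
BASELINE_M = 0.10    # distance between the two cameras in metres

def depth_from_disparity(disparity_px):
    """Triangulate depth from stereo disparity: Z = f * B / d.
    Pixels where matching failed (disparity <= 0) stay unknown (NaN),
    which is exactly the 'have to completely guess' case above."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.nan)
    valid = d > 0
    depth[valid] = FOCAL_PX * BASELINE_M / d[valid]
    return depth

print(depth_from_disparity([30.0, 12.0, 0.0]))  # ~2 m, 5 m, and unknown
```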
“How did they simultaneously use 4 Kinects, is it a shutter based system?”
Good question.
You can polarize to get 2 working at once, so maybe a combination?
Wait until the next Kinect sensor is released by Microsoft. It might be a software update, but I know down the road there will be a hardware revision 2 that'll perform better. Just takes time =)
Remember how much time passed from personal computers coming out until we got these advanced 3D graphics?
@Haku You've never seen PlayStation 1 graphics then.
And yes the concept is amazing.
This kind of reminds me of the 3D video sequences seen in Minority Report.