Amazing 3D Telepresence System


It looks like the world of Kinect hacks is about to get a bit more interesting.

While many of the Kinect-based projects we see use one or two units, this 3D telepresence system developed by UNC Chapel Hill student [Andrew Maimone] under the guidance of [Henry Fuchs] has them all beat.

The setup uses up to four Kinect sensors at a single endpoint, capturing the scene from several angles at once. Each stream is run through a series of GPU-accelerated filters that fill holes in the depth data and adjust colors, and the cleaned-up streams are then merged into a single textured 3D mesh of the scene.
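For anyone who wants to experiment, here's a minimal CPU sketch of the hole-filling idea: the Kinect reports zero where it has no depth reading, and small gaps can be patched with a median of the valid neighbours. The project itself runs GPU-accelerated filters; the function name and parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def fill_depth_holes(depth, invalid=0, size=5):
    """Patch small holes in a Kinect depth frame with the median of
    valid neighbours. Slow CPU stand-in for a GPU filter; names and
    parameters here are assumptions for illustration."""
    mask = depth == invalid                 # Kinect reports 0 where depth is unknown
    work = depth.astype(float)
    work[mask] = np.nan                     # hide invalid pixels from the median
    filled = ndimage.generic_filter(work, np.nanmedian, size=size)
    out = depth.astype(float)
    out[mask] = filled[mask]                # only overwrite the holes
    return out                              # holes with no valid neighbours stay NaN
```

A real pipeline would also have to reject noisy "flying pixels" at depth edges before meshing, which this sketch ignores.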

The result is an awesome real-time 3D rendering of the subject and the surrounding room that reminds us of this papercraft costume. The 3D video can be viewed at a remote station, which uses its own Kinect sensor to track the position of your eyes and shifts the video feed’s perspective accordingly. The telepresence system can also insert virtual objects into the scene, making it a great tool for remote technology demonstrations and the like.
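As a rough illustration of that head-coupled perspective trick, the tracked eye position can simply drive the virtual camera that renders the mesh. The sketch below builds a standard look-at view matrix from an assumed tracker output; the real system presumably uses a proper off-axis projection, and every name here is hypothetical.

```python
import numpy as np

def view_matrix_from_head(head_pos, screen_center, up=(0.0, 1.0, 0.0)):
    """Build a look-at view matrix from the viewer's tracked head
    position, so the rendered scene shifts with the viewer
    (fish-tank-VR style). Frames and names are assumptions."""
    eye = np.asarray(head_pos, dtype=float)      # eye position from the tracker
    target = np.asarray(screen_center, dtype=float)
    f = target - eye
    f /= np.linalg.norm(f)                       # forward axis
    s = np.cross(f, up); s /= np.linalg.norm(s)  # right axis
    u = np.cross(s, f)                           # corrected up axis
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f      # rotation rows
    m[:3, 3] = -m[:3, :3] @ eye                  # translation
    return m

# Example: viewer 0.6 m in front of the display, slightly to the left.
V = view_matrix_from_head(head_pos=(-0.1, 0.0, 0.6), screen_center=(0.0, 0.0, 0.0))
```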

Check out the video below to see a thorough walkthrough of this 3D telepresence system.

[youtube=http://www.youtube.com/watch?v=OOy-Dnr3xyU&w=470]

22 thoughts on “Amazing 3D Telepresence System”

  1. They should consider coupling the depth sensors with some better cameras. Their efforts at improving the depth map look pretty good (compared to the other projects out there), but the texture quality from the Kinect’s webcam is very disappointing.

  2. It really is time for a high-resolution Kinect, or something that works like it.

    I think it will eventually come, but we’ll have to wait somewhere in the range of five to seven more years (unless a competitor puts some pressure on Microsoft).

  3. @h3p0 That’s a great idea. The Kinect is good at depth mapping but is not a great camera. Marry the two together and create a better third.

    I’m impressed by how far they’ve come in a short time, so I’m relatively confident they’ll do more soon.

  4. While this is cool and all, the quality is low, the hardware cost is high, and the tech behind it is patented. Stereoscopic cameras have significantly higher quality and are _designed_ to measure depth, without having to compensate for missing parts of the image. In short, it’s an interesting idea but not practical at all.

  5. If it doesn’t have any actuators, it hardly counts as telepresence. Although technically impressive, this is better termed “3D-rendered video chat”, not telepresence.

    I could see this having quite a bit of utility in AR applications, but you would expect AR applications to have much of the hardware that this project duplicates in software (head tracking with accelerometers on head-mounted stereo display hardware, for instance). To be fair, people using a system like this will not look like Geordi La Forge whenever they vid.

  6. A lot of pain, minimal gain.
    You’ve tripled your video bandwidth requirements and added computational overhead.
    It would probably be easier just to ask the other party to turn their head.

  7. Admirable achievement. Once it becomes mainstream, I’m sure it will be at least as popular as today’s video calling, which nobody seems to use, according to mobile operators.

  8. “stereoscopic cameras have significantly higher quality and are _designed_ to measure depth ”

    That’s not true at all. Stereoscopic cameras are just meant to reproduce a 3D image for a human; all you get is two 2D images from known points. You can extract depth by finding matching points and triangulating, but there are many cases where you have to guess completely. Surfaces the same colour as the background, for example. (See the depth-from-disparity sketch after this thread.)

    The Kinect, LIDAR/time-of-flight, and other “true” depth systems handle this fine; two colour cameras would be useless there, as there isn’t enough information to work with.

    Also, the hardware cost of four Kinects is actually very small compared to any other hardware that could do the same thing right now (which would probably involve lasers). That’s why you’re seeing the Kinect used in so many projects: it’s pretty cheap, relatively speaking.

    “How did they simultaneously use 4 Kinects, is it a shutter based system?”

    Good question.
    You can polarize to get 2 working at once, so maybe a combination?

  9. Wait until the next Kinect sensor is released by Microsoft. It might just be a software update, but I know that down the road there will be a hardware revision 2 that’ll perform better. It just takes time =)

    Remember how much time passed between personal computers coming out and the advanced 3D graphics we have today?
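To put numbers on the stereo-triangulation point made in comment 8, here is the textbook depth-from-disparity relation, Z = f·B/d, with the textureless-surface failure mode showing up as unmatched (zero) disparity. The focal length and baseline in the example are assumed, Kinect-ish values, not measurements from this project.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo triangulation: Z = f * B / d. Returns NaN where
    disparity is zero/unmatched, which is exactly the failure mode
    described above: a textureless surface gives no reliable match,
    so there is no disparity to triangulate."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        z = focal_px * baseline_m / d
    z[d <= 0] = np.nan                  # no match -> unknown depth
    return z

# Example: 585 px focal length (typical Kinect-class optics, assumed),
# 7.5 cm baseline, 20 px disparity -> roughly 2.2 m.
print(depth_from_disparity(np.array([20.0]), 585.0, 0.075))
```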
