It looks like the world of Kinect hacks is about to get a bit more interesting.
While many of the Kinect-based projects we see use one or two units, this 3D telepresence system developed by UNC Chapel Hill student [Andrew Maimone] under the guidance of [Henry Fuchs] has them all beat.
The setup uses up to four Kinect sensors at a single endpoint, capturing images from various angles before they are processed using GPU-accelerated filters. The video captured by the cameras is run through a series of steps that fill holes and adjust colors, producing a textured mesh from each sensor. Once the individual streams have been processed, they are merged with one another to form a complete 3D image.
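If you're curious what that pipeline looks like in code, here's a minimal toy sketch of the two ideas described above: filling holes in a depth map and merging multiple cameras' data into one scene. This is our own illustration, not the authors' actual implementation; the helper names, camera intrinsics, and poses below are all made-up assumptions, and the real system does this per-pixel on the GPU rather than in NumPy.

```python
import numpy as np

def fill_holes(depth, passes=4):
    """Fill zero-valued (missing) depth pixels with the median of their
    valid 3x3 neighbors, repeated a few times to close small holes."""
    d = depth.astype(np.float32).copy()
    for _ in range(passes):
        for y, x in np.argwhere(d == 0):
            patch = d[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            valid = patch[patch > 0]
            if valid.size:
                d[y, x] = np.median(valid)
    return d

def depth_to_points(depth, fx, fy, cx, cy, pose):
    """Back-project a depth image to 3D points, then transform them by
    the camera's 4x4 world pose so all sensors share one frame."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    pts = np.stack([(xs - cx) * z / fx, (ys - cy) * z / fy, z,
                    np.ones_like(z)], axis=-1).reshape(-1, 4)
    return (pts @ pose.T)[:, :3]

# Usage: merge two (simulated) sensors into a single point cloud.
rng = np.random.default_rng(0)
depth_a = rng.uniform(0.5, 3.0, (48, 64))
depth_a[10:14, 20:24] = 0                    # simulate a sensor hole
pose_a = np.eye(4)
pose_b = np.eye(4); pose_b[0, 3] = 1.0       # second camera offset 1 m
cloud = np.vstack([
    depth_to_points(fill_holes(depth_a), 60, 60, 32, 24, pose_a),
    depth_to_points(rng.uniform(0.5, 3.0, (48, 64)), 60, 60, 32, 24, pose_b),
])
print(cloud.shape)  # (6144, 3): one merged scene from both sensors
```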
The result is an awesome real-time 3D rendering of the subject and surrounding room that reminds us of this papercraft costume. The 3D video can be viewed at a remote station, which uses its own Kinect sensor to track the position of your eyes, altering the video feed's perspective accordingly. The telepresence system can also composite virtual, non-existent objects into the scene, making it a great tool for remote technology demonstrations and the like.
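That eye-tracked perspective trick generally boils down to an off-axis ("asymmetric frustum") projection: as the tracked eye moves, the frustum skews the opposite way so the remote scene appears fixed behind the screen. Here's a hedged sketch of the standard math; the screen dimensions and eye positions are invented values for illustration, not details from this project.

```python
import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near=0.1, far=10.0):
    """Build an OpenGL-style projection matrix for an eye at position
    `eye` (meters, relative to the screen center, +z toward viewer)."""
    ex, ey, ez = eye
    # Frustum edges on the near plane, shifted opposite the eye offset.
    left   = (-screen_w / 2 - ex) * near / ez
    right  = ( screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top    = ( screen_h / 2 - ey) * near / ez
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

# As the tracked eye moves right, the frustum skews left, shifting the
# rendered perspective the way the video demonstrates.
for x in (0.0, 0.2):
    print(off_axis_projection((x, 0.0, 0.6), 0.5, 0.3)[0, 2])
```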
Check out the video below to see a thorough walkthrough of this 3D telepresence system.