Thus far, the vast majority of human photographic output has been two-dimensional. 3D displays have come and gone in various forms over the years, but as technology progresses, we’re seeing more and more immersive display technologies. Of course, these displays need content, and capturing that content in three dimensions requires special tools and techniques. Kim Pimmel came down to Hackaday Superconference to give us a talk on the current state of the art in advanced AR and VR camera technologies.
Kim has plenty of experience with advanced displays, with an impressive resume in the field. Having worked on Microsoft’s HoloLens, he now leads Adobe’s Aero project, an AR app aimed at creatives. Kim’s journey began at a young age, first experimenting with his family’s Yashica 35mm camera, where he discovered a love for capturing images. Over the years, he experimented with a wide variety of gear, receiving a Canon DSLR from his wife as a gift and later tinkering with the Stereo Realist 35mm 3D camera. The latter fed Kim’s growing obsession with three-dimensional capture techniques.
Through his work in the field of AR and VR displays, Kim became familiar with the combination of the Ricoh Theta S 360-degree camera and the Oculus Rift headset. This allowed users to essentially sit inside a photo sphere and see the image around them in three dimensions. While this was compelling, Kim noted that a lot of 360-degree content has issues with framing: there’s no way to guide the observer towards the part of the image you want them to see.
Moving your hand makes this hexapod dance like a stringless marionette. Okay, so there’s obviously one string which is actually a wire but you know what we mean. The device on the floor is a Leap Motion sensor which is monitoring [Queron Williams’] hand gestures. This is done using a Processing library which leverages the Leap Motion API.
Right now the hand signals only affect the pitch, roll, and yaw of the hexapod’s body, but [Queron] does plan to add support for monitoring both hands for more control. Check out the demo after the break; this is getting pretty close to the manipulations shown by [Tom Cruise] in Minority Report. Add Google Glass for a heads-up display and you could have auxiliary controls rendered on the periphery.
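The Leap Motion library hands the sketch a pitch, roll, and yaw for each tracked hand, which then get mapped onto the hexapod’s body. As a rough illustration of what’s happening under the hood (a Python sketch rather than [Queron]’s Processing code; the vector conventions and 20° tilt limit are our assumptions), those angles can be derived from the hand’s pointing direction and palm-normal vectors:

```python
import math

def hand_pose(direction, palm_normal):
    """Estimate a hand's pitch, roll, and yaw in radians from its pointing
    direction and palm-normal vectors (Leap-style axes: -z points away
    from the user, +y is up)."""
    dx, dy, dz = direction
    nx, ny, nz = palm_normal
    pitch = math.atan2(dy, -dz)  # fingers tilted up/down
    yaw   = math.atan2(dx, -dz)  # fingers swung left/right
    roll  = math.atan2(nx, -ny)  # hand tilted about the pointing axis
    return pitch, roll, yaw

def clamp_to_body_limit(angle, limit=math.radians(20)):
    """A hexapod body can only tilt so far; keep commands in a safe range."""
    return max(-limit, min(limit, angle))
```

A level hand pointing straight ahead (direction `(0, 0, -1)`, palm down `(0, -1, 0)`) yields zero on all three axes; tilt the fingers up and the commanded pitch rises until the clamp kicks in.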
While you’re looking at [Queron’s] project post, click on his ‘hexapod’ tag to catch a glimpse of the build process for the robot.
This home automation project lets you flap your arms to turn things on and off. [Toon] and [Jiang] have been working on the concept as part of their Master’s thesis at university. It uses a 3D camera with some custom software to pick up your gestures. What we really like is the laser pointer which provides feedback: you can see a red dot on the wall that follows wherever the user points. Each controllable device has a special area to which the dot will snap when the user is pointing close to it. By raising the other arm, the user can turn the selected device on or off.
Take a look at the two videos after the break to get a good overview of the concept. We’d love to see some type of laser projector used instead of just a single dot. This way you could have a pop-up menu system. Imagine getting a virtual remote control on the wall for skipping to the next audio track, adjusting the volume, or changing the TV channel.
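The snapping behavior described above boils down to a nearest-neighbor check: compare where the user is pointing against each device’s registered spot on the wall, and jump the dot over when it lands close enough. A minimal Python sketch of the idea, with made-up wall coordinates and snap radius:

```python
import math

# Hypothetical device positions on the wall, in metres (not from the project).
DEVICES = {"lamp": (0.5, 1.2), "stereo": (2.0, 1.0), "fan": (3.1, 0.4)}
SNAP_RADIUS = 0.3  # snap when the user points within 30 cm of a device

def snap_pointer(x, y, devices=DEVICES, radius=SNAP_RADIUS):
    """Return (dot_position, selected_device). The laser dot jumps to the
    nearest device if the user points close enough; otherwise it stays put
    and nothing is selected."""
    name, pos = min(devices.items(), key=lambda kv: math.dist((x, y), kv[1]))
    if math.dist((x, y), pos) <= radius:
        return pos, name
    return (x, y), None
```

With the selection made, the raise-the-other-arm gesture only has to toggle whichever device name came back.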
[Steven] needed to come up with a project for the Computer Vision course he was taking, so he decided to try building a portable 3D camera. His goal was to build a Kinect-like 3D scanner, though his solution is better suited for very detailed still scenes, while the Kinect performs shallow, less detailed scans of dynamic scenes.
The device uses a TI DLP Pico projector for displaying the structured light patterns, while a cheap VGA camera is tasked with taking snapshots of the scene he is capturing. The data is fed into a Beagleboard, where OpenCV is used to create point clouds of the objects he is scanning. That data is then handed off to Meshlab, where the point clouds can be combined and tweaked to create the final 3D image.
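[Steven]’s pipeline does the heavy lifting in OpenCV, but the core structured-light idea is easy to sketch: project a sequence of binary stripe patterns (commonly Gray codes), photograph each one, and decode the stack of photos into a per-pixel projector column index, which is what triangulation into a point cloud needs. A minimal NumPy sketch of just the decoding step, assuming the photos are already normalized to [0, 1]:

```python
import numpy as np

def decode_gray_patterns(images, thresh=0.5):
    """Decode photos of Gray-code stripe patterns into per-pixel projector
    column indices. `images` has shape (n_bits, H, W), most significant
    pattern first, with pixel values normalized to [0, 1]."""
    bits = (np.asarray(images) > thresh).astype(np.uint32)
    # Gray -> binary: each bit is the XOR of all Gray bits above it,
    # which for 0/1 values is just a running sum modulo 2.
    binary = np.cumsum(bits, axis=0) % 2
    weights = 2 ** np.arange(bits.shape[0] - 1, -1, -1, dtype=np.uint32)
    return np.tensordot(weights, binary, axes=1)
```

Each decoded index pins down which projector column lit a given camera pixel; intersecting that projector stripe plane with the camera ray is what yields the 3D point.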
As [Steven] points out, the resultant images are pretty impressive considering his rig is completely portable and that it only uses an HVGA projector with a VGA camera. He says that someone using higher resolution equipment would certainly be able to generate fantastically detailed 3D images with ease.
Be sure to check out his page for more details on the project, as well as links to the code he uses to put these images together.
[fotoopa]’s build is based around two cameras: a Nikon D200 and a D300. These cameras are pointed towards the subject insect, with two mirrors allowing for a nice stereo separation for 3D images. Of course, the trouble is snapping the picture when an insect flies in front of the rig.
For shutter control, [fotoopa] used two IR laser pointers aimed at the point where the two cameras’ views converge. A photodiode in a lens above the rig detects this IR dot and triggers the shutters. To work around the horribly slow 50 ms shutter lag of the Nikons, an external high-speed shutter was added so the image is captured within 3 ms.
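The detection side of that trigger chain is conceptually simple: watch the photodiode level and fire the instant the IR dot shows up. A toy Python sketch of the idea; the sample rate and threshold here are invented numbers, not [fotoopa]’s:

```python
SAMPLE_HZ = 10_000   # assumed photodiode sampling rate
THRESHOLD = 0.8      # assumed normalized level meaning "IR dot detected"

def first_trigger_time(samples, threshold=THRESHOLD, rate=SAMPLE_HZ):
    """Return the time in seconds of the first sample at or above the
    threshold, or None if the beam spot never appears."""
    for i, level in enumerate(samples):
        if level >= threshold:
            return i / rate
    return None
```

At numbers like these, detection itself lands within a tenth of a millisecond, so the 3 ms external shutter, not the sensing electronics, dominates the capture latency.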
This year’s rig takes things down a notch from [fotoopa]’s 2011 build; he’s only working with one camera. Even though he didn’t get any 3D images this year, the skill in making such an awesome rig is impressive.