[William Steptoe] is a post-doctoral research associate at University College London. This means he gets to play with some really cool hardware. His most recent project is an augmented reality update to the Oculus Rift. This is much more than hacking a pair of cameras onto the Rift, though. [William] has created an entire AR/VR user interface, complete with dockable web browser screens.

He started with a stock Rift and a room decked out with a professional motion capture system. The Rift was made wireless with the addition of an ASUS Wavi and a laptop battery system. [William] found that the wireless link added no appreciable latency to the Rift.

To move into the realm of augmented reality, [William] added a pair of Logitech C310 cameras. The C310 lenses’ field of view was a bit narrow for what he needed, so lenses from a Genius WideCam F100 were swapped in. The Logitech cameras were stripped down to the board level and mounted on 3D printed brackets that clip onto the Rift’s display. ShapeLock was added to the mounts so the convergence of the cameras can be easily adjusted.
Stereo camera calibration is a difficult and processor-intensive process. Add to that multiple tracking systems (both the 6DOF head tracking on the Rift and the video tracker built into the room) and you’ve got quite a computational challenge. [William] found that he needed to use a Unity shader running on his PC’s graphics card to get the system to operate in real-time. The results are quite stunning. We didn’t have a Rift handy to view the 3D portions of [William’s] video; however, the sense of presence in the room still showed through. Videos like this make us excited for the future of augmented reality applications, with the Rift, the upcoming castAR, and other systems.
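[William’s] actual Unity shader isn’t published in the post, but the per-pixel work such a lens-correction shader typically performs is the radial (Brown-Conrady) distortion model: for each ideal output pixel, compute where that ray lands in the distorted camera image and sample there. Here is a minimal numerical sketch of that mapping; the coefficients `k1` and `k2` below are made-up placeholders, not his calibration values.

```python
def distort_uv(u, v, k1, k2):
    """Map an ideal, undistorted normalized image coordinate (u, v) to the
    coordinate where that ray appears in the captured (distorted) frame,
    using the two-term radial Brown-Conrady model. A real-time undistortion
    shader evaluates this per pixel and samples the camera texture there.
    k1, k2 are radial distortion coefficients from calibration."""
    r2 = u * u + v * v            # squared distance from the optical center
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * f, v * f

# The optical center is a fixed point of the mapping:
print(distort_uv(0.0, 0.0, -0.1, 0.0))  # -> (0.0, 0.0)

# With a negative k1 (barrel distortion, typical of wide-angle lenses),
# points away from the center are pulled inward:
print(distort_uv(1.0, 1.0, -0.1, 0.0))
```

Doing this lookup on the GPU is cheap because every pixel is independent, which is why offloading it to a shader makes real-time operation feasible.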
[Thanks Scott!]
Sweet, now all I need is to get me a bat-cape and climb a really tall bridge and I’m set!!
Thumbs up if you watched the 3D parts of the video with your eyes splayed like you watch those magic pictures that appear like noise.
Stereoscopic images :)
That’s a generic term. The specific term for the magic pictures seems to be “autostereogram”
Thumbs Up!
Unfortunately it’s inverted so your eyes bleed.
No, it’s exactly the right way around. Left for left and right for right eye.
The problem is, you have to be Sartre to view it entirely comfortably. Still, it’s possible with a bit of practice.
I’ll be watching this tonight after work with my own OR ;)
Yep, setting the video to full screen and crossing your eyes until you see three images, then focusing on the central one, gives a pretty convincing 3D effect.
I always do this on rift videos :)
I love these showcases, even if not all of them make the cut for actual applications.
There is no actual converging parallax
Lol when he loaded the avatar I didn’t know if he was going to pay her or beat her up /jk
Reminds me of the SGI Onyx we used to play on (although it was more VR than AR). It was like a 3D Katamari looking world. This was back in 2001.
How much actual use the OR will garner will depend heavily on how useful AR and VR actually are in the fields they are trying to sell to. Nice to see that things are plugging along at a nice clip :) Keep up the good work.
And then….
please enable fullscreen youtube videos!!!
Interesting concept but useless in real life. This setup basically strips out the phase information from the wavefield arriving at your retina, and the lenses of your eyes cannot properly focus without that information. Basically you will see the world in pseudo-3D – you won’t be able to focus on objects located in different depth planes in the real world; you are limited to the focal distance of the lenses of the stereo camera.
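The objection above can be made concrete with a line of arithmetic: the cameras bake a single focal plane into the image, so the accommodation (focus) error for any real object is the difference of the reciprocal distances, measured in diopters. The 2 m camera focus distance and the object distances below are hypothetical example values, not numbers from [William’s] setup.

```python
def focus_mismatch_diopters(object_dist_m, camera_focus_m):
    """Dioptric gap between where an object really is and the single
    focal plane baked into the camera image. Larger values mean the
    eye's accommodation cue disagrees more with the stereo depth cue."""
    return abs(1.0 / object_dist_m - 1.0 / camera_focus_m)

# Hypothetical rig with its cameras focused at 2 m:
for d in (0.5, 1.0, 2.0, 10.0):
    print(f"object at {d} m -> mismatch {focus_mismatch_diopters(d, 2.0):.2f} D")
```

An object at arm's length (0.5 m) ends up 1.5 D away from the baked-in focal plane, while distant objects stay close to it, which matches the commenter's point that nearby depth planes are where this kind of video pass-through breaks down.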
This guy should be using his skills and resources to overlay virtual reality information directly onto your retina by optical means, without disrupting the real-world → retina optical path.
I get the feeling that these kinds of experiments would be better with CastAR.
Just sayin.
His PhD research is in avatar-mediated telecommunication in CAVE-like virtual reality systems. He is not an optical engineer.
The Oculus Rift is good enough for game developers, so I think it would work fine for AR research. Once plenoptic cameras and light-field displays are available as commercial off-the-shelf hardware, this research will translate over easily.
Looks like there’s still some programming to be done, with the avatar being farther away than the guy’s arm, but that is amazing. A few more steps down the road you should be able to dance interactively with her. I don’t know how he’s going to accomplish that; I wonder if a color area could be designated as X distance from the Oculus, sort of like a green screen.
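The green-screen idea in the comment above — treating a designated color region as "known depth" so the avatar can be occluded correctly — can be sketched as a crude chroma-key mask. This is only an illustration of the commenter's suggestion, not anything from [William’s] system; the dominance ratio of 1.3 is an arbitrary threshold.

```python
import numpy as np

def green_mask(rgb, ratio=1.3):
    """Boolean mask of pixels where green dominates both other channels.
    In the commenter's scheme, masked pixels would be assigned a fixed
    distance X from the headset, letting virtual content composite in
    front of or behind them. rgb is an (H, W, 3) uint8 array."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (g > ratio * r) & (g > ratio * b)

# Tiny 1x2 test image: one bright-green pixel, one neutral gray pixel.
img = np.array([[[10, 200, 10], [120, 120, 120]]], dtype=np.uint8)
print(green_mask(img))
```

Real systems solve this with per-pixel depth (stereo matching or a depth camera) rather than color, since a color key fails the moment the keyed hue appears elsewhere in the scene.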