Using the system is as simple as holding up a green square of cardboard. Viewing the world through an old camcorder, [Julie’s] project detects and tracks the green square, then overlays a 3D image of Cornell’s McGraw Tower on top of the green area. The tower moves with the cardboard, appearing to be anchored to it. [Julie] injected a bit of humor into the project with an option to swap the tower out for an image of her professor, [Bruce Land].
[Julie] started with an NTSC video signal, captured by a DE2-115 board with an Altera Cyclone IV FPGA. Once the signal is inside the FPGA, [Julie’s] code runs it through a median filter. A color detector finds an area of green pixels, which is passed to a corner follower and a corner median filter. The tower or Bruce images are loaded from ROM and overlaid on the video stream, which is then output via VGA.
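For flavor, here’s a rough software sketch of that detection chain in Python with NumPy and SciPy. The thresholds, the 3×3 median window, and the axis-aligned corner estimate are our own guesses for illustration; [Julie’s] version does all of this in pure FPGA logic, pixel by pixel.

```python
import numpy as np
from scipy.ndimage import median_filter

def find_green_square(frame):
    """Find the green cardboard square in an RGB frame (H x W x 3, uint8).
    A software stand-in for the FPGA's median filter -> color detector ->
    corner follower chain; all thresholds here are hypothetical."""
    smoothed = median_filter(frame, size=(3, 3, 1))      # knock out video noise
    r = smoothed[..., 0].astype(int)
    g = smoothed[..., 1].astype(int)
    b = smoothed[..., 2].astype(int)
    mask = (g > 90) & (g > r + 40) & (g > b + 40)        # green clearly dominates
    ys, xs = np.nonzero(mask)
    if xs.size < 100:                                    # too few green pixels: no square in view
        return None
    # Crude corner estimate: the bounding box of the green blob. The real
    # corner follower tracks the quad's corners so the tower overlay can
    # follow the cardboard around the screen.
    return (xs.min(), ys.min()), (xs.max(), ys.max())
```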
The amazing part is that there is no microprocessor involved in any of the processing. Logic and state machines control the show. Great work [Julie], we hope [Bruce] gives you an A!
No, that isn’t a scene from a horror movie up there, it’s [Oliver Kreylos’] avatar in a 3D office environment. If he looks a bit strange, it’s because he’s wearing an Oculus Rift, and his image is being stitched together from 3 Microsoft Kinect cameras.
[Oliver] has created a 3D environment which is incredibly realistic, at least to the wearer. He believes the secret is in the low latency of the entire system. When coupled with a good 3D environment, like the office shown above, the mind is tricked into believing it is really in the room. [Oliver] mentions that he finds himself subconsciously moving to avoid bumping into a table leg that he knows isn’t there. In [Oliver’s] words, “It circumnavigates the uncanny valley.”
Instead of pulling skeleton data from the 3 Kinect cameras, [Oliver] is using video and depth data. He’s stitching and processing this data on an i7 Linux box with an Nvidia GeForce GTX 770 video card. Powerful hardware for sure, but not the cutting-edge monster rig one might expect. [Oliver] also documented his software stack. He’s using the Vrui VR Toolkit, the Kinect 3D Video Capture Project, and the Collaboration Infrastructure.
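The core trick behind merging those views is back-projecting each depth pixel into 3D and then moving every camera’s points into one shared frame using calibrated extrinsics. Here’s a minimal NumPy sketch of that step; the intrinsics and the rotation/translation are placeholders, and [Oliver’s] actual implementation lives in the Vrui and Kinect packages linked above.

```python
import numpy as np

def depth_to_points(depth_mm, fx, fy, cx, cy):
    """Back-project a Kinect-style depth image (in millimetres) into a 3D
    point cloud using pinhole intrinsics (fx, fy, cx, cy)."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0      # millimetres -> metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]               # drop invalid zero-depth pixels

def to_world(points, R, t):
    """Move one camera's points into a shared world frame using its
    calibrated rotation R (3x3) and translation t (3,)."""
    return points @ R.T + t

# Merging the three cameras is then just concatenating their transformed
# clouds, with the live color video textured onto the resulting surface.
```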
We can’t wait to see what [Oliver] does when he gets his hands on the Kinect One (and some good Linux drivers for it).
[AlexPewPew] tipped us off on some interesting virtual reality work going on at the Swiss Federal Institute of Technology in Zurich. Mapping a user’s head movement to match the images shown in a head mounted display is something the Oculus Rift is very good at. But walking and moving around freely in that virtual environment requires completely different hardware. We’ve seen some ingenious setups before, but nothing as efficient as this.
In the video above, they have put sheets of bar-coded paper on the ceiling in a grid pattern. A camera mounted on the user’s head looks up at the grid of papers to work out the user’s location. The neatest part, though, is how they fit a large virtual space into a small room. As the user walks down a straight virtual path, software slowly curves the corresponding path in the small room. The end result is that the user walks in circles in the small room while thinking he or she is exploring a much larger space. Neat stuff!
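The redirection itself boils down to “curvature gain”: inject a tiny extra rotation into the virtual heading for every step, so a straight virtual line maps to an arc in the real room. Here’s a toy model of the idea; the 7.5 m radius is purely illustrative, and real systems tune the gain to stay below the user’s detection threshold.

```python
import math

def redirected_heading(virtual_heading, step_length, room_radius=7.5):
    """Toy curvature-gain model: rotate the virtual world by
    (distance walked / arc radius) radians, so a straight virtual path
    becomes a circle of `room_radius` metres in the real room."""
    return virtual_heading + step_length / room_radius

# Walk 10 m of "straight" virtual corridor in 0.5 m steps:
heading = 0.0
for _ in range(20):
    heading = redirected_heading(heading, 0.5)
print(f"Injected turn over 10 m: {math.degrees(heading):.1f} degrees")
```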
If you have a head mounted display lying around, and can’t think of anything to enter into The Hackaday Prize contest, this would be a great concept to work on. What are you waiting for…get hacking!
If you’re a gamer, lag is one of your worst enemies. But what would it be like if you experienced lag in real life? Imagine how frustrating that would be!
Introducing Living With Lag — a cute experiment put on by an internet provider called Ume. Using an Oculus Rift development kit, a Raspberry Pi, noise cancelling headphones and a webcam, Ume’s thrown together a fun social experiment. The webcam captures both audio and video, which the Pi replays to the Oculus Rift and headphones after a variable delay to show the effects of slow internet speeds.
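Conceptually the rig is just a buffer of captured frames replayed late. Here’s a quick OpenCV sketch of the video half (audio would get the same treatment); the delay value and camera index are arbitrary, and the real build pushes the result to the Rift rather than a desktop window.

```python
import collections
import time
import cv2

DELAY_S = 0.75                   # injected "lag"; Ume varied this per scenario
buffer = collections.deque()     # (capture_time, frame) pairs waiting to be shown

cap = cv2.VideoCapture(0)        # the webcam strapped to the rig
while True:
    ok, frame = cap.read()
    if not ok:
        break
    now = time.time()
    buffer.append((now, frame))
    # Display a frame only once it is at least DELAY_S old.
    if now - buffer[0][0] >= DELAY_S:
        _, delayed = buffer.popleft()
        cv2.imshow("living with lag", delayed)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```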
They attempt four different scenarios. Ping pong is pretty much impossible. Dance class is just embarrassing. And attempting to cook or eat is absolutely hilarious. They even try bowling, which also proves more difficult than you could imagine!
It’s a project by the Bristol Interaction and Graphics group at the University of Bristol, and it’s an interesting twist on 3D projection. They’ve created what they call the MisTable, which features a fog machine, “fog screens”, and three projectors. The result is an interactive table for two people. The tabletop surface is a display, as are the see-through fog screens in front of each person.
While it is fairly easy to understand and explain, there’s a handy diagram after the break showing how the system works. Our question is: when is one of you guys or gals going to try making one?
Facebook has agreed to purchase Oculus VR. The press values the deal at about $2 billion USD in cash and stock. This is great news for Oculus’ investors. The rest of the world has a decidedly different opinion. [Notch], the outspoken creator of Minecraft, was quick to tweet that a possible Rift port has now been canceled, as Facebook creeps him out. He followed this up with a blog post.
I did not chip in ten grand to seed a first investment round to build value for a Facebook acquisition.
Here at Hackaday, we’ve been waiting a long time for affordable virtual reality. We’ve followed Oculus since the early days, all the way up through the recent open source hardware release of their latency tester. Our early opinion on the buyout is not very positive. Facebook isn’t exactly known for contributions to open source software or hardware, nor are they held in high regard for standardization in their games API. Only time will tell what this deal really means for the Rift.
The news isn’t all dark though. While Oculus VR has been a major catalyst for virtual reality displays, there are other players. We’ve got our eggs in the castAR basket. [Jeri], [Rick], and the rest of the Technical Illusions crew have been producing some great demos while preparing castAR for manufacture. Sony is also preparing Project Morpheus. The VR ball is rolling. We just hope it keeps on rolling – right into our living rooms.
Remember the days when the future was console cowboys running around cyberspace trying to fry each other’s brains out? MIT Media Lab remembers too. They have a class called MAS S65: Science Fiction to Science Fabrication in which students are trying to create hardware inspired by technology imagined in the works of legendary Speculative Fiction writers such as William Gibson, Neal Stephenson and many others. They happened to be at SXSW this year showing off some of the projects their students have been working on. Since we were around, we thought we should pay them a little visit. Fifteen minutes later it was clear why working at Media Lab is a dream for so many hackers/makers out there.
Jon Ferguson from the Media Lab showed us a prototype of a game called Case and Molly, inspired by scenes in Neuromancer in which Case helps Molly navigate by observing the world through the vision-enhancing lenses sealed in her eye sockets. OK, they haven’t really built surgically-attached, internet-connected lenses (yet… we’re certain [Ben Krasnow] is working on it), but they have built a very cool snap-on 3D vision mechanism that attaches to the built-in iPhone camera. Add a little bit of live video streaming, a person with an Oculus Rift, and a game controller, and you can party like it’s 1984.
Another interesting project is called “Mandala: I am building E14”, and it uses data collected from a sensor network in MIT’s E14 building to provide a view of the universe from the standpoint of a single building. It tries to address the old “what if buildings could talk?” question by visualizing the paths of people walking around the building and providing an overall sense of activity in different areas. It is also a pretty good demonstration of all the creepy things that are yet to be built using all the ‘connected devices’ coming our way.
It gets better. The Sensory Fiction project is a special book that comes with a vest, which enhances the reading experience by providing stimulation that makes the reader feel the same physiological responses as the characters in the book. The wearable supports a whole bunch of outputs (light, sound, temperature, pressure, and vibration) that can even influence your heart rate. It is very easy to imagine the many potential ‘creative’ abuses of such a device.
Another Neuromancer-inspired piece, called LIMBO (Limbs In Motion By Others), allows synchronization of hand gestures between multiple ‘users’ over a network using a special electric muscle stimulation rig. The result is a sort of ‘meat puppet’: one person’s hand is forced to match the movements of the other’s. Devious ideas aside, it has great potential for helping paraplegics control their muscle movement using eye tracking.
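As a sketch of how such a link might be wired, one side streams hand-pose samples over the network while the other maps them onto stimulation channel intensities. Everything here (the JSON format, the channel names, the 0 to 1 intensity scale) is our own assumption rather than LIMBO’s actual protocol, and any real EMS driver would need hard safety limits in hardware.

```python
import json
import socket

def stream_pose(sock, addr, finger_flexion):
    """Send one hand-pose sample: per-finger flexion, 0.0 = open, 1.0 = closed."""
    sock.sendto(json.dumps(finger_flexion).encode(), addr)

def pose_to_channels(finger_flexion, max_drive=0.5):
    """Map a received pose onto EMS channel drive levels, clamped well below
    full intensity (the clamp stands in for real safety hardware)."""
    return {finger: min(max(v, 0.0), 1.0) * max_drive
            for finger, v in finger_flexion.items()}

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sample = {"thumb": 0.1, "index": 0.8, "middle": 0.7}
    stream_pose(sock, ("127.0.0.1", 9000), sample)   # receiver address is hypothetical
    print(pose_to_channels(sample))
```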
Finally, a more cheerful project called BubbleSynth demonstrates an open computer vision/sound synthesis platform that uses physical processes as input to granular synthesis. The current installation pairs a bubble-generating machine with motion tracking that triggers a modular synthesizer, resulting in beautiful ambient sounds. The audio part of the platform is based on SuperCollider and is completely customizable. The next iteration of the project will use the movement of a species of bacteria to generate the music. Why struggle to learn an instrument? We’ll get bacteria to do all the work.
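The control flow is easy to picture: a blob tracker watches the bubbles, and each detection becomes a grain trigger sent to the synth. Here’s a hedged Python sketch using OpenCV and python-osc; the OSC address and the parameter mapping are our inventions, standing in for whatever the installation actually sends to SuperCollider.

```python
import cv2
from pythonosc.udp_client import SimpleUDPClient

# SuperCollider listens for OSC on port 57120 by default; the address pattern
# and parameter mapping below are illustrative, not BubbleSynth's real ones.
sc = SimpleUDPClient("127.0.0.1", 57120)

cap = cv2.VideoCapture(0)
detector = cv2.SimpleBlobDetector_create()     # stand-in for the bubble tracker

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for kp in detector.detect(gray):
        # x -> pitch, y -> pan, blob size -> grain duration (arbitrary mapping)
        sc.send_message("/bubblesynth/grain", [kp.pt[0], kp.pt[1], kp.size])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```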
Feel like building something similar? Hackaday’s current Sci-Fi contest is a perfect excuse. Need inspiration? Check out the syllabus for the MIT SciFi2SciFab class!