[Anjul Patney] and [Qi Sun] demonstrated a fascinating new technique at NVIDIA’s GPU Technology Conference (GTC) for tricking a human into thinking a VR space is larger than it actually is. The way it works is this: when a person walks around in VR, they invariably make turns. During these turns, it’s possible to fool the person into thinking they have pivoted more or less than they have actually physically turned. With a way to manipulate perception of turns comes a way for software to gently manipulate a person’s perception of how large a virtual space is. Unlike other methods that rely on visual distortions, this method is undetectable by the viewer.
The software essentially exploits a quirk of how our eyes work. When a human’s eyes move around to look at different things, the eyeballs don’t physically glide smoothly from point to point. The eyes make frequent but unpredictable darting movements called saccades. There are a number of deeply interesting things about saccades, but the important one here is the fact that our eyes essentially go offline during saccadic movement. Our vision is perceived as a smooth and unbroken stream, but that’s a result of the brain stitching visual information into a cohesive whole, and filling in blanks without us being aware of it.
Part one of [Anjul] and [Qi]’s method is to manipulate perception of a virtual area relative to actual physical area by making a person’s pivots not a 1:1 match. In VR, it may appear one has turned more or less than one has in the real world, and in this way the software can guide the physical motion while making it appear in VR as though nothing is amiss. But by itself, this isn’t enough. To make the mismatches imperceptible, the system watches the eye for saccades and times its adjustments to occur only while they are underway. The brain ignores what happens during saccadic movement, stitches together the rest, and there you have it: a method to gently steer a human being around a virtual space that is larger than the physical area available.
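The core loop can be sketched in a few lines. This is a simplified illustration of the saccade-gated idea, not the researchers' actual implementation; the velocity threshold and per-frame gain are assumed placeholder values.

```python
# Minimal sketch of saccade-gated redirection: nudge the virtual camera's
# yaw only while the eye is mid-saccade, so the change goes unnoticed.
# Both constants below are illustrative assumptions, not the paper's values.

SACCADE_VELOCITY_DEG_S = 180.0   # eye speeds above this count as a saccade
MAX_GAIN_DEG_PER_FRAME = 0.14    # tiny extra yaw injected per frame

def redirect_step(eye_velocity_deg_s, needed_offset_deg, camera_yaw_deg):
    """Apply a sliver of the remaining redirection offset, but only
    during a saccade. Returns (new_camera_yaw, remaining_offset)."""
    if abs(eye_velocity_deg_s) < SACCADE_VELOCITY_DEG_S:
        return camera_yaw_deg, needed_offset_deg  # eye is fixating: do nothing
    step = max(-MAX_GAIN_DEG_PER_FRAME,
               min(MAX_GAIN_DEG_PER_FRAME, needed_offset_deg))
    return camera_yaw_deg + step, needed_offset_deg - step

# Feed it a stream of eye-tracker velocity samples (deg/s); the values
# around 400 represent one saccade, the rest are fixation jitter.
yaw, remaining = 0.0, 2.0   # we want to sneak in 2 degrees of extra turn
for v in [30, 50, 400, 420, 390, 40]:
    yaw, remaining = redirect_step(v, remaining, yaw)
```

Run over many saccades, those fraction-of-a-degree nudges accumulate into full redirected turns without the wearer ever seeing the scene rotate.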
Embedded below is a video demonstration and overview, which covers other methods of manipulating perception of space in VR and how this technique avoids their pitfalls.
Built-in Leap Motion camera for precise hand tracking
Yes, you read that last line correctly. The North Star will be open source hardware. Leap Motion is planning to drop all the hardware information next week.
Now that we’ve got you excited, let’s mention what the North Star is not — it’s not a consumer device. Leap Motion’s idea here was to create a platform for developing Augmented Reality experiences — the user interface and interaction aspects. To that end, they built the best head-mounted display they could on a budget. The company started with standard 5.5″ cell phone displays, which made for an incredibly high resolution but low framerate (50 Hz) device. It was also large and completely impractical.
The current iteration of the North Star uses much smaller displays, which results in a higher frame rate and a better overall experience. The secret sauce seems to be Leap’s use of ellipsoidal mirrors to achieve a large FOV while maintaining focus.
We’re excited, but also a bit wary of the $100 price point — Leap Motion is quick to note that the price is “in volume”. They also mention using diamond-tipped tooling in a vibration-isolated lathe to grind the mirrors down. If Leap hasn’t invested in some injection molding, those parts are going to make the whole thing expensive. Keep your eyes on the blog here for more information as soon as we have it!
The browser you are reading this page in is an exceptionally powerful piece of software, with features and APIs undreamed of by the developers of its early-1990s ancestors such as NCSA Mosaic. For all that though, it is still visually a descendant of those early browsers, a window for displaying two-dimensional web pages.
Some of this may be about to change, as in recognition of the place virtual reality devices are making for themselves, Mozilla have released Firefox Reality, in their words “a new web browser designed from the ground up for stand-alone virtual and augmented reality headsets”. For now it will run on Daydream and GearVR devices as a developer preview, but the intended target for the software is a future generation of hardware that has yet to be released.
Readers with long memories may remember some of the hype surrounding VR in browsers back in the 1990s, when crystal-ball-gazers who’d read about VRML would hail it as the Next Big Thing without pausing to think about whether the devices to back it up were on the market. It could be that this time the hardware will match the expectation, and maybe one day you’ll be walking around the Hackaday WrencherSpace rather than reading this in a browser. See you there!
They’ve released a video preview that disappointingly consists of a 2D browser window in a VR environment. But it’s a start.
Looking for ideas for your haptics projects? [Destin] of the Smarter Every Day YouTube channel got a tour from the engineers at HaptX of their full-featured VR glove with amazing haptic feedback: a very fine, 120-point sense of touch, force feedback for each finger, temperature, and motion tracking.
In hacks, we usually stimulate the sense of touch by vibrating something against the skin. With this glove, they use pneumatics to press against the skin. A single fingertip has multiple roughly 1/8-inch air bladders in contact with it. Each bladder is separately controlled by pushing air into it, and the air pressure can vary continuously, so a bladder can press lightly, firmly, or anywhere in between. The glove has 120 of these bladders spread out over the fingers and the palm. Unfortunately, they didn’t allow him to see the valves controlling the pneumatics, but if you are looking for a low-frequency, low-cost way to actuate valves you might consider using syringes. The engineers do tell [Destin] that if your VR scene shows something pressing against your virtual finger, as long as your haptics push against your real finger within around 1/8th of a second, your brain won’t notice the delay.
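The control story here is simpler than it sounds: each bladder gets a continuously variable pressure setpoint, and the whole pipeline just has to land inside that 1/8-second window. A minimal sketch of the idea follows; the pressure range and function names are our assumptions, not HaptX details.

```python
# Illustrative sketch of driving a continuously variable tactor array:
# normalized contact forces per bladder become pressure setpoints, and a
# simple check confirms the whole loop fits the ~1/8 s perception window.
# MAX_PRESSURE_KPA is an assumed full-scale value, not a HaptX spec.

MAX_PRESSURE_KPA = 30.0
LATENCY_BUDGET_S = 0.125   # the ~1/8 s the engineers quote

def bladder_pressures(contact_forces):
    """Scale normalized contact forces (0..1) into clamped pressure
    setpoints, one per bladder."""
    return [min(max(f, 0.0), 1.0) * MAX_PRESSURE_KPA for f in contact_forces]

def within_budget(sim_time_s, valve_time_s):
    """True if physics simulation plus valve actuation fit the window."""
    return sim_time_s + valve_time_s <= LATENCY_BUDGET_S

# Three bladders: no contact, half force, and an over-range value that clamps.
pressures = bladder_pressures([0.0, 0.5, 1.2])
```

The clamping matters in practice: a physics engine will happily report penetration forces beyond anything the valves can render, and you don’t want that wrapping around into a painful squeeze.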
They’re also working on using hot and cold fluids to give a sense of temperature within a glove. This is demonstrated in the first video below when [Destin] feels heat while a dragon in the VR world breathes fire on his hand. Fortunately, one of the engineers mentions that our sense of temperature is one of the slower ones, so it can tolerate even longer latencies than touch. We can see implementing this in a hack using a bladder pressing against the skin while tubes circulate different temperature fluids through it. But maybe there’s a way to do it electrically, possibly with thermoelectric modules as is done with this drinks cooler? Though safety issues might prohibit that.
Other features mentioned are force feedback for each finger, and their custom motion tracking which uses both magnetic and optical means to track fingertips. But we’ll leave the rest to the videos below. The first is the technical tour and the second is the glove being used in the VR world.
We love a bit of reverse engineering here at Hackaday, figuring out how a device works from the way it communicates with the world. This project from [Jim Yang] is a great example: he reverse-engineered the Samsung Gear VR controller that accompanies the Gear VR add-on for their phones. By digging into the APK that links the device to the phone, he worked out the details of the Bluetooth connection the app uses. Specifically, he found the commands that prompt the controller to send data, read that data to determine the state of the device, and then built his own web app on top of it.
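Once you know which characteristic streams the state, most of the remaining work is unpacking the packet. Here’s a sketch of that decoding half; the byte layout below is a hypothetical stand-in for illustration, not the Gear VR controller’s real format.

```python
# Decoding a (hypothetical) BLE notification packet from a controller.
# Assumed little-endian layout: uint32 timestamp, three int16 accel axes,
# two uint16 touchpad coordinates, one button-bitmask byte. The real
# Gear VR format differs; this just shows the struct-unpacking pattern.

import struct

PACKET_FMT = "<I3h2HB"

def parse_state(packet: bytes):
    """Unpack one notification into a readable controller state."""
    ts, ax, ay, az, tx, ty, buttons = struct.unpack(PACKET_FMT, packet)
    return {
        "timestamp": ts,
        "accel": (ax, ay, az),
        "touchpad": (tx, ty),
        "trigger_pressed": bool(buttons & 0x01),
    }

# Fabricate a packet the same way the device would, then round-trip it.
sample = struct.pack(PACKET_FMT, 1000, 10, -20, 980, 157, 300, 0x01)
state = parse_state(sample)
```

The nice part of this approach is that the parser is testable entirely offline: capture a few real notifications with a BLE sniffer, then iterate on the format string until the numbers behave sensibly as you move the controller.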
Like a lot of 16-year-olds, [Maxime Coutté] wanted an Oculus Rift. Unlike a lot of 16-year-olds, [Maxime] and friends [Gabriel] and [Jonas] built one themselves for about a hundred bucks and posted it on GitHub. We’ll admit that at 16 we weren’t throwing around words like quaternions and antiderivatives, so we were duly impressed.
Before you assume this is just a box to put a phone in like a Google Cardboard, take a look at the bill of materials: an Arduino Due, a 2K LCD screen, a Fresnel lens, and an accelerometer/gyro. The team notes that the screen is the unpredictable part of the price; they got theirs for about a hundred euros. Add up all the parts at the current exchange rate and the build comes in a little over $100, and still under $150 assuming you have a 3D printer to print the mechanical parts.
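The trickiest part of a build like this isn’t the parts list, it’s fusing the accelerometer and gyro into stable head orientation. A minimal complementary filter gives the flavor (shown in Euler angles for brevity; the team’s code works in quaternions, and the blend constant here is an assumed typical value):

```python
# Minimal complementary filter: trust the gyro's integrated rates over
# short timescales and the accelerometer's gravity vector over long ones.
# ALPHA = 0.98 is a common illustrative choice, not the project's value.

import math

ALPHA = 0.98

def complementary_filter(pitch, roll, gyro_p, gyro_r, ax, ay, az, dt):
    """One fusion step. Angles in degrees, gyro rates in deg/s,
    accelerometer in g, dt in seconds."""
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    accel_roll = math.degrees(math.atan2(ay, az))
    pitch = ALPHA * (pitch + gyro_p * dt) + (1 - ALPHA) * accel_pitch
    roll = ALPHA * (roll + gyro_r * dt) + (1 - ALPHA) * accel_roll
    return pitch, roll

# A level, motionless headset (gravity straight down the z axis) should
# hold zero pitch and roll through repeated updates.
p, r = 0.0, 0.0
for _ in range(100):
    p, r = complementary_filter(p, r, 0.0, 0.0, 0.0, 0.0, 1.0, 0.01)
```

The same structure extends to quaternions to avoid gimbal lock, which is presumably where those 16-year-olds earned their vocabulary.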
A simple way to integrate physical feedback into a virtual experience is to use a fan to blow air at the user. This idea has been done before, and the fans are usually the easy part. [Paige Pruitt] and [Sean Spielberg] put a twist on things in their (now-canceled) Kickstarter campaign called ZephVR, which featured two small fans mounted onto a VR headset. The bulk of their work was in the software, which watches the audio signal for recognizable “wind” sounds, and uses those to turn on one or both fans in response.
The benefit of using software to trigger fans based on audio cues is that the whole system works independently of everything else, with no need for game developers to build support for your hardware into their software, or for you to rely on other middleware. Unfortunately, the downside is that the results are only as good as the software’s ability to pick out the right sounds and act on them. Embedded below is a short video showing a test in action.
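The simplest version of the idea is easy to sketch: watch the energy in each stereo channel and switch the matching fan when a channel gets “windy” enough. This is only a crude energy gate standing in for whatever sound recognition ZephVR actually uses, and the threshold is an assumed value.

```python
# Crude stand-in for wind detection: per-channel RMS energy over a short
# audio window, with each fan driven by its own channel. The threshold is
# an assumption; real wind classification needs more than an energy gate.

import math

WIND_RMS_THRESHOLD = 0.2   # assumed level above which we call it "wind"

def rms(samples):
    """Root-mean-square level of one window of samples (range -1..1)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def fan_states(left_samples, right_samples):
    """Return (left_fan_on, right_fan_on) for one audio window."""
    return (rms(left_samples) > WIND_RMS_THRESHOLD,
            rms(right_samples) > WIND_RMS_THRESHOLD)

left = [0.5, -0.5, 0.4, -0.4]     # loud, wind-like channel
right = [0.01, -0.01, 0.02, 0.0]  # quiet channel
state = fan_states(left, right)
```

The obvious failure mode is exactly the one noted above: an energy gate can’t tell wind from an explosion or a crowd cheering, which is why the hard part of ZephVR was the recognition software rather than the fans.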