Virtual Reality (VR) and actual reality often don’t mix: watch someone play a VR game without a view of what they’re seeing, and all you’ll see is a lot of pointless-looking flailing around. [Nerdaxic] may have found a balance that works in this flight sim setup that mixes VR and AR, though. He did it by combining the virtual cockpit controls of his flight simulator with real buttons, knobs, and dials. He uses an HTC Vive headset and a beefy PC to create the virtual side, which is mirrored with a real-world version: the virtual yoke is matched with a real one, and the same is true of all the other controls, thanks to a home-made control panel that features all of the physical controls of a Cessna 172 Skyhawk.
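For the curious, here is a rough sketch of what the host side of such a panel can look like: a microcontroller streams knob and switch positions over USB serial, and a small script normalizes them for the simulator. To be clear, the message format and the send_axis() hook below are our own illustrative stand-ins, not [Nerdaxic]’s actual code.

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # adjust for your system
AXIS_RANGE = 1023       # assumes a 10-bit ADC on the panel's microcontroller

def send_axis(name: str, value: float) -> None:
    """Placeholder for whatever joystick/sim interface you use (vJoy, SimConnect, etc.)."""
    print(f"{name} -> {value:.3f}")

# The panel is assumed to stream lines like "THROTTLE:512" or "MIXTURE:1023".
with serial.Serial(PORT, 115200, timeout=1) as panel:
    while True:
        line = panel.readline().decode("ascii", errors="ignore").strip()
        if ":" not in line:
            continue  # skip malformed or partial lines
        name, raw = line.split(":", 1)
        try:
            send_axis(name, int(raw) / AXIS_RANGE)  # normalize to 0.0-1.0
        except ValueError:
            pass  # non-numeric payload; ignore
```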
[Nerdaxic] has released the plans for the project, including his 3D printable knobs for throttle and fuel/air mixture and the design for the wooden panel and assembly that holds all of the controls in the same place as they are in the real thing. He even put a fan in the system to produce a gentle breeze to enhance the feel of sticking your head out of the window — just don’t try that on a real aircraft.
Someday Elon Musk might manage to pack enough of us lowly serfs into one of his super rockets that we can actually afford a ticket to space, but until then our options for experiencing weightlessness are pretty limited. Even if you’ll settle for a ride on one of the so-called “Vomit Comet” reduced-gravity planes, you’ll have to surrender a decent chunk of change, and as the name implies, potentially your lunch as well. Is there no recourse for the hacker who wants to get a taste of the astronaut experience without a NASA-sized budget?
To construct his underwater VR headset, [spiritplumber] uses a number of off-the-shelf products. The “Cardboard” headset itself is the common plastic style that you can probably find in the clearance section of whatever Big Box retailer is convenient for you, and the waterproof bag that holds the phone can be obtained cheaply online. You’ll also need a pair of swimmer’s goggles to keep water from rudely interrupting your wide-eyed wonderment. As for the custom printed parts, a frame keeps the waterproof bag from pressing against the screen while submerged, and a large spacer is required to get the phone at the appropriate distance from the operator’s eyes.
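As a back-of-the-envelope aside (ours, not from [spiritplumber]’s write-up) on why that spacer ends up so long: light crossing from the water into the air behind the goggles refracts, making the screen appear closer than it physically is. Treating the whole phone-to-eye path as water gives a rough estimate:

```python
# Small-angle apparent-depth approximation: an object viewed through water
# from air appears at (real distance) / n_water. So hitting the apparent
# distance a Cardboard-style viewer is designed for needs a longer real spacer.
N_WATER = 1.33

def spacer_length_mm(in_air_spacing_mm: float) -> float:
    """Real screen-to-eye distance needed underwater so the screen
    *appears* at the viewer's designed in-air spacing."""
    return in_air_spacing_mm * N_WATER

print(spacer_length_mm(40.0))  # a 40 mm in-air design needs ~53 mm underwater
```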
To put his creation to the test, [spiritplumber] loads up a VR rendition of NASA’s Neutral Buoyancy Laboratory, where astronauts experience a near-weightless environment underwater. All that’s left to complete the experience is a DIY scuba regulator so you can stay submerged. Though at that point, we wouldn’t be surprised if a passerby mistook your DIY space simulator for an elaborate suicide attempt.
It turns out that there were a few challenges to work around and a few new problems to solve, not least of which was mapping VR controllers to control an N64 game in a sensible way. One thing that wasn’t avoidable is that the N64’s rendered world may now pop in 3D, but it still springs forth from a rectangular stage. The N64, after all, is still only rendering its world in a TV-screen-sized portion; anything outside that rectangular window doesn’t really exist, and there’s no way around that as long as an emulated N64 is running the show. Still, the result is impressive, and a video demo is embedded below where you can see the effect for yourself.
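To get a feel for the controller-mapping half of the problem, here is a hypothetical sketch of condensing a modern VR controller down to the N64’s single stick and buttons. The mapping table and the StubPad hooks are illustrative guesses, not the project’s actual plugin code.

```python
BUTTON_MAP = {            # VR control    -> N64 control (illustrative choices)
    "trigger": "A",       # index trigger -> A button
    "grip":    "B",       # grip squeeze  -> B button
    "menu":    "START",
}

class StubPad:
    """Stand-in for the emulator input plugin's controller object."""
    def set_stick(self, x: float, y: float) -> None:
        print(f"stick {x:+.2f}, {y:+.2f}")
    def set_button(self, name: str, down: bool) -> None:
        print(f"{name} {'down' if down else 'up'}")

def map_frame(vr_state: dict, pad: StubPad) -> None:
    # The thumbstick passes straight through as the N64 analog stick...
    x, y = vr_state["thumbstick"]
    pad.set_stick(x, y)
    # ...while digital buttons translate via the table above.
    for vr_button, n64_button in BUTTON_MAP.items():
        pad.set_button(n64_button, vr_state["buttons"].get(vr_button, False))

map_frame({"thumbstick": (0.0, 1.0), "buttons": {"trigger": True}}, StubPad())
```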
[Anjul Patney] and [Qi Sun] demonstrated a fascinating new technique at NVIDIA’s GPU Technology Conference (GTC) for tricking a human into thinking a VR space is larger than it actually is. The way it works is this: when a person walks around in VR, they invariably make turns. During these turns, it’s possible to fool the person into thinking they have pivoted more or less than they have actually physically turned. Manipulate the perception of those turns, and software gains a way to gently reshape a person’s sense of how large a virtual space is. Unlike other methods that rely on visual distortions, this one is undetectable by the viewer.
The software essentially exploits a quirk of how our eyes work. When a human’s eyes move around to look at different things, the eyeballs don’t physically glide smoothly from point to point. The eyes make frequent but unpredictable darting movements called saccades. There are a number of deeply interesting things about saccades, but the important one here is the fact that our eyes essentially go offline during saccadic movement. Our vision is perceived as a smooth and unbroken stream, but that’s a result of the brain stitching visual information into a cohesive whole, and filling in blanks without us being aware of it.
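For a concrete picture of the detector side, eye trackers commonly flag saccades with a simple angular-velocity threshold, something like the sketch below. The 180°/s figure is our placeholder, not the researchers’ actual tuning.

```python
import math

THRESHOLD_DEG_PER_S = 180.0  # placeholder; real detectors tune this carefully

def is_saccade(gaze_prev, gaze_now, dt: float) -> bool:
    """gaze_* are (yaw, pitch) gaze angles in degrees; dt is the time in
    seconds between samples. Ignores yaw wrap-around for brevity."""
    dyaw = gaze_now[0] - gaze_prev[0]
    dpitch = gaze_now[1] - gaze_prev[1]
    speed_deg_per_s = math.hypot(dyaw, dpitch) / dt
    return speed_deg_per_s > THRESHOLD_DEG_PER_S

print(is_saccade((0.0, 0.0), (2.5, 0.5), 1 / 120))  # ~306 deg/s -> True
```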
Part one of [Anjul] and [Qi]’s method is to manipulate the perception of a virtual area relative to the actual physical area by decoupling a person’s pivots from their virtual counterparts, so the two are no longer a 1:1 match. In VR, it may appear one has turned more or less than one actually has in the real world, and in this way the software can guide the physical motion while making it appear in VR as though nothing is amiss. But by itself, this isn’t enough. To make the mismatches imperceptible, the system watches the eye for saccades and times its adjustments to occur only while they are underway. The brain ignores what happens during saccadic movement, stitches together the rest, and there you have it: a method to gently steer a human being so that a virtual space feels larger than the physical area available.
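Combined with a detector like the one above, the redirection step could look something like this minimal sketch, where the per-saccade rotation budget is an assumed figure rather than a value from the paper:

```python
MAX_STEP_PER_SACCADE_DEG = 0.5  # assumed imperceptibility budget per saccade

def redirect(world_yaw_deg: float, pending_offset_deg: float,
             saccade_active: bool) -> tuple:
    """Inject a sliver of the desired extra world rotation, but only
    mid-saccade. Returns (new world yaw, offset still left to inject)."""
    if not saccade_active or pending_offset_deg == 0.0:
        return world_yaw_deg, pending_offset_deg
    step = max(-MAX_STEP_PER_SACCADE_DEG,
               min(MAX_STEP_PER_SACCADE_DEG, pending_offset_deg))
    return world_yaw_deg + step, pending_offset_deg - step

# Steering the user toward 10 degrees of extra rotation, one saccade at a time:
yaw, pending = 0.0, 10.0
yaw, pending = redirect(yaw, pending, saccade_active=True)
print(yaw, pending)  # 0.5 9.5
```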
Embedded below is a video demonstration and overview, which covers other techniques for manipulating the perception of space in VR and explains how this approach avoids their pitfalls.
The browser you are reading this page in is an exceptionally powerful piece of software, with features and APIs undreamed of by the developers of its early-1990s ancestors such as NCSA Mosaic. For all that though, it is still very probably a visual descendant of those early browsers: a window for displaying two-dimensional web pages.
Some of this may be about to change, as in recognition of the place virtual reality devices are making for themselves, Mozilla have released Firefox Reality, in their words “a new web browser designed from the ground up for stand-alone virtual and augmented reality headsets”. For now it will run on Daydream and GearVR devices as a developer preview, but the intended target for the software is a future generation of hardware that has yet to be released.
Readers with long memories may remember some of the hype surrounding VR in browsers back in the 1990s, when crystal-ball-gazers who’d read about VRML would hail it as the Next Big Thing without pausing to think about whether the devices to back it up were on the market. It could be that this time the hardware will match the expectation, and maybe one day you’ll be walking around the Hackaday WrencherSpace rather than reading this in a browser. See you there!
They’ve released a video preview that disappointingly consists of a 2D browser window in a VR environment. But it’s a start.
Light Field technology is a fascinating area of Virtual Reality research that emulates the way light behaves in order to make a virtual scene look more realistic. By reproducing light entering the eye from multiple angles, as it would from a physical object, a scene can be made to look far closer to reality. It is rumored to be part of the technology included in the forthcoming Magic Leap headset, but it looks like Google is trying to steal some of their thunder. The VR research arm of the search giant has released a VR app called Welcome to Light Fields that uses a similar technique on existing VR headsets, such as those from Oculus and Microsoft.
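To make the idea concrete, here is a toy sketch of the classic two-plane light field lookup, in which each ray is indexed by where it crosses a camera plane (u, v) and an image plane (s, t). Google’s app interpolates between real captured photographs; this nearest-neighbor version only demonstrates the indexing idea.

```python
import numpy as np

# Fake capture: an 8x8 grid of camera positions, each a 64x64 RGB image.
CAMS, RES = 8, 64
light_field = np.zeros((CAMS, CAMS, RES, RES, 3), dtype=np.uint8)

def sample_ray(u: float, v: float, s: float, t: float) -> np.ndarray:
    """All of u, v, s, t in [0, 1): return the stored radiance nearest the
    ray crossing the camera plane at (u, v) and the image plane at (s, t)."""
    ui, vi = min(int(u * CAMS), CAMS - 1), min(int(v * CAMS), CAMS - 1)
    si, ti = min(int(s * RES), RES - 1), min(int(t * RES), RES - 1)
    return light_field[ui, vi, si, ti]

print(sample_ray(0.5, 0.5, 0.25, 0.75))  # [0 0 0] for this empty capture
```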
A simple way to integrate physical feedback into a virtual experience is to use a fan to blow air at the user. This idea has been done before, and the fans are usually the easy part. [Paige Pruitt] and [Sean Spielberg] put a twist on things in their (now-canceled) Kickstarter campaign called ZephVR, which featured two small fans mounted onto a VR headset. The bulk of their work was in the software, which watches the audio signal for recognizable “wind” sounds, and uses those to turn on one or both fans in response.
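As a hedged guess at what that audio-watching logic could involve: flag audio frames whose spectrum is broadband and weighted toward low frequencies, then switch the fans to match. The thresholds and the fan interface below are our own assumptions, not ZephVR’s actual classifier.

```python
import numpy as np

RATE = 44100   # sample rate, Hz
FRAME = 2048   # samples per analysis frame

def looks_like_wind(frame: np.ndarray) -> bool:
    """frame: mono float samples in [-1, 1]. Guessed heuristic: loud enough,
    with most spectral energy below 800 Hz."""
    if np.sqrt(np.mean(frame ** 2)) < 0.01:      # too quiet to be wind
        return False
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / RATE)
    low_fraction = spectrum[freqs < 800.0].sum() / (spectrum.sum() + 1e-9)
    return low_fraction > 0.6

class StubFans:
    """Stand-in for the fan controller; real hardware would go here."""
    def set(self, side: str, on: bool) -> None:
        print(f"{side} fan {'on' if on else 'off'}")

def update_fans(left: np.ndarray, right: np.ndarray, fans: StubFans) -> None:
    # Per-channel detection lets the two fans respond independently.
    fans.set("left", looks_like_wind(left))
    fans.set("right", looks_like_wind(right))

update_fans(np.zeros(FRAME), np.zeros(FRAME), StubFans())  # both fans off
```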
The benefit of using software to trigger the fans from audio cues is that the whole system works independently of everything else: there’s no need for game developers to build in support for your hardware, and no middleware to install. Unfortunately, the downside is that the results are only as good as the software’s ability to pick out the right sounds and act on them. Embedded below is a short video showing a test in action.