Everything You Probably Didn’t Know About FOV In HMDs

VR headsets have been seeing new life for a few years now, and when it comes to head-mounted displays, field of view (FOV) is one of the specs everyone’s keen to know. Valve Software has published a highly technical yet accessibly presented document that explains why FOV is a surprisingly complicated subject when it pertains to head-mounted displays. FOV is relatively simple for something like a camera, but it becomes much harder to define or measure once lenses are used to put images right up next to eyeballs.

Simulation of how FOV can be affected by eye relief [Source: Valve Software]
The document goes into some useful detail about head-mounted displays in general, the design trade-offs, and naturally talks about the brand-new Valve Index VR headset in particular. The Index uses proprietary lenses combined with a slight outward cant to each eye’s display, and they explain precisely what benefits are gained from each design point. Eye relief (distance from eye to lens), lens shape and mounting (limiting how close the eye can physically get), and adjustability (because faces and eyes come in different configurations) all have a role to play. It’s a situation where every millimeter matters.
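To get a feel for why, it helps to remember that for a bare aperture, the angle you can see grows as your eye gets closer to it. The snippet below is a deliberately naive model of our own making: it treats the lens as a flat circular window and ignores all of the refraction that makes real HMD numbers so slippery, and the lens radius is just an assumed figure, not an Index spec.

// A deliberately naive FOV model: treat the lens as a flat circular window
// of radius r at eye relief d, and ignore refraction entirely. Real HMD
// optics bend rays, which is exactly why Valve says one number isn't enough.
#include <cmath>
#include <cstdio>

double fovDegrees(double apertureRadiusMm, double eyeReliefMm) {
    const double pi = 3.14159265358979;
    return 2.0 * std::atan(apertureRadiusMm / eyeReliefMm) * 180.0 / pi;
}

int main() {
    const double lensRadiusMm = 25.0;  // assumed aperture, not an Index spec
    for (double relief = 8.0; relief <= 20.0; relief += 2.0) {
        std::printf("eye relief %4.1f mm -> roughly %5.1f degrees\n",
                    relief, fovDegrees(lensRadiusMm, relief));
    }
    return 0;
}

Even this toy version shows a handful of millimeters of eye relief swinging the angle by tens of degrees, which is the “every millimeter matters” point in a nutshell.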

If there’s one main point Valve is trying to make with this document, it’s summed up as “it’s really hard to use a single number to effectively describe the field of view of an HMD.” They plan to publish additional information on the topics of modding as well as optics, so keep an eye on their Valve Index Deep Dive publication list.

Valve’s VR efforts remain interesting from a hacking perspective, and as an organization they seem mindful of the keen interest people have in modifying and extending their products. The Vive Tracker was self-contained and had an accessible hardware pinout for the express purpose of making hacking easier. We also took a look at Valve’s AR and VR prototypes, which give some insight into how and why they chose the directions they did.

Open Source Headset With Inside-Out Tracking, Video Passthrough

The folks behind the Atmos Extended Reality (XR) headset want to provide improved accessibility with an open ecosystem, and they aim to do it with a WebVR-capable headset design that is self-contained, 3D-printable, and open source. Their immediate goal is to release a development kit, then refine the design for a wider release.

An early prototype of the open source Atmos Extended Reality headset.

The front of the headset has a camera-based tracking board to provide all the modern goodies like inside-out head and hand tracking as well as the ability to pass through video. The design also provides for a variety of interface methods such as eye tracking and 6 DoF controllers.

With all that, the headset gives users maximum flexibility to experiment with and create different applications while working to keep development simple. A short video showing off the modular design of the HMD and optical assembly is embedded below.

Extended Reality (XR) has emerged as a catch-all term to cover broad combinations of real and virtual elements. On one end of the spectrum are completely virtual elements such as in virtual reality (VR), and towards the other end of the spectrum are things like augmented reality (AR) in which virtual elements are integrated with real ones in varying ratios. With the ability to sense the real world and pass through video from the cameras, developers can choose to integrate as much or as little as they wish.
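In the simplest possible terms, the passthrough end of that dial is just compositing: blend the camera frame with the rendered frame and pick how much of each you want. The sketch below is purely our own minimal illustration, not anything from the Atmos codebase, and it assumes same-sized 8-bit RGB buffers.

#include <cstdint>
#include <cstddef>
#include <vector>

// Minimal passthrough compositing sketch (our illustration, not Atmos code):
// alpha = 0.0 shows only the camera feed, alpha = 1.0 shows only the
// rendered virtual scene, and anything in between mixes the two.
// A real XR compositor would blend per pixel using depth or masks.
std::vector<uint8_t> composite(const std::vector<uint8_t>& cameraFrame,
                               const std::vector<uint8_t>& virtualFrame,
                               float alpha) {
    std::vector<uint8_t> out(cameraFrame.size());
    for (std::size_t i = 0; i < cameraFrame.size(); ++i) {
        out[i] = static_cast<uint8_t>(
            cameraFrame[i] * (1.0f - alpha) + virtualFrame[i] * alpha);
    }
    return out;
}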

Terms like XR are a sign that the whole scene is still changing rapidly, and it’s fascinating to see how development in this area remains within reach of small developers and individual hackers. The Atmos DK 1 developer kit is slated for release sometime in July, so anyone interested in getting in on the ground floor should read up on how to get involved with the project, which currently points people to their Twitter account (@atmosxr) and invites developers to their Discord server. You can also follow along on their newly published Hackaday.io page.


Virtual Reality For Alzheimer’s Detection

You may think of Alzheimer’s as a disease of the elderly, but the truth is people who suffer from it have had it for years — sometimes decades — before they notice. Early detection can help doctors minimize the impact the condition has on your brain, so there’s starting to be an emphasis on testing middle-aged adults for the earliest signs of the illness. It turns out that one of the first noticeable symptoms is a decline in your ability to navigate. [Dennis Chan] at Cambridge Biomedical Research Centre and his team are now using virtual reality to determine how well people can navigate as a way to assess Alzheimer’s earlier than is possible with other techniques.

Current tests mostly measure your ability to remember things, but by the time memory loss is noticeable, the disease has often already progressed. The new test has the subject walk to different cones and remember their locations, and it has already proven more effective than the standard test.
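The study itself uses a carefully validated path-integration task, but the scoring idea is easy to picture: how far off is the spot the subject walks back to from where the cone actually was? The snippet below is purely our own toy illustration of that kind of metric, not anything from [Dennis Chan]’s team.

#include <cmath>
#include <cstdio>

// Toy navigation-error metric (our illustration, not the study's code):
// the score is simply the distance between where a cone really was and
// where the subject indicates it was from memory.
struct Point { double x; double y; };

double navigationErrorMeters(Point actual, Point recalled) {
    const double dx = recalled.x - actual.x;
    const double dy = recalled.y - actual.y;
    return std::sqrt(dx * dx + dy * dy);
}

int main() {
    Point cone     = {2.0, 5.0};  // true cone position, meters
    Point recalled = {2.6, 4.1};  // where the subject walked back to
    std::printf("error: %.2f m\n", navigationErrorMeters(cone, recalled));
    return 0;
}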


Simple Sensor Provides Detailed Motion Capture For VR Hands

Consider the complexity of the appendages sitting at the ends of your arms. Together, the human hands contain over a quarter of the body’s entire complement of bones, are driven by dozens of muscles both in the hands themselves and extending up the forearms, and are capable of an almost infinite variety of movements. They are exquisite machines.

And yet when it comes to virtual reality, most simulations treat the hands like inert blobs. That may be partly due to their complexity; doing motion capture from so many joints can be computationally challenging. But this pressure-sensitive hand motion capture rig aims to change that. The product of an undergraduate project by [Leslie], [Hunter], and [Matthew], the idea was to provide an economical and effective way to capture gestures for virtual reality simulators, which generally focus on capturing large motions from the whole body.

The sensor consists of a sandwich of polyurethane foam with strain gauge sensors embedded within. The user slips his or her hand into the foam and rests the fingers on the sensors. A Teensy and twenty lines of code translate finger motions within the sandwich into five axes of joystick movement, which is then sent to Unreal Engine, where the finger motions drive a 3D model of a hand in a VR game of “Rock, Paper, Scissors.”
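Twenty lines is believable when the sensors land straight on analog pins and the Teensy enumerates as a USB joystick. Something along these lines would do it, assuming Teensyduino’s USB Joystick mode and placeholder pin assignments rather than the team’s actual wiring:

// Sketch of the idea only: five strain gauge channels read as analog inputs
// and reported as five USB joystick axes. Assumes Teensyduino with a USB
// type that includes "Joystick"; A0-A4 are placeholder pins, not the
// project's actual wiring.
const int sensorPins[5] = {A0, A1, A2, A3, A4};

void setup() {
  Joystick.useManualSend(true);        // batch all axes into one USB report
}

void loop() {
  int v[5];
  for (int i = 0; i < 5; i++) {
    v[i] = analogRead(sensorPins[i]);  // 0-1023 matches the joystick axis range
  }
  Joystick.X(v[0]);
  Joystick.Y(v[1]);
  Joystick.Z(v[2]);
  Joystick.Zrotate(v[3]);
  Joystick.sliderLeft(v[4]);
  Joystick.send_now();
  delay(10);                           // roughly 100 updates per second
}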

[Leslie] and her colleagues have a way to go on this; testers complained that the flat hand posture was unnatural, and that the foam heated things up quickly. Maybe something more along the lines of these gesture-capturing gloves would work?

Leap Motion’s Project North Star Gets Hardware

It’s been more than a year since we first heard about Leap Motion’s new, Open Source augmented reality headset. The first time around, we were surprised: the headset featured dual 1600×1440 LCDs, a 120 Hz refresh rate, a 100 degree FOV, and the entire thing would cost under $100 (in volume), with everything, from firmware to mechanical design, released under open licenses. Needless to say, that’s easier said than done. Now it seems Leap Motion is releasing files for various components, and a full-scale release might be coming sooner than we think.

Leap Motion first made a name for themselves with the Leap Motion sensor, a sort of mini-Kinect that only worked with hands and arms. Yes, we’re perfectly aware that sounds dumb, but the results were impressive: everything turned into a touchscreen, you could draw with your fingers, and you could control robots with your hands. Mount one of these sensors to your forehead, reflect a few phone screens onto your retinas, and you have the makings of a stereoscopic AR headset that tracks the movement of your hands. This is an over-simplified description, but conceptually, that’s what Project North Star is.

The files released now include STLs of parts that can be 3D printed on any filament printer, files for the electronics that drive the backlight and receive video from a laptop, and even software for doing actual Augmented Reality stuff in Unity. It’s not a complete project ready for prime time, but it’s a far cry from the simple spec sheet full of promises we saw in the middle of last year.

Screen Shake In VR, Minus The Throwing Up

In first-person games, an effective way to heighten immersion is to give the player a sense of impact and force by shaking the virtual camera. That’s a tried and true practice for FPS games played on a monitor, but to [Zulubo]’s knowledge, no one has implemented traditional screen shake in a VR title because it would be a sure way to trigger motion sickness. Unsatisfied with that limitation, [Zulubo] did some clever experimenting and arrived at a method of doing screen shake in VR that doesn’t cause any of the usual problems.

Screen shake doesn’t translate well to VR because the traditional method is to shake the player’s entire view. This works fine when viewed on a monitor, but in VR the brain interprets the visual cue as evidence that one’s head and eyeballs are physically shaking while the vestibular system is reporting nothing of the sort. This kind of sensory mismatch leads to motion sickness in most people.

The key to getting the essence of a screen shake without any of the motion sickness baggage turned out to be a mix of two things. First, the shake is restricted to peripheral vision only. Second, it is restricted to an “in and out” motion, with no tilting or twisting. The result conveys concussion and impact without shaking the player’s view, at least not in a way that leads to motion sickness. It’s the product of some clever experimentation to solve a problem, and it’s freely downloadable for anyone who may be interested.
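We haven’t dug through [Zulubo]’s source, but the recipe as described maps neatly onto a radial displacement that fades to nothing in the center of the view. Here’s our own rough sketch of that idea, nothing more:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Our own rough sketch of the described effect, not [Zulubo]'s code:
// displace each screen point purely along its radial direction ("in and
// out", no tilt or twist), with an amplitude that is zero in the central
// field of view and ramps up toward the periphery.
struct Offset { float x; float y; };

Offset peripheralShake(float u, float v, float timeSec,
                       float amplitude, float frequencyHz) {
    const float r = std::sqrt(u * u + v * v);      // u, v in [-1, 1], center at 0
    const float innerRadius = 0.4f;                // central region left untouched
    if (r < innerRadius) return {0.0f, 0.0f};
    const float mask = std::min(1.0f, (r - innerRadius) / (1.0f - innerRadius));
    const float pulse = amplitude * mask *
                        std::sin(2.0f * 3.1415927f * frequencyHz * timeSec);
    return {u / r * pulse, v / r * pulse};         // purely radial push
}

int main() {
    Offset o = peripheralShake(0.9f, 0.0f, 0.05f, 0.02f, 8.0f);
    std::printf("offset at the edge of view: (%.4f, %.4f)\n", o.x, o.y);
    return 0;
}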

Speaking of fooling one’s senses in VR environments, here is a fascinating method of simulating zero gravity: waterproof the VR headset and go underwater.

[via Reddit]

Three Dimensions: What Does That Really Mean?

The holy grail of display technology is to replicate what you see in the real world. This means video playback in 3D — but when it comes to displays, what is 3D anyway?

You don’t need me to tell you how far away we are from replicating real life in a video display. Despite all the hype, there are only a couple of different approaches to faking those three dimensions. Let’s take a look at what they are, and why, even though they can call it 3D, they’re not fooling us into believing we’re seeing real life… yet.
