Redirected Walking in VR done via Exploit of Eyeballs

[Anjul Patney] and [Qi Sun] demonstrated a fascinating new technique at NVIDIA’s GPU Technology Conference (GTC) for tricking a human into thinking a VR space is larger than it actually is. The way it works is this: when a person walks around in VR, they invariably make turns. During these turns, it’s possible to fool the person into thinking they have pivoted more or less than they have actually physically turned. With a way to manipulate perception of turns comes a way for software to gently manipulate a person’s perception of how large a virtual space is. Unlike other methods that rely on visual distortions, this method is undetectable by the viewer.

Saccadic movements

The software essentially exploits a quirk of how our eyes work. When a human’s eyes move around to look at different things, the eyeballs don’t physically glide smoothly from point to point. The eyes make frequent but unpredictable darting movements called saccades. There are a number of deeply interesting things about saccades, but the important one here is the fact that our eyes essentially go offline during saccadic movement. Our vision is perceived as a smooth and unbroken stream, but that’s a result of the brain stitching visual information into a cohesive whole, and filling in blanks without us being aware of it.

Part one of [Anjul] and [Qi]’s method is to manipulate perception of a virtual area relative to actual physical area by making a person’s pivots not a 1:1 match. In VR, it may appear one has turned more or less than one has in the real world, and in this way the software can guide the physical motion while making it appear in VR as though nothing is amiss. But by itself, this isn’t enough. To make the mismatches imperceptible, the system watches the eye for saccades and times its adjustments to occur only while they are underway. The brain ignores what happens during saccadic movement, stitches together the rest, and there you have it: a method to gently steer a person so that a virtual space can be made to feel larger than the physical area available.
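As a rough illustration (not the authors’ actual implementation), the core idea of a saccade-gated rotation gain can be sketched in a few lines of Python. The velocity threshold and gain value here are illustrative assumptions, not numbers from the paper:

```python
# Sketch of saccade-gated rotation gain for redirected walking.
# Threshold and gain values below are illustrative assumptions only.

SACCADE_VELOCITY_THRESHOLD = 180.0  # deg/s; saccades are far faster than smooth pursuit
ROTATION_GAIN = 0.125               # extra virtual rotation per degree of real rotation

def virtual_rotation_delta(real_delta_deg: float,
                           eye_velocity_deg_s: float,
                           steer_sign: int) -> float:
    """Return the virtual camera rotation for this frame.

    Outside a saccade the virtual view tracks the head 1:1, so nothing
    looks amiss. During a saccade (eye velocity above threshold, while
    vision is suppressed) a small gain is added or subtracted, nudging
    the user's subsequent physical turn in the direction of steer_sign.
    """
    if abs(eye_velocity_deg_s) >= SACCADE_VELOCITY_THRESHOLD:
        return real_delta_deg * (1.0 + steer_sign * ROTATION_GAIN)
    return real_delta_deg  # 1:1 tracking when the eyes are not mid-saccade
```

Over many turns these small, invisible offsets accumulate, which is what lets the physical path curve while the virtual path appears to stay true.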

Embedded below is a video demonstration and overview, which mentions other methods of manipulating perception of space in VR and how this technique avoids the pitfalls of those methods.

RoadToVR wrote up the demonstration at GTC in an article, but [Anjul] and [Qi] will be publishing all of the details as a SIGGRAPH paper this summer.

Gently fooling the senses can happen in other, less subtle ways. For example, by using Galvanic Vestibular Stimulation which — if one isn’t picky about terminology — could pass for a form of mind control.

54 thoughts on “Redirected Walking in VR done via Exploit of Eyeballs”

  1. I have a condition called nystagmus which means that I’m basically unable to hold my eyes still. I wonder how well this would deal with it?

    It has an interesting relation to some of what’s covered in this article; although my eyes are constantly moving, the brain still stitches it all together into a coherent, stable image. It has some very strange effects, though. If you draw a series of vertical lines (like the number 111111111111), I can look at it and it looks perfectly clear. But ask me to count how many lines there are and I’m completely unable to because I can’t keep track of which one I’m up to. Even pointing a sharp object at each one, this is extremely difficult because I can’t easily use visual feedback to move the pointer from one line to the next. But turn the page through 90 degrees so that the lines are horizontal and stacked vertically, and suddenly it becomes very easy, because now my eye movement is perpendicular to the axis I’m trying to count on.

    1. That’s fascinating. I haven’t bothered looking it up, but I used to have the exact opposite problem. I was able to relax enough to completely stop the saccadic movement – the result being that within a couple of seconds my vision would start fading from the outside in (tunnel vision?). With a bit of practice, or when I was really tired, I was able to relax until my vision was completely gone – just a greyish blur, not unlike static on an old analog TV. The slightest twitch, or any change in what I was looking at, and the image would snap back like a light being turned on.

      I threw a prof a curve in one University class. We were to look at a grid – I don’t remember why, some kind of eye test/experiment – I think we were supposed to try and determine if the lines were straight or curved. I was tired from a long week and was able to relax and make all the lines disappear. So I asked “what happens if the lines disappear?” – I think he thought I was drugged up or something, as he wasn’t able to provide a good answer.

      1. When I was little I sometimes fell asleep doing that. If you keep your eyes still, the data stream stops and you can fall asleep with your eyes open, so I would look at the ceiling without moving my eyes while in bed and fall asleep that way.
        The problem though was that I at that time also sleepwalked and I think there might be a relationship between the two.

      2. Our eyes get their huge dynamic range by what’s essentially an AGC function. The problem is that this gives any single sensing cell a high-pass temporal response. Moving your eyes around works around this, as it keeps the image on the retina moving at a rate above this high-pass cutoff. If you stop moving your eyes long enough, the AGC will settle and you’ll see nothing (neither brighter nor darker than the long-term baseline). Kinda hacky, but how else are you going to make a sensor with more than 180 dB of dynamic range?

        1. “Kinda hacky, but how else are you going to make a sensor with more than 180 dB of dynamic range?”

          You know better than asking THIS group. Maybe ours will use unobtainium?
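The AGC-style high-pass behavior described above can be simulated minimally: model the adapting baseline as an exponential moving average and subtract it from the input. A static image settles to zero response while a changing one keeps producing signal. (The smoothing factor here is an arbitrary assumption, not a physiological value.)

```python
def retina_response(samples, alpha=0.2):
    """Toy high-pass model of a single 'sensing cell': output is the
    input minus a slowly adapting baseline (exponential moving average).
    With a constant input the baseline catches up and the output settles
    to zero; a changing input keeps the output nonzero."""
    baseline = samples[0]
    out = []
    for s in samples:
        out.append(s - baseline)
        baseline += alpha * (s - baseline)  # baseline slowly tracks the input
    return out
```

Feed in a constant brightness and the response fades to nothing, just as a perfectly stabilized image fades from view; a flickering input keeps producing output.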

    1. yes, Plato would say “featherless biped”

      Plato set out to define “human being” and announced the answer: “featherless biped.” When Diogenes of Sinope heard the news he came to Plato’s school, known as the Academy, with a plucked chicken, saying, “Here’s the Platonic human!” Naturally, the Academy had to fix its definition, so it added the phrase “with flat nails.”

  2. I haven’t watched the video or read more than this article, but are they keeping an eye (ha!) on if and how this affects movement in the physical world?
    I’d think the perception of orientation information, from parts of our inner ears, would adapt to the manipulation in VR (given enough time) and hinder our movement back in the physical world without the VR manipulation (resulting in maybe drunken movements? – could be kinda funny to watch and experience).

      1. That doesn’t seem like a bad idea for some. My understanding is if the brain is perceiving mixed signals from the eyes and inner ears it assumes food poisoning and cues up a gastro-intestinal flush. Being able to condition oneself to not suffer such intense vertigo would be nice. (I can’t even look at a spinning amusement ride without getting a nauseating headache).

        1. It’s highly context-sensitive.

          People get car sickness, unless they’re driving themselves. Then the same discrepancy between eye and ear gets ignored because the brain anticipates it.

      2. I learned from listening to Cmdr Chris Hadfield talk that this happens to astronauts who spend time on the ISS. The body learns to rely on visual information for balance, because in microgravity the inner ear doesn’t provide useful data. Once you return you feel TERRIBLE as your body does the “throw up, go lay down” response to all the crazy vertigo. (It goes away as your body re-adjusts to normal gravity, happily.)

      3. The inner ear isn’t actually all that accurate. It’s mostly there for orientation during fast movements. These small-scale manipulations probably wouldn’t be noticeable in the inner ear. It’s like walking blindfolded: you never go in a straight line, even though you FEEL like you are.

          1. There are more sensory systems involved in preventing a fall when you stand with your eyes closed. Think pressure sensors under your foot, muscle tension sensors, joint positions etc.

        1. Not to derail the conversation – but check out Keepons: discreet clear silicone(?) hooks that extend your glasses arms around your ear lobes. Sadly they are a consumable, but they definitely work!

    1. I don’t believe it should have any detrimental effect. The inner ear is good at detecting changes in rotation, but it’s not incredibly accurate and integrates error over time. The brain already relies heavily on visual input (or other tactile stimulus) in order to correct the signal from the inner ear. This technique induces that mechanism artificially, but the inputs aren’t any different or stronger than what you get when you’re walking around in reality.

    1. Hey Crazy! :)

      Joke aside, I think you’re wrong. How is this feedback different from current gaming systems? They also provide various kinds of virtual feedback (both optical and audio), and they have been repeatedly shown to improve short- and medium-term coordination, with repeated use helping long-term as well (up to 1 year was tested, if I remember correctly from that TED talk I watched).

      The brain is more than capable (in healthy humans) of differentiating between the two feedback loops and switching to the one that best suits its present “needs” – be it visual cues from a monitor and auditory from a headset, or a VR device.

      1. Edit button needed:

        Although I must admit the feedback loop of the saccades could be affected by prolonged use of this method? I mean, the saccades are still being driven by muscles in response to certain (not necessarily visual) feedback. Obtaining unexpected visual results could hurt that feedback loop in the long run, maybe? Especially if done wrong.

    2. There are behaviors of people in VR where they anticipate real-world physics. A simple example is a VR escalator. When one gets to the top and steps off, one will rock a bit, like you do when you step off a real escalator.

      I found it interesting they could guide the eye by brief flashes.

  3. This tech has been a long time coming. I remember reading an abstract in Science News in the late 90s about an experiment where people were using the previous generation of VR to influence direction of movement by controlling so-called “visual flow” – essentially moving the ground texture differently than expected.

  4. I read about this idea quite some time ago – I cannot remember exactly when or where, but I am sure it was around Y2K. Back then I had a discussion with someone experimenting with it (on, I admit, pre-modern “3d glasses”).
    The problem was that it only worked if you kept the screen quite small. I notice movement within a field of almost 170 degrees, and I can distinguish shapes (not colors) within 160 degrees without moving my eyeballs – if the screen were wide enough to give me anything like “almost real vision” (it would have to cover 180 degrees of my eyesight, which is unlikely to be available soon), any distortion/displacement of “markers” that my brain picked up that didn’t match up in motion (speed) would immediately make me sick.

    My guess is that this tech is “well fitted” for modern “3d goggles” – narrow field of view and auto-blurring at the edges to keep data transfer manageable.

    1. That’s the point of this approach – there is no movement or distortion that you can see. The system shifts the entire viewpoint slightly but only when you’re in the middle of a saccade – so your eyes are switched off. When the saccade ends your visual system automatically compensates for any minor differences in the expected and actual viewpoint.

      The flashing markers are there just to trick you into changing your gaze regularly; they also shouldn’t be consciously detectable in normal usage as they’re subtle and removed before the triggered saccade completes.

  5. I have frequently seen assertions that our eyes “shut down” or “go offline” or that we “go blind” during saccades. However, in my experience, this is not true, and I can reliably get visual information during a saccade. One example is while viewing a video projection, best observed if it’s displaying a white bar against a dark background. Viewed normally, it’s simply white. However, if I saccade my eyes past it, perpendicular to the long axis of the bar, I can see it separate into RGB fringes, suggesting that the colors are projected in sequence rather than simultaneously. Another common example is viewing an object rotating fast enough to be a blur, such as the wheel of a car moving next to me on the road, or a fidget spinner in my hands. During a saccade past such an object, I sometimes catch a flash of detail as my point of view sweeps past and momentarily matches linear velocity with a sector of the wheel. A final example also comes from the road: LED tail lights. I know that many of them are PWM dimmed from “brake light” brightness, because during a saccade, I can see a dashed line of red.

    So, I guess my question is, am I abnormal in this regard, or is the conventional wisdom wrong?

    1. – You’re not the only one; I can’t stand following PWM brakelights at night, especially those Cadillacs seem to use (particularly bright and low PWM rate, it seems). I have the same issue; saccade/eye movement causes dotted/dashed trails from the LEDs. When I switched to (not even that cheap) LED Christmas lights, they drove me so nuts I put a bridge rectifier inline with them to cut down the flashing/strobe due to this effect (even the Philips brand were only half-wave, due to just putting LEDs inline with the AC). Looking right at them was fine, but a saccade past (or anywhere with the lights in my peripheral vision) turned it into a strobe effect. Rear-projection color-wheel DLP TVs that were popular for a while also produced a shimmering effect that didn’t seem to bother others, even when looking right at them, which made me wonder if maybe there are variances in non-saccade visual refreshes also…
      – Not sure if it’s a genetic thing or what; maybe some have faster or slower ‘refresh rates’ than others, and some have varying (if any) refresh rates during a saccade…

    2. No, it’s because your eyes are an analogue system and not perfect. High contrast edges and high-frequency changes can generate a stronger signal in the retina that persists beyond the saccade inhibition. So you can detect the limitations of the system if you actively go looking for them, but it’s still operating, and 99.9999% of the time you don’t notice it at all.
