[Anjul Patney] and [Qi Sun] demonstrated a fascinating new technique at NVIDIA’s GPU Technology Conference (GTC) for tricking a human into thinking a VR space is larger than it actually is. The way it works is this: when a person walks around in VR, they invariably make turns. During these turns, it’s possible to fool the person into thinking they have pivoted more or less than they actually have. Being able to manipulate the perception of turns gives software a way to gently reshape a person’s sense of how large a virtual space is. Unlike other methods that rely on visual distortions, this one is undetectable by the viewer.
The software essentially exploits a quirk of how our eyes work. When a human’s eyes move around to look at different things, the eyeballs don’t physically glide smoothly from point to point. The eyes make frequent but unpredictable darting movements called saccades. There are a number of deeply interesting things about saccades, but the important one here is the fact that our eyes essentially go offline during saccadic movement. Our vision is perceived as a smooth and unbroken stream, but that’s a result of the brain stitching visual information into a cohesive whole, and filling in blanks without us being aware of it.
Part one of [Anjul] and [Qi]’s method is to manipulate the perception of a virtual area relative to the actual physical area by making a person’s pivots not a 1:1 match. In VR, it may appear one has turned more or less than one has in the real world, and in this way the software can guide the physical motion while making it appear in VR as though nothing is amiss. But by itself, this isn’t enough. To make the mismatches imperceptible, the system watches the eyes for saccades and times its adjustments to occur only while they are underway. The brain ignores what happens during saccadic movement, stitches together the rest, and there you have it: a method to gently steer a person so that a virtual space can feel larger than the physical area actually available.
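To make the idea concrete, here is a minimal sketch in Python of how a saccade-gated rotation gain could be wired into a frame loop. This is not [Anjul] and [Qi]’s implementation; the velocity threshold, the per-saccade cap, and all of the names are illustrative assumptions.

```python
# Minimal sketch of saccade-gated redirection (illustrative only, not the
# authors' implementation). Assumption: the eye tracker reports gaze angular
# speed every frame, and a simple velocity threshold marks a saccade.

SACCADE_SPEED_DEG_S = 180.0      # assumed gaze-speed threshold for "in a saccade"
MAX_YAW_PER_SACCADE_DEG = 0.5    # assumed cap on extra rotation slipped in per saccade


class SaccadeRedirector:
    def __init__(self):
        self.injected_this_saccade = 0.0

    def update(self, gaze_speed_deg_s, desired_offset_deg):
        """Return the extra virtual-camera yaw (degrees) to apply this frame.

        gaze_speed_deg_s:   gaze angular speed reported by the eye tracker
        desired_offset_deg: rotation the redirection controller still wants to
                            slip in between the virtual and the real world
        """
        if abs(gaze_speed_deg_s) < SACCADE_SPEED_DEG_S:
            # Eyes are not saccading: change nothing, reset the per-saccade budget.
            self.injected_this_saccade = 0.0
            return 0.0

        # Mid-saccade: nudge the virtual yaw, but never by more than the budget.
        budget = MAX_YAW_PER_SACCADE_DEG - self.injected_this_saccade
        step = max(-budget, min(budget, desired_offset_deg))
        self.injected_this_saccade += abs(step)
        return step


# Hypothetical frame-loop usage: a fast gaze sweep (420 deg/s) lets the system
# sneak in up to 0.5 degrees of yaw toward a 3-degree redirection goal.
redirector = SaccadeRedirector()
extra_yaw = redirector.update(gaze_speed_deg_s=420.0, desired_offset_deg=3.0)
```

The key design point is that the virtual world only ever moves while the tracker says a saccade is in flight, so whatever redirection the controller wants gets doled out in small, invisible slices.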
Embedded below is a video demonstration and overview, which also covers other ways of manipulating the perception of space in VR and how this technique avoids their pitfalls.
RoadToVR wrote up the demonstration at GTC in an article, but [Anjul] and [Qi] will be publishing all of the details as a SIGGRAPH paper this summer.
Gently fooling the senses can happen in other, less subtle ways. For example, by using Galvanic Vestibular Stimulation, which — if one isn’t picky about terminology — could pass for a form of mind control.
I have a condition called nystagmus which means that I’m basically unable to hold my eyes still. I wonder how well this would deal with it?
It has an interesting relation to some of what’s covered in this article; although my eyes are constantly moving, the brain still stitches it all together into a coherent, stable image. It has some very strange effects, though. If you draw a series of vertical lines (like the number 111111111111), I can look at it and it looks perfectly clear. But ask me to count how many lines there are and I’m completely unable to because I can’t keep track of which one I’m up to. Even pointing a sharp object at each one, this is extremely difficult because I can’t easily use visual feedback to move the pointer from one line to the next. But turn the page through 90 degrees so that the lines are horizontal and stacked vertically, and suddenly it becomes very easy, because now my eye movement is perpendicular to the axis I’m trying to count on.
That’s fascinating. I haven’t bothered looking it up, but I used to have the exact opposite problem. I was able to relax enough to completely stop the saccadic movement – the result being that within a couple of seconds my vision would start fading from the outside in (tunnel vision?). With a bit of practice, or when I was really tired, I was able to relax until my vision was completely gone – just a greyish blur, not unlike static on an old analog TV. The slightest twitch, or any change in what I was looking at, and the image would snap back like a light being turned on.
I threw a prof a curve in one class at university. We were to look at a grid – I don’t remember why, some kind of eye test/experiment – I think we were supposed to try and determine whether the lines were straight or curved. I was tired from a long week and was able to relax and make all the lines disappear. So I asked, “What happens if the lines disappear?” – I think he thought I was drugged up or something, as he wasn’t able to provide a good explanation.
Do you know if there is a medical term for your condition?
I don’t think it’s really a “condition”. I think it’s just https://en.wikipedia.org/wiki/Neural_adaptation
Later on I read that one of the purposes of saccadic movement is to prevent the brain from ignoring visual stimulation (i.e. ceasing to see).
Most people can do it. Just needs concentration.
In older people, the saccades start to weaken so people automatically start to shake their heads to compensate.
When I was little I sometimes fell asleep doing that. If you keep your eyes still, the data stream stops and you can fall asleep with your eyes open, so I would lie in bed looking at the ceiling without moving my eyes and fall asleep that way.
The problem, though, was that at that time I also sleepwalked, and I think there might be a relationship between the two.
Dry out your eyes.
Our eyes get their huge dynamic range from what’s essentially an AGC function. The problem is that this gives any single sensing cell a high-pass temporal response. Moving your eyes around works around this, as it keeps the image on the retina moving at a rate above the high-pass cutoff. If you stop moving your eyes long enough, the AGC will settle and you’ll see nothing (neither brighter nor darker than the long-term baseline). Kinda hacky, but how else are you going to make a sensor with more than 180dB dynamic range?
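A quick toy illustration of that high-pass idea, assuming a simple first-order filter in Python (my own sketch, not a model of the retina; the time constant is arbitrary): a constant input fades to nothing, while a slightly jittered one keeps producing a signal.

```python
import math

def high_pass(signal, dt=0.01, tau=0.5):
    """First-order high-pass filter: y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    a = tau / (tau + dt)
    out, y_prev, x_prev = [], 0.0, 0.0
    for x in signal:
        y = a * (y_prev + x - x_prev)
        out.append(y)
        y_prev, x_prev = y, x
    return out

# Constant brightness ("eyes held perfectly still"): the response decays away.
static_view = [1.0] * 500
print(round(high_pass(static_view)[-1], 3))                           # 0.0

# The same brightness with a little jitter ("saccades"): response stays alive.
jittered_view = [1.0 + 0.2 * math.sin(0.4 * n) for n in range(500)]
print(round(max(abs(v) for v in high_pass(jittered_view)[100:]), 3))  # ~0.2
```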
“Kinda hacky, but how else are you going to make a sensor with more than 180dB dynamic range?”
You know better than asking THIS group. Maybe ours will use unobtainium?
Or a 555
The phenomenon you’re describing is called Troxler fading, if you’re looking for a search term for finding out more.
I DONT WANT MY MIND TRICKED
One step closer to Holodecks!
My thought also.
I believe it was mentioned that this is how the Holodeck environments compensated for being smaller than the “real world”.
I thought Holodecks used a system of non-contact with the floor and moving scenery to give the “appearance” of moving forward.
That and inertial dampeners / artificial gravity to just move you around without any feeling of movement.
Idea is cool.
Humans are just like chickens.
But I could not listen to the robo voice.
Do humans taste like chicken?
B^)
no, more like pork that’s been stretched out of shape.
Reminds me of The Bangles’ 1980s hit, “Walk like any chicken.”
https://www.youtube.com/watch?v=K__tPCUC8Xc
Maybe inverse saccadian compensation is why the woodcock walks like an Egyptian, or maybe they just “got rhythm”.
That bird is making eyes at me, I know it.
Whether I’m talking about that woodcock or about the Bangles’ break-up will show how old you are.
yes, Plato would say “featherless biped”
Plato set out to define “human being” and announced the answer: “featherless biped.” When Diogenes of Sinope heard the news he came to Plato’s school, known as the Academy, with a plucked chicken, saying, “Here’s the Platonic human!” Naturally, the Academy had to fix its definition, so it added the phrase “with flat nails.”
https://opinionator.blogs.nytimes.com/2016/04/04/of-socrates-cynics-and-flat-nailed-featherless-bipeds/
This is akin to waiting for the VSYNC to replace the image, no?
Interesting comparison, it’s very similar.
I haven’t watched the video or read more than this article, but are they keeping an eye (ha!) on if and how this affects movement in the physical world?
I’d think the perception of orientational information from parts of our inner ears would adapt to the manipulation in VR (given enough time) and hinder our movement back in the physical world without the VR manipulation (resulting in maybe drunken movements? – could be kinda funny to watch and experience).
Even worse: if the manipulation in VR is not constant (factor/rate/whatever) and varies, the brain might train itself to ignore the inner ear because it sends ‘wrong’ signals…
That doesn’t seem like a bad idea for some. My understanding is if the brain is perceiving mixed signals from the eyes and inner ears it assumes food poisoning and cues up a gastro-intestinal flush. Being able to condition oneself to not suffer such intense vertigo would be nice. (I can’t even look at a spinning amusement ride without getting a nauseating headache).
It’s highly context-sensitive.
People get car sickness, unless they’re driving themselves. Then the same discrepancy between eye and ear gets ignored because the brain anticipates it.
I learned from listening to Cmdr Chris Hadfield talk that this happens to astronauts who spend time on the ISS. The body learns to rely on visual information for balance, because in microgravity the inner ear doesn’t provide useful data. Once you return you feel TERRIBLE as your body does the “throw up, go lay down” response to all the crazy vertigo. (It goes away as your body re-adjusts to normal gravity, happily.)
Does this work with Chickens?
The inner ear isn’t actually all that accurate. It’s mostly there for orientation during fast movements. These small-scale manipulations probably wouldn’t be noticeable in the inner ear. It’s like walking blindfolded: you never go in a straight line, even though you FEEL like you are.
It is accurate enough that you can stand on one foot with your eyes closed.
There are more sensory systems involved in preventing a fall when you stand with your eyes closed. Think pressure sensors under your foot, muscle tension sensors, joint positions etc.
Maybe stick magnets behind the ears?
Hopefully, with magnets behind my ears, my glasses wouldn’t slide down my nose…
B^)
Not to derail the conversation – but checkout Keepons. – discrete clear silicone(?) hooks that extend your glasses arms around your ear lobes. Sadly they are a consumable, but they definitely work!
I had something like this in mind with my comment.
https://www.wired.com/2015/09/hacking-inner-ear-vrand-science/
I’m not sure if that would play hell with your innate sense of direction. https://en.wikipedia.org/wiki/Magnetoreception
The jury is still out on that one I believe.
I don’t believe it should have any detrimental effect. The inner ear is good at detecting changes in rotation, but it’s not incredibly accurate and integrates error over time. The brain already relies heavily on visual input (or other tactile stimulus) in order to correct the signal from the inner ear. This technique induces that mechanism artificially, but the inputs aren’t any different or stronger than what you get when you’re walking around in reality.
Your perception also adapts if you are at sea for a week or longer. And then when you are back on land, you think/feel the restaurant is moving slightly.
While the “hack” is interesting, it was also interesting to read about the saccades.
Call me crazy, but this seems like it would slowly decrease your coordination as you would be contending with multiple feedback behaviors, i.e. virtual and real.
Hey Crazy! :)
Joke aside, I think you’re wrong. How is this feedback different from current gaming systems? They also provide various kinds of virtual feedback (both optical and audio), and they have been repeatedly shown to improve short- and medium-term coordination, with repeated use helping long-term as well (up to 1 year was tested, if I remember correctly from that TED talk I watched).
The brain is more than capable (in healthy humans) of differentiating between the two feedback loops and switching to the one that best suits its present “needs”. Be it visual cues from a monitor and auditory ones from a headset, or a VR device.
Edit button needed:
Although I must admit the feedback loop of the saccades could be affected by prolonged use of this method? I mean, the saccades are still being driven by muscles as a response to certain (not necessarily visual) feedback. Obtaining unexpected visual results could hurt that feedback loop in the long run, maybe? Especially if done wrong.
There are behaviors of people in a VR where they anticipate the real world physics. A simple example is a VR escalator. When one gets to the top and steps off, one will rock a bit like you do when you step off a real escalator.
I found it interesting they could guide the eye by brief flashes.
This tech has been a long time coming. I remember reading an abstract in Science News in the late 90s about an experiment where people were using the previous generation of VR to influence the direction of movement by controlling so-called “visual flow” – essentially moving the ground texture differently than expected.
Very interesting article!
I read about this idea quite some time ago – cannot exactly remember when or where, but I am sure it was around Y2k. Back then I had a discussion with someone experimenting with it (on, I admit, pre-modern “3d glasses”).
The problem was that it only worked if you kept the screen quite small. I notice movement within a field of almost 170 degrees and I can distinguish shapes (not colors) within 160 degrees without moving my eyeballs – if the screen was wide enough to give me anything like “almost real vision” (it would have to cover 180 degrees of my eyesight, which is unlikely to be available soon), any distortion/displacement of “markers” that my brain picked up that didn’t match up in motion (speed) would immediately make me sick.
My guess is that this tech is “well fitted” for modern “3d goggles” – narrow field of view and auto-blurring at the edges to keep data transfer manageable.
That’s the point of this approach – there is no movement or distortion that you can see. The system shifts the entire viewpoint slightly, but only when you’re in the middle of a saccade – so your eyes are effectively switched off. When the saccade ends, your visual system automatically compensates for any minor differences between the expected and actual viewpoint.
The flashing markers are there just to trick you into changing your gaze regularly; they also shouldn’t be consciously detectable in normal usage as they’re subtle and removed before the triggered saccade completes.
I have frequently seen assertions that our eyes “shut down” or “go offline” or that we “go blind” during saccades. However, in my experience, this is not true, and I can reliably get visual information during a saccade. One example is while viewing a video projection, best observed if it’s displaying a white bar against a dark background. Viewed normally, it’s simply white. However, if I saccade my eyes past it, perpendicular to the long axis of the bar, I can see it separate into RGB fringes, suggesting that the colors are projected in sequence rather than simultaneously. Another common example is viewing an object rotating fast enough to be a blur, such as the wheel of a car moving next to me on the road, or a fidget spinner in my hands. During a saccade past such an object, I sometimes catch a flash of detail as my point of view sweeps past and momentarily matches linear velocity with a sector of the wheel. A final example also comes from the road: LED tail lights. I know that many of them are PWM dimmed from “brake light” brightness, because during a saccade, I can see a dashed line of red.
So, I guess my question is, am I abnormal in this regard, or is the conventional wisdom wrong?
You’re not the only one. I can’t stand following PWM brake lights at night, especially those that Cadillacs seem to use (particularly bright and low PWM rate, it seems). I have the same issue; a saccade/eye movement causes dotted/dashed trails from the LEDs. When I switched to (not even that cheap) LED Christmas lights, they drove me so nuts I put a bridge rectifier inline with them to cut down the flashing/strobing due to this effect (even the Philips brand were only half-wave, due to just putting LEDs in line with the AC). Looking right at them was fine, but a saccade past them (or anywhere with the lights in my peripheral vision) turned them into a strobe effect. Rear-projection color-wheel DLP TVs that were popular for a while also produced a shimmering effect that didn’t seem to bother others, even when looking right at them, which made me wonder if maybe there are variances in non-saccade visual refreshes also…
Not sure if it’s a genetic thing or what; maybe some have faster or slower ‘refresh rates’ than others, and some have varying refresh rates (if any at all) during a saccade…
No, it’s because your eyes are an analogue system and not perfect. High contrast edges and high-frequency changes can generate a stronger signal in the retina that persists beyond the saccade inhibition. So you can detect the limitations of the system if you actively go looking for them, but it’s still operating, and 99.9999% of the time you don’t notice it at all.
Yeah, interesting, but I have seen the idea already. Now that I’m thinking of it, it might even have been here on Hackaday some months ago.