Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural

We love hearing about a good experiment, and here’s a pretty neat one: researchers used a VR headset, an off-the-shelf VR360 camera, and some custom software to glue them together. The result? Owl-Vision squashes a full 360° of undistorted horizontal visual perception into 90° of neck travel to either side. One can see all around oneself, without needing to physically turn one’s head any further than is natural.

It’s still a work in progress, and there’s currently no free way to access the paper, but the demonstration video at that link (also embedded below) gives a solid overview of what’s going on.

The user wears a VR headset with a 360° camera perched on their head. This camera has a fisheye lens on the front and back, and stitches the inputs together to make a 360° panorama. The headset shows the user a segment of this panorama as a normal camera view, but the twist is that the effect from turning one’s head is amplified.

Turning one’s head 45 degrees to the left displays the view as though the head had turned 90 degrees, and turning 90 degrees (i.e. looking straight left) displays the view directly behind. An entire 360 degrees of horizontal visual awareness is therefore compressed into a person’s normal 180-degree range of neck motion, without having to resort to visual distortions like squashing the video.
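
For the curious, here is a rough sketch of that mapping. It is not the researchers’ code; the 2× gain, the 90° headset FOV, and the equirectangular-panorama crop are all assumptions based on the description above.

```python
import numpy as np

GAIN = 2.0          # 45° of real head yaw -> 90° of displayed yaw (assumed)
HEADSET_FOV = 90.0  # horizontal field of view shown in the headset, degrees (assumed)

def displayed_yaw(physical_yaw_deg):
    """Amplified yaw of the virtual camera, wrapped to [-180, 180) degrees."""
    return ((GAIN * physical_yaw_deg + 180.0) % 360.0) - 180.0

def headset_view(pano, physical_yaw_deg):
    """Crop the headset's view out of an equirectangular 360° panorama.

    pano is an H x W x 3 image spanning -180..180 degrees horizontally.
    """
    w = pano.shape[1]
    center_col = int((displayed_yaw(physical_yaw_deg) + 180.0) / 360.0 * w)
    width_px = int(HEADSET_FOV / 360.0 * w)
    # Roll the panorama so the desired view center lands on the middle column,
    # then take a symmetric slice; this avoids splitting the crop at the seam.
    rolled = np.roll(pano, w // 2 - center_col, axis=1)
    return rolled[:, w // 2 - width_px // 2 : w // 2 + width_px // 2]

# Looking 90° left (the limit of comfortable neck travel) shows the view
# directly behind the wearer.
assert displayed_yaw(90.0) == -180.0
```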

In a way this calls to mind the experiments of American psychologist George Stratton, whose fascinating work in visual perception involved wearing special eyeglasses that inverted or mirrored his sight. After a few days, he was able to function normally. Owl-Vision seems very much along those lines, albeit much less intensive. It’s apparently quite intuitive to use, with wearers needing very little time to become accustomed. Messing with perception via VR has gone the other way, too. Adding lag to real life is remarkably debilitating for interactive tasks.

The short video demo for Owl-Vision also includes a driving simulator demo in which the driver shows off the ability to look directly behind themselves with ease.

[Video: Owl-Vision: Augmentation of Visual Field by Virtual Amplification of Head Rotation, Augmented Humans International Conference 2024]

29 thoughts on “Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural”

      1. Yeah. As somebody who has used TrackIR a bunch, it’s sorta similar, just with the extra benefit that you don’t need to keep staring at the monitor that’s fixed in 3D space as you move your head about.

        But yeah, my brain adapted to TrackIR for FOV expansion pretty fast, to the point it felt somewhat similar to really turning your head to look far around, even when your eyes were trained on the fixed screen. I could imagine the VR version working even better.

  1. I have used VR headsets for quite a long time, but the only time I ever got motion sick was when I accidentally applied an amplification factor on the head rotation as well.
    I would love to know whether a wider field of view feels natural, but amplification of head rotation will probably make many people sick.

    1. I think it’s not a coincidence that the demonstrations all involve the user being more or less stationary. Even if getting used to the new sense of “where” one is looking is easier than expected, I can only imagine how disorienting it could be to try to move around as well.

      1. I’ve used a similar setup in games (Elite Dangerous, a space flight sim) to use head turns to look at the edges of my monitor to simulate a full 90 deg head look (it was fully configurable for both X and Y axis) and even when moving in full 3D space (albeit with my actual body staying stationary) it was remarkably natural and resulted in essentially zero disorientation or nausea, even when doing flips and rolls.

        I’m definitely going to try to build this into the VR face mask I’m working on.

    2. As long as you are moving your head and the visual result is consistently exaggerated, I don’t think it will bother most people – it seems like most of the motion sickness comes from your view being moved while you’re not feeling it or intending to move. So a VR experience where the floor falls out underneath you will trigger most folks who suffer, as you don’t feel like you are falling and didn’t plan it, so your eyes are at odds with everything else; but when ‘jumping’ deliberately you expect to fall, so even though your inner ear isn’t feeling the motion it doesn’t matter.

  2. I’ve often thought about our limited FOV and how our brains deal with the edges of vision. With the upcoming advances in brain interfacing, where there are pushes towards restoring some version of sight to impaired individuals, if we were to add new sensors to sighted people, would we be able to map that onto our perception of our surroundings? This seems to point to yes: we have enough ability to process a larger field of view, we just lack the inputs.

    1. I guess it depends on what you mean by “see”.

      We only “see” a 2-5 degree cone.
      Everything else is just sort of object detection.

      It’s one of the reasons we move our eyes around so much.
      We focus on tiny little areas of detail, and everything else is just there so we don’t bump into that tree.

      We don’t even lose that much once we can’t directly see the tree anymore. We still know where it was, and we keep track of it unconsciously.

      As a side note, this is why having a conversation (phone or otherwise), listening to music, or listening to an audiobook is SO bad while driving.
      Our brains keep track of the objects around us. But when we talk to someone and are not facing them our brains replace our surroundings with a fake “conversation” place. Not literally. We don’t HAVE to imagine a physical space. But the conversation itself is a “place”. And humans cannot really do more than one thing at once. (If you think you can, your brain is just lying to you again)

      1. Listening to music or audiobooks actually improves my driving focus.

        I’m a bus driver, and spend 5-8 hours driving in a work day. The focus required to keep a 2.5×12.5m vehicle on the road is immense.
        Music and Audiobooks distract me from being distracted. They are background noise to me, but without them I start to daydream to entertain myself and that takes my focus off the road.

      2. Uh, no, that’s absolutely not universal. Not everyone’s brains work the same way; some people don’t even have an internal monologue, or can’t picture objects they have previously seen. Most people can walk and talk at the same time. Even more people can have music playing without ever interrupting their driving. People often use different schemes for the same tasks; and that can change which resources are needed. One person might count by speaking the numbers to themself inside their head, and so that would interfere with verbally speaking to someone else at the same time, while they might still be able to read or write or maybe listen. Someone else might visualize the numbers or the patterns of whatever they are counting, and so they wouldn’t have much trouble with talking but may not want to look at anything you show them until they’re done counting.

        I don’t know how many people are this way, but I don’t need to do much of anything physically or visually to speak to someone who isn’t present; I am very used to not seeing the other person and just having a directionless voice in that case whether it’s with a phone, a radio, with or without a headset, or even just nearby but outside of my field of view. Resource contention only comes in if either task begins to require too much of the same resource at the same time, or if the resource I run out of is the attention it takes to keep up with more than zero tasks at once. More than zero instead of one, because if you’re tired enough you may not be able to dedicate enough attention to complete even one of some tasks. But the normal level is enough for more than one normal task; almost nobody can’t do anything at the same time as anything else; they just might not be able to all do very significant things at the same time. And having one very easy task with no surprise resource demands and/or that can be dropped at any moment (music, podcast, conversation) can keep other things from creeping into your attention when your priority is driving. Getting bored and zoning out or daydreaming can be bad; pausing midsentence to brake and cuss is relatively fine.

        As for vision, in the past I could lean against a wall and be aware of approximately what was on it to both sides of me at once, or what was on the other side of the windows, for certain angles slightly past 180. It took a bit of attention/effort to interpret the peripheral vision rather than automatically just turn my eyes towards whatever I saw, of course. The maximum capability anyone has with peripheral vision can’t be just “object detection” which can be done with just memory or hearing or even awareness of air currents or heat on your skin sometimes. It at least is enough to get a sense as well of rough shape and nature and motion, even by default.

  3. Real life already HAS lag.
    Quite a bit of lag actually.

    We have signal lag because our nerves are sssslllloooowwww.
    We have processing lag.
    We have “command” lag because the nerves going in the other direction aren’t any faster.

    The only way we can even function is to predict patterns/movement, and the autopilot of reflexes.

    Your brain is lying to you if you think we live in a real-time world…

    1. Yeah, you could say your brain is lying to you, or you could think of it as having very effective anti-latency algorithms.

      I’m reminded of that one blind guy (brain-blind, not eyeball-blind) who nevertheless could perfectly catch a ball that was flying at his face. He never consciously recognized seeing the ball.
      The part of the visual system that was responsible for reflexes bypassed the damage which made him blind. Some part of the brain stem or spine or whatnot was able to very quickly process some data coming from the eyes and make him catch the ball. No frontal lobe necessary.

      1. IIRC, the part of the brain you are describing sits(?) along the optic nerves, before the visual cortex (at the rear of the brain). So while the person can’t “see” (visual cortex), the motion information is processed before the “damage”.

  4. Next experiment: pigeon-vision, where you try to compress as much of 360-degree vision as possible into the human visual field. Kind of like cranking up the FOV settings in first-person shooters back in the old days of 4:3 monitors.

      1. Fun fact! You already have an analogous motion when your eyeballs move from one thing to another; the brain trims that out and builds fake perception into it so that you don’t notice. Well, mostly don’t notice. You ever look at the second hand of a clock and think it seems like it’s not moving at first, just as you look at it? That’s the brain back-filling that moment with a copy of the ‘new’ visual scene. Pigeons do the same thing when they bob their heads, which is part of why they do it; they can’t move their eyes separately, so they stabilize their whole skull between movements.

        1. It can sometimes get trippier, as well. I’ve glanced down to the radio for a second at a red light, glanced back up expecting it to be green, see green, then red, and a second later it turns green.

    1. “In a way this calls to mind the experiments of American psychologist George Stratton, whose fascinating work in visual perception involved wearing special eyeglasses that inverted or mirrored his sight. After a few days, he was able to function normally.”

      The brain can adapt to these kinds of input changes pretty quickly when it’s 1:1 – it’s basically just remapping the inputs. Our eyes naturally see everything upside down to begin with.

      I’d be careful about extended exposure to visual data that isn’t 1:1 with our natural vision, though. There was that (in)famous experiment where they put vertically polarized lenses on kittens, and, even after they were removed, they were unable to effectively interpret vertical boundaries and were stuck forever running into things.

      I forget the exact details (it’s been around 20 years since I read the research) as to whether they ever ended up with normal functional vision – usually research animals in these studies don’t have long, happy lives, as the only way to study the effects on the brain in detail involves a lot of slicing and dicing – but you don’t want to push adaptive neuroplasticity too far unless you’re prepared to potentially seriously screw yourself over.

      1. I alter my visual field for most of the year, wearing multifocal contacts to avoid having to use readers when doing fine detail work. During the winter their “zones of magnification” cause me issues snowboarding, so I switch to monofocal lenses. I have visual distortions for a few days while my brain forgets the irregularities they induce. When I switch back, the “zones” are annoyingly obvious for a few days, then just sort of melt away as my brain readjusts to their presence. After a few days my brain switches and without fail I find myself second-guessing whether I put the wrong lenses in because I can’t see the zones anymore.

        I wonder if the issue with the kittens’ irreversible change wasn’t related to their early-life development of “bad base code” as a result of the experiment.

  5. As a monocular person for most of my life, I’ve always thought putting a camera in my glasses could widen my field of view. Now that VR headsets are commonplace, there might be some hope. If only the glasses weren’t so bulky…

  6. I’ve read sci-fi books where the center 45 degrees of a soldier’s helmet video feed is normal vision, and then it gets progressively compressed so that their peripheral vision includes directly behind them. This reminds me of that.
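
    Something like the piecewise remap below captures the idea, as a sketch only: the undistorted 45-degree center comes from the description above, while the 90-degree display edge and the quadratic fall-off are invented numbers.

    ```python
    # Hypothetical "progressive peripheral compression" mapping: the central
    # 45° cone is shown 1:1, and compression then ramps up linearly so that
    # the edge of the display (±90°) shows the view directly behind (±180°).
    CENTER = 22.5        # half-angle of the undistorted central cone, degrees
    DISPLAY_EDGE = 90.0  # half-angle of the display / human visual field (assumed)
    WORLD_EDGE = 180.0   # the display edge maps to directly behind the wearer

    _SPAN = DISPLAY_EDGE - CENTER
    # Compression factor is 1 at the edge of the central cone and grows
    # linearly, sized so the outer zone covers exactly out to WORLD_EDGE.
    _K = 2.0 * (WORLD_EDGE - CENTER - _SPAN) / _SPAN ** 2

    def world_angle(display_deg):
        """World direction corresponding to an angle on the helmet display."""
        sign = 1.0 if display_deg >= 0 else -1.0
        a = abs(display_deg)
        if a <= CENTER:
            return display_deg                      # 1:1 in the central 45°
        d = a - CENTER
        return sign * (CENTER + d + 0.5 * _K * d * d)

    assert abs(world_angle(DISPLAY_EDGE) - WORLD_EDGE) < 1e-9
    ```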
