The holy grail of display technology is to replicate what you see in the real world. This means video playback in 3D — but when it comes to displays, what is 3D anyway?
You don’t need me to tell you how far away we are from succeeding in replicating real life in a video display. Despite all the hype, there are only a couple of different approaches to faking those three dimensions. Let’s take a look at what they are, and why they can call it 3D but they’re not fooling us into believing we’re seeing real life… yet.
My Superpower: I See Things Differently (But So Do Many Others)
When you share a secret with the world you lose all element of surprise, but this one’s too good to keep under wraps. I have a genuine bona-fide superpower, one that all those comic-book authors would have no doubt incorporated into their characters had they only thought of it.
It’s pretty mundane really; no radioactive flies or anything were involved. Instead I have a thing called a strabismus. It’s a congenital condition: my left eye looks about 20 degrees to the left of centre, and the superpower it confers is that I can see over my shoulder to something of what is happening behind me. I should have a cape or something, but sadly our Hackaday overlords’ budgets don’t stretch that far.
In reality, having peripheral vision that stretches much further at its leftward extreme than everyone else’s isn’t much use beyond a party trick, but having a strabismus has another, more significant effect. Since the two views of the world that my brain could see while it was developing in my very early childhood were not aligned with each other, I don’t have stereoscopic vision. A quick read around the subject reveals that I share this to some extent with as many as 12% of people, so perhaps quite a few readers will understand. That’s not to say that I and people like me inhabit a 2-dimensional world as if viewing it through a TV screen, though. Instead we see in 3D in a different way to people with stereoscopic vision, and as a result the subject of 3D imaging has become something of a personal fascination.
Types of 3D Displays
Stereoscopic Displays | If you think of a 3D display, the chances are you will first imagine a stereoscopic one: watching Avatar in a cinema with a set of polarised glasses perhaps, or even the vomit-inducing Nintendo Virtual Boy console. Your brain is tricked into believing it has a 3D object in front of it by feeding each of your eyes its own view of the same scene from a slightly different viewpoint. It’s easy to make a stereoscopic imaging system for the relatively trifling expense of an extra camera and a bit of display trickery, and delivering the correct image to each eye is a long-ago-solved problem.
Parallax Displays | There is of course another type of 3D imaging, one that uses parallax to give an impression of 3D. With these images the impression of 3D comes as you move around an object: the movement reveals sides of it that were previously hidden, and your brain uses that store of information to infer a 3D shape. This can be as simple as one of those novelty printed images with a plastic lenticular lens that breakfast cereals used to give away to children, it can be a holographic image, or it can be an advanced electronic display system such as a rotating volumetric display or a flat lenticular display like the one Alex Hornstein demonstrated at our recent Superconference. It’s true that a display of this type with enough resolution could also be a stereoscopic one, in that a different point of view could be presented to each eye, but since this approach works even when displays lack that resolution, it’s safe to say that parallax alone conveys enough for the brain to be fooled into seeing in 3D.
Hybrid Approaches | Finally there are displays that combine both stereoscopic and parallax effects to generate a 3D image. Immersive VR using a set of goggles, such as one of the Oculus products, provides the viewer with a primary stereoscopic image whose parallax changes as the point of view moves. The parallax effect is not there primarily to provide a 3D image, though; instead it is part of the effect of moving around the immersive world.
Perhaps All We Need is Parallax
As somebody without stereoscopic vision I have no use for 3D movies or headsets. I do however find myself in a unique position to evaluate the effectiveness of the parallax effect delivered by those display technologies that support it, because I can do so without the distraction of stereoscopy. My perceived 3D is perhaps best imagined as being done in software: I know the mug of tea next to my keyboard is a 3D object because my brain knows from experience that mugs usually are 3-dimensional, and it has learned to interpolate dimensions and distances from that experience, supplemented by parallax information delivered as I move my point of view. It’s not perfect: for example I have a problem with fast-moving objects such as a thrown ball, and for some reason I often grasp an inch short of drawer handles, but it’s otherwise pretty good. I am told that some people who grow up with stereoscopic vision but then lose sight in one eye as adults do not successfully evolve this software 3D; it would be interesting to hear from readers with experience in the comments.
However, for “in-brain software 3D vision” to work there has to be real parallax rather than just a semblance of it. The Oculus presents me with a very confusing image of two jarringly different stereoscopic views, and when I cover one eye to see only one picture I enter a very bizarre world of two dimensions with a few objects popping out in full 3D. These stereoscopic views are a one-size-fits-all viewing experience. If your visual cortex is accustomed to seeing from a different perspective than the one you’re presented with (and of course everyone’s wetware is slightly different, so this affects everyone), the perfect effect can never quite be reached.
A hackspace acquaintance had a first-generation Oculus Rift dev kit a few years ago, and we all had a try on an early demo set in an office. There was a house of cards on the desk, and for me it was as if I was seeing the room in 2D, as if on a TV screen, with an unexpected full-3D house of cards jutting out of the scene. Perhaps the parallax effect was slightly distorted compared to the real world, or perhaps it lacked sufficient resolution for me to interpolate 3D from the whole scene; for other users the device is in any case almost wholly reliant on the stereoscopic effect. By comparison, the parallax-only technologies present a good impression of 3D to my perception. Laser holograms and volumetric displays produce the effect of a real, almost graspable object within their space, as do lenticular displays, with the strength of the effect being dependent on their resolution.
What this tells me, with my unusual experience of the matter, is that all the information required for complete 3D display perception is present as much in a parallax-based view as it is in a stereoscopic one. And what several generations of the 3D cinema fad since the 1950s should also tell us is that while stereoscopic 3D unquestionably works for those with the ability to perceive it, it consistently fails to become popular while it requires the viewer to wear glasses or headsets. Plus, once you move your head, the lack of parallax data breaks any illusion of depth that had been created.
3D is So Close, Yet So Far Away
We are now several years into the easy availability of VR headsets and headset attachments for mobile phones, yet it’s still extremely rare to encounter someone actually using one. Sure, we’ve all given it a try, but even with a significant amount of content now available in stereoscopic form it’s questionable whether they’ve moved beyond the same level of novelty that 3D glasses in the cinema had. To steal a phrase from The X-Files, I want to believe when it comes to 3D displays, but my eye condition notwithstanding, I think we still have a way to go.
Header image: Matthew Henry [CC0].
Stereoscopic vision only works because of parallax, so it makes sense that parallax is the mother lode of 3D data (and, of course, much harder to capture). You can record a stereoscopic movie at only 2x the data rate of a normal movie, but to reproduce correct parallax in the presence of arbitrary head movements (viewpoint changes) you end up needing full holographic images, or a complete enough description of the scene to render it as seen from any viewpoint. That’s a lot of data to manage, and if you’re displaying it on non-holographic displays you’re also signing up for a lot more computation, as you must render each viewer’s viewpoint(s) in real time.
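A rough back-of-envelope sketch of that data-rate gap (the resolution, frame rate, and the 8×8 viewpoint grid are all illustrative assumptions, not specs for any real system):

```python
# Illustrative comparison of raw, uncompressed video data rates.
# All the numbers here are assumptions chosen for the sketch.
def raw_rate_gbps(width, height, fps, views, bits_per_pixel=24):
    """Uncompressed data rate in gigabits per second."""
    return width * height * fps * views * bits_per_pixel / 1e9

mono   = raw_rate_gbps(1920, 1080, 60, views=1)      # a "normal movie"
stereo = raw_rate_gbps(1920, 1080, 60, views=2)      # exactly 2x mono
# A hypothetical light field sampled on an 8x8 grid of viewpoints,
# enough to give some head-motion parallax over a small range:
field  = raw_rate_gbps(1920, 1080, 60, views=8 * 8)  # 64x mono

print(f"mono:            {mono:.1f} Gb/s")
print(f"stereo:          {stereo:.1f} Gb/s")
print(f"8x8 light field: {field:.1f} Gb/s")
```

Compression and scene-description formats change the constants dramatically, but the scaling is the point: stereo only doubles the payload, while free-viewpoint parallax multiplies it by however many viewpoints you sample, or else pushes the cost into real-time rendering instead.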
Makes me wonder if we’ll see calibration routines in the future that measure the exact location of the viewer’s eyes and their ranges of motion, then render specifically for those measurements.
Lenticular displays are pretty cool (I’ve seen the Looking Glass displays at a couple of conferences), but I think the more feasible route to total immersion is going to be goggle-based stereoscopy with insane amounts of video processing behind it.
You can do it even today, but it is both unnecessary (apart from knowing the interpupillary distance, which is needed to correctly fuse the stereoscopic images instead of seeing double) and it doesn’t solve the fundamental problem: the parallax data is simply not available in a typical 3D movie. As soon as you move your head the view is slightly (or a lot) “wrong”, because the image remains the same.
In VR your head is tracked and the image changes (it is rendered from a different point of view) depending on your head’s position/orientation.
Yes, the VR rendering is what I’m referencing. Isn’t this rendering going to be very sensitive to where your eyes are positioned (distance apart, one slightly further forward than the other, resting angle of gaze). To approach perfection, you have to present the scene as if it is photons naturally bouncing off of the items in front of the view. I would think that precise point-of-view of each eye for the rendering is how you mimic that.
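To make the “render for the precise point of view of each eye” idea concrete, here is a minimal sketch; the function name and parameters are hypothetical, and the head pose is simplified to position plus yaw. It shows how a tracked head pose plus an interpupillary distance yield two per-eye camera positions every frame:

```python
import math

def eye_positions(head_pos, head_yaw_deg, ipd_m=0.063):
    """Place the two virtual cameras for a tracked head pose.

    head_pos: (x, y, z) of the midpoint between the eyes, in metres.
    head_yaw_deg: rotation about the vertical axis (0 = facing forward).
    ipd_m: interpupillary distance; 0.063 m is a commonly quoted
           average, and this is exactly the per-user parameter the
           thread is discussing.
    """
    yaw = math.radians(head_yaw_deg)
    # 'Right' direction of the head in world space for a yaw-only pose.
    rx, rz = math.cos(yaw), math.sin(yaw)
    half = ipd_m / 2
    x, y, z = head_pos
    left  = (x - rx * half, y, z - rz * half)
    right = (x + rx * half, y, z + rz * half)
    return left, right

# Every tracked head pose yields two fresh camera positions; the scene
# is re-rendered from both each frame, which is where the parallax
# (and the extra computation) comes from.
left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), head_yaw_deg=0.0)
```

A real engine works with full 6-DoF poses and also builds per-eye projection matrices matched to the headset optics; the point of the sketch is just that both eye positions, and hence both rendered views, must be recomputed for every tracked pose.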
Most of the time that level of accuracy is not required. The scene might appear scaled up/down by 5-10% if your interocular distance is vastly different from the norm, but mostly the brain just compensates for that.
Now, the Hololens, as a mixed reality application, does require your interocular distance so that the holographic display matches the real-world geometry you’re interacting with. When you buy one, you have to give them your prescription.
Fun fact: most eye doctors are loath to share those numbers, since it would allow you to (gasp!) buy eyeglasses online! So at Ignite! last year, Microsoft had people in all the Hololens demos measuring and writing down the needed information in case you wanted to buy a Hololens later. You give them the number when you buy, but you can also connect a USB cable to your computer and update it later (which is how they were doing it at the conference).
Wasn’t one of the claims about light-field cameras that they could achieve a parallax effect with the right software? Maybe filming with that type of camera would help remedy the issue you described. It wouldn’t be something you’d see at a theater, as every viewer couldn’t have their own small changes to the point of view of an image on the screen, but a film watchable on 3D goggles could be processed to account for minor head movements.
If nothing else it seems like it would be more immersive, like when I took my sister to see Zootopia in 3D at the theater. The big jump-out-of-the-screen scenes like the flyover into the city weren’t that big a deal and nothing I hadn’t seen before, but the more understated details like the characters’ clothing and fur were what impressed me. It just gave the movie a lot more depth, like it was a movie that happened to be in 3D rather than a 3D movie, if that makes sense to anyone else.
The second holy grail to, “you are the movie” is “you are in the movie”.
Well, there is the parallax of apparent position against a more distant reference, and there is also the angle between the two eyes for closer objects regardless of background, isn’t there? You can tell by looking at the eyes of anyone with stereo vision whether they are looking at objects near or far. I mean, training the eyes to have angles for infinity when looking at something a few inches away is not easy, like with the stereo pairs of photos for a stereopticon. The great theoretical physics two-volume Morse and Feshbach has diagrams as stereo pair drawings, for example.
And I’m pretty sure the brain does some image reconstruction from projections (like CAT scanners), which you would get by moving around an object. I don’t think of parallax as a process involving motion as described in the write-up. Maybe that is the same thing.
This sounds like a hack waiting to happen…
Make your own 3D headset, with one screen off axis by 20 degrees.
Sorry but that is both completely ignorant and incredibly rude.
You don’t achieve parallax by offsetting a screen! Parallax happens because parts of the scene move at different speeds when you move your head, due to their being at different distances from you.
If you want to see the difference, imagine looking at a room and moving your head: the objects closer to you move more than objects farther away. Now if there is no parallax cue, it is like looking at a photo of that room instead. No matter how much you move your head, everything will move at the same speed and stay at the same distance relative to everything else.
A VR HMD achieves parallax if the scene is built correctly and the objects are at their correct depths. But even then you will perceive it only for objects that are relatively close to the camera. If the scene is “faked” and relies only on stereoscopic disparity (common with movies that have been converted to 3D, or with VR objects that move with the camera even though they have depth), the parallax cues will be all wrong and disturbing.
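The “closer objects move more” rule can be put into rough numbers: for a small sideways head movement, the apparent angular shift of a point falls off with its distance, which is why a photo (zero parallax) and the real room feel so different. A small illustrative sketch:

```python
import math

def parallax_deg(head_shift_m, distance_m):
    """Apparent angular shift of a point when the head moves sideways.

    For a sideways head translation of head_shift_m, a point at
    distance_m shifts against the background by roughly
    atan(shift / distance): large for near objects, tiny for far ones.
    """
    return math.degrees(math.atan2(head_shift_m, distance_m))

# A 10 cm sideways head movement, as in the look-around-the-room test:
for d in (0.5, 2.0, 10.0, 100.0):
    print(f"object at {d:5.1f} m shifts {parallax_deg(0.1, d):6.2f} deg")
```

A mug half a metre away swings through several degrees while a building across the street barely moves, and that difference in shift rates is the parallax cue; in a flat photo every object shifts identically, so the cue is absent.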
From article:
“As somebody without stereoscopic vision I have no use for 3D movies or headsets.”
My reply:
This sounds like a hack waiting to happen…
Make your own 3D headset, with one screen off axis by 20 degrees.
You should stop Wolf, you’re making yourself look like a fool and an ass.
I think Wolf’s point was that building such a headset would be interesting: it would let the author, or someone else with the same condition, compare a stereoscopic 3D view to their usual parallax-based view, by having the displays placed in the line of sight of each eye and displaying an image as if it were being seen by someone without the condition.
I don’t know if that would be possible if the brain is trained not to interpret vision stereoscopically, but it could be an interesting thing to test.
There might be something to that, current VR headsets are akin to wearing goggles, giving a tunnel vision effect. With both screens matching her pupils’ direction of gaze Jenny might gain some of her superpowered peripheral vision in VR that everyone else would lack.
I think Wolf is saying that the author could, through the construction of a custom headset, conceivably experience 3D like someone without strabismus does.
I think what Wolf is missing is that the author went through the brain-development stages of life with strabismus and thus their wetware is permanently configured for that situation. As a result, 3D would not be perceived the way someone without strabismus is accustomed to perceiving it, even with a specialised headset, making it a pointless endeavour.
I’m inclined to agree. Part of me would still like to try it though just to see how my brain would try to resolve it.
I think it would be an interesting experiment to try, that is why I mentioned it.
Check out the research of https://en.wikipedia.org/wiki/George_M._Stratton
“…He started his binocular vision experiments as well. In these experiments, he found himself adapting to the new perception of the environment over a few days, after inverting the images his eyes saw on a regular basis. For this, he wore a set of upside down goggles, glasses inverting images both upside-down and left-right…”
There is a slight chance it would be uncomfortable for her. It would be worth a shot, but yes, her brain most likely would reject the effect immediately; she would probably experience headaches and nausea. HOWEVER, science… we need to know, right? Perhaps there is something else going on that we haven’t yet discovered. Maybe her brain could adapt quickly, providing a short glimpse into the world of stereoscopic depth, if she doesn’t experience that currently. Merely closing one eye and still being able to tell approximately how far away something is: that, I suspect, is what she has done as she learned about the world around her. Stereoscopic depth perception should work straight away for her… but who knows.
“My reply:
This sounds like a hack waiting to happen…
Make your own 3D headset, with one screen off axis by 20 degrees.”
I don’t know why you are being criticized, Wolf, because you’re technically right that this headset could be built and that it would work for strabismus vision. Of course you’d have trouble getting content recorded from the right perspective without recording it yourself, but VR and 3D games that render each eye independently would work, assuming they allowed you to set the parameters.
That wouldn’t be necessary. The simple fact that the screens line up with the eyes would give her stereoscopic depth perception. No content creator would need to make special arrangements. It’s no different than having a super-wide interpupillary distance: it’s so wide that one eye is no longer parallel with the other, so you simply swing the screen to match the direction of view.
My dude, I get it. Don’t worry about these folks. Jenny can’t use VR because she has special eyesight requirements. I can tell you meant that VR devices might benefit if we designed them so that the screen axis could be shifted, in addition to the interpupillary distance. Lots of factors go into immersive VR. We don’t all have straight eyesight, and our interpupillary distances vary widely. Most 3D glasses or goggles were designed with the “average” human in mind. I believe the designers missed the point that it might make things more comfortable for everyone if individual parts could be shifted until eyestrain was relieved.
I think the OP is referring to my 20 degree left eye offset. Nice idea, but it might not work as I don’t have the 3D rendering ability to resolve the result that someone with stereoscopic vision would.
I also have this condition but had surgery to have my eye straightened to look directly forward. Unfortunately, the operation occurred after my brain had decided to make my eyes see separate images, so none of those types of 3D work. I have to close one eye (or just ignore that eye) to watch stereoscopic 3D movies, and it turns out to be a slightly blurry, darker version of the video. 3D movies with the glasses just do not work for me.
Thank you for the report, because I was also wondering what one would see with a custom VR headset (or simply video 1X magnification binoculars) with the displays aimed at the pupils. I guessed it would be kind of headache-inducing.
I think he believes that such a headset would allow you to see as if you’d had corrective surgery. I don’t know if that’s true or not, but the fovea of both eyes would be looking at the same portion of the image.
It’s not a novelty in the cockpit simulation genre. Being able to look out the window to see the runway to your left, or look up at the craft above you that you’re trying to chase down, makes a big difference, and the sense of presence is significant.
And a real PC game or simulator (not one of those novelty cellphone things, or a “3d video”) gives you plenty of parallax cues. You can even look behind things.
In some games the parallax is wrong, though: the screens become your virtual eyes, such that when you move your head your eyes are effectively on stalks extending several centimetres from your real eyes. It’s especially annoying when you’re playing a game where you have a body and looking down puts your eyes in your chest.
Re: “To steal a phrase from The X files, I want to believe when it comes to 3D displays, but my eye condition notwithstanding, I think we still have a way to go.”
The problem is not really the technology (which is pretty much good enough already, some rare issues such as the inability to perceive stereo notwithstanding) but the fact that, for most consumers, VR and even AR don’t have convincing applications that would make them put up with the hassle of having to wear the goggles and manage batteries, cables, and whatnot.
Apart from games there is little else. Watching movies using these goggles is a pain (a full HD film in high-res stereo glasses looks *terrible*) and even the games are lacking: shooting monsters/zombies/whatever gets old fast, and there isn’t much other content available, especially not content one would want to come back to. Oh, and the vendors being keen to lock you into their proprietary walled gardens at every opportunity is not helping matters. Steam vs. Oculus vs. VivePort vs. the MS Store aside, just count how many 360 video “experience platforms” are being sold on each of these, each being nothing more than an app to make you buy crappy videos and pay yet another subscription.
The main area where VR thrives (and always has, even before Oculus, etc.) is the professional field. Professional applications are booming; people have more work than they can take on. Training applications, visualizations, medical applications, engineering, military stuff, you name it.
This has been going on since the 1990s, but the availability of cheap HMDs in recent years has made the market explode. It isn’t the flashy, hyped-everywhere stuff, though, so most don’t realize it exists. Something like a trash-removal training simulator just doesn’t sound that sexy, even when it is for a submarine where, if you screw up, you could sink the boat and kill the entire crew.
I think AR/MR will be the one with more growth. Truly a day-to-day thing. Even video games embrace the “reality” + computer-generated paradigm.
This reminds me of something I read yesterday from CES:
https://www.engadget.com/2019/01/08/intel-studios-volumetric-grease/
They used over 70 cameras to capture a performance, and shift the perspective depending on where the viewer is standing. If I had Jenny’s superpower of parallax 3D processing, this would probably look extra cool. With my ordinary stereo vision I don’t know if the effect would be sold quite as well.
Another point that I think has yet to be well-addressed is the focal distance of 3D objects. If your eyes are focusing on a single plane (i.e. a screen) it seems that it will be harder for your brain to believe that you’re truly in a 3D environment. You want to focus your eye differently for mountains in the distance than for the control panel right in front of your face. I feel like that’s one big thing that still leaves me feeling queasy when I use VR, in spite of the many other advancements. And it may be another piece of Jenny’s “in-brain software 3D vision” that she uses without realizing.
One of the things that pisses me off about 3D graphics is the focus. I’m fairly near-sighted, so a lot of things at distance are fuzzy. When people make 3D images or movies of real environments with cameras, they have a set focal distance. It’s really disorienting to look at a 3D image and feel unable to focus because the distance you are looking at is outside the camera’s focus. This, along with the parallax thing, makes 3D really obnoxious to look at. I think all of these things can be solved in software fairly easily with fully CGI’d environments (especially the focus thing).
I’ve been taking stereoscopic photos since 1984. Two things are paramount, and much more important than in flattie photography: resolution and depth of field. Naturally the two fight against each other.
Resolution is needed because the eye examines fine detail. I used ASA 64 slides, so the resolution in a 35mm frame was higher than 3000 by 5000 (100 ASA was noticeably worse).
Depth of field is needed since the eye naturally explores all the frame, and it is extremely awkward if you fail to force something into focus.
Wonder if that could be done by tracking iris and pupil measurements? Maybe there’s enough information about what you’re focusing on encoded in the outward appearance of your eyes to infer focal planes. Could be cool.
I’m pretty sure they’re doing it with the Magic Leap One headset.
They have two focal planes, and flip into the closer one based on eye tracking. When your eyes go more crossed, you’re looking at something closer.
That would be easier, but I wonder if you could get more granularity if there were a way of inferring the state of the lens (in your eyeball) based on what is visible externally.
I’m sure there’s research on that.
https://www.researchgate.net/publication/8268241_Automated_detection_of_ocular_focus
More importantly they are using it to determine what to spend time rendering on.
Not looking at this super complicated tree… don’t bother with the 4k texture!
Many companies are working on higher-resolution and wider-field-of-view headsets, but much of their market (me included) might not have computers capable of running such a beast. You’ve got to remember, the computers are rendering two images from different angles at 90 Hz or more. I’m guessing they are using depth of field for this processing-saving effect too.
“No capes!” – Edna Mode
:)
It should be noted that scientists have successfully used VR headset exercises to help people regain stereoscopic vision after a lazy eye or corrected strabismus. So far, I don’t think any of those techniques has escaped the lab.
https://games.slashdot.org/story/13/04/24/0313213/play-tetris-to-fix-your-lazy-eye
(and for me, shadows on objects play a much greater role than my memory of their shapes)
Funny…
I’ve never heard the term “strabismus” before, and now I’ve come across it twice in a matter of days.
The other occurrence was in a presentation from 35C3 titled “hacking how we see” (I didn’t look up a link to it, but it should be easy to find through a search). That presentation might be quite interesting for the author.
Thanks, I must look for it.
I was going to post it as well; it was a very good talk.
It’s actually really easy to experience 3D with only one eye, even for a person who’s always had two normally functioning eyes: just close one eye and move your head back and forth. This also demonstrates part of why having six degrees of freedom in a VR headset is a huge leap over just three, and why the first-generation Oculus dev kit was such a letdown. If you don’t normally have stereoscopic vision, being able to move your head to get a different view is really important.
I made a pair of mono-glasses so that my girlfriend, who can’t perceive 3D, also could join when our friends and us went to 3D cinema.
She says it works great!
http://tim.gremalm.se/monoglasses-accessibility-for-people-having-problem-with-3d-cinema/
Another vision problem that causes issues with “3D” is when you have greatly varying amounts of near or far sightedness. I take a -5.50 in my right eye but only -3.00 in my left eye. They’ve changed some over time, mostly the right eye getting worse. I didn’t have proper depth perception until getting contact lenses at 17. Been wearing contacts exclusively for 30 years now. With glasses I get headaches and under or over reach for things because I can’t tell how far away they are.
I recall when I first got contacts that everything ‘popped’ like the world had suddenly acquired new dimension. I was finally able to swing a racquet and hit the ball with the strings instead of whiffing past the end or hitting with the top of the handle. Same thing with volleyball, hit ball with hand instead of my arm or missing it.
Odd thing is with both eyes I have pretty sharp distance vision but a bit of blur with either eye closed. I do have some minor astigmatism that’s not corrected.
As with most people in their 40’s, now I’m getting presbyopia. Good thing I can adjust the text size in the Kindle app on my phone. I used to be able to focus as close as 1″ with my right eye by pushing the contact lens off to one side. Was very handy for reading tiny print on chips.
I don’t have problems with the RealD-3D movies that use left and right circular polarized light, though I likely would if I wore glasses.
There was a talk about strabismus and the use of VR goggles at the 35C3. I guess that might be interesting for you, Jenny, and the other strabismics here: https://media.ccc.de/v/35c3-9370-hacking_how_we_see
Based on the comments above I’d just like to point out that the writer will be able to see 3D with a normal VR headset; no special headset needed. People whose eyes don’t align (or who have just one eye) perceive 3D through motion, and their brains keep a more detailed map of depth than most of ours, to help fill in the blanks when they’re not moving.
To be blunt, she’s not a tripod with two cameras, she’s a moving human. Since her brain controls the movement, it’s not very hard for it to extrapolate depth, even in a VR headset.
My mom only has one eye, and she experiences VR as 3D, while 3D movies/TV don’t work for her (because they don’t track the user’s head).
This year Leia and Ultra-D TVs and devices will arrive in stores (the Red Hydrogen One already uses Leia). They allow you to see holographic images (inside the screen), and 3D-blind people can still see depth by moving: as you move, you can see other angles, just like in reality. It’s like an open window to the world.
Strabismus is often correctable with prescription eyeglasses that include angular correction, usually with a wedge prism of the correct power and angle in addition to any other needed corrections, each of which is normally configured as a part of the lenses themselves. An ophthalmologist can test for this correction. Most of the replies above show a need for visual education. This information has been known for many years. A good start may be to read the books and proceedings that are all available for free in the Virtual Stereoscopic Library: http://www.stereoscopic.org/library/ I would recommend first reading the McKay book, since it is written in a very simple easy-to-understand manner, with many of the words and terms defined. These definitions are very useful not only for this book, but also for understanding the other books and proceedings, which tend to be more technical.
Many of the comments on this subject indicate that several people do not appreciate the importance of stereoscopic 3-D vision. While it is true that stereoscopic images are often created incorrectly, some perception of stereoscopic depth is usually much better than none at all. Most images currently seen are in monoscopic 2-D, which completely lacks one dimension, the Z axis, thereby being an extreme distortion. With 2-D, everything from the closest object to the furthest, sometimes even all of the way to infinity, is squashed down to a flat plane, so that everything is the same distance, whatever the distance of the screen or other image plane (such as an analog film slide pair or a stereo card) happens to be. No matter what other monoscopic depth cues are present, 2-D is always a severe unnatural distortion of reality.