After seeing the cheap transparent OLED displays that have recently hit the market, you might have thought of using them as an affordable way to build your own wearable display. To save you the inevitable disappointment that would result from such a build, [Zack Freedman] took it upon himself to test out the idea, and show why transparent wearable displays are harder than they look.
He put together a headband with an integrated microcontroller that holds the transparent OLED over the user’s eye, but unfortunately, anything shown on the display ends up being more or less invisible to the wearer. As [Zack] explains in the video after the break, the human eye is physically incapable of focusing on any object at such a short distance. Contrary to what many people might think, the hard part of wearable displays is not the display itself, but rather the optics. For a wearable display to work, all the light beams from the display need to be focused into your eyeball by lenses and/or reflectors, without distorting your view of everything beyond the lens. This requires lightweight, distortion-free collimators and beam splitters, which are expensive and hard to make.
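To put rough numbers on the problem, here is a minimal thin-lens sketch in Python. The 2 cm eye relief and the target image distances are illustrative assumptions only, not measurements from the build, and a real combiner involves far more than a single ideal lens.

```python
# Minimal thin-lens model of the problem. The 2 cm display distance and the
# target image distances below are illustrative assumptions, not numbers
# from the build; a real combiner is much more than one ideal lens.

def lens_power_diopters(display_dist_m, virtual_image_dist_m):
    """Power of a thin lens that takes the OLED at display_dist_m (on the
    eye side) and forms a virtual image at virtual_image_dist_m.
    Thin-lens equation 1/f = 1/v - 1/u, with distances on the eye side
    taken as negative."""
    u = -display_dist_m        # the real object: the OLED panel
    v = -virtual_image_dist_m  # the virtual image, same side as the object
    return 1.0 / v - 1.0 / u

display = 0.02  # OLED roughly 2 cm from the eye (assumed)
for target in (0.25, 1.0, float("inf")):
    p = lens_power_diopters(display, target)
    print(f"virtual image at {target} m -> lens power {p:+.1f} D")
# A relaxed eye focuses near infinity and can accommodate only around ten
# diopters even when young; a bare screen 2 cm away would demand about
# 50 D, which is why it is just a blur without collimating optics.
```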
While these transparent OLEDs might not make practical heads-up displays, they are still a cool part for projects like a volumetric display. It’s certainly possible to build your own smart glasses or augmented reality glasses; you just need to focus on getting the optics right.
Correct, you need an optical system that projects the image (virtual image) at least 25cm away from the eye. https://cdn.hackaday.io/images/2658771475041485969.jpg
As a severely nearsighted person, I beg to differ. I can focus perfectly at 10 cm! 🤪 This project could *almost* work for me, I just need to mount the display at the end of a duck-billed hat!
But you really need to focus the overlay out at whatever distance the person will be looking at in _real_ reality.
Infinity is nice if you’re outside, a few meters is good inside, and maybe arm’s length is a good minimum?
Getting it to change focal distance with the person’s eyes would be a sweet trick. (Left as an exercise for the ambitious commenters.)
For practical purposes, infinity and about 3m are the same as far as the human eye’s depth of field is concerned.
That would be correct if we were speaking of parallax, but this is only monocular, so only the accommodation portion of the depth cues comes into play, and that is entirely dependent on the individual eye’s accommodation capability.
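Either way, the raw numbers are easy to check: accommodation demand is just the reciprocal of the viewing distance in metres. A quick sketch (the ~±0.3 D depth-of-focus figure is a commonly quoted ballpark, not something stated in this thread):

```python
# Accommodation demand is 1/distance (in metres), expressed in diopters.
distances_m = [float("inf"), 10, 3, 1, 0.5, 0.25]
for d in distances_m:
    demand = 0.0 if d == float("inf") else 1.0 / d
    print(f"{d:>6} m -> {demand:.2f} D")
# Infinity (0.00 D) and 3 m (0.33 D) differ by only a third of a diopter,
# roughly the often-quoted ~±0.3 D depth of focus of the eye, so they look
# essentially the same. 25 cm, by contrast, demands a full 4 D.
```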
We had a few functional prototypes of a variable focus system at an augmented reality company I worked at a few years ago. Each display has a pair of linked, adjustable, but inversely proportional lenses. The trick is to de-focus the world, place the virtual image down, and then re-focus the world. This allows you to shift the focal depth of the virtual image without affecting the rest of the world.
We could change the focus of the image in under 25ms – so we could see when you started blinking, wait until your eyes were closed, then change the focus before you opened your eyes again – which was absolutely bananas and fascinating to see.
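The optical layout isn’t spelled out here, but one common way to get this behaviour (and perhaps what the inversely proportional pair refers to) is a tunable negative lens on the eye side matched by an equal-and-opposite lens on the world side, so real-world light sees roughly zero net power. A toy model with invented numbers:

```python
# Toy model of a variable-focus AR stack (a speculative reconstruction, not
# necessarily the commenter's actual design): the display/combiner emits
# collimated light (virtual image at infinity), an adjustable negative lens
# between combiner and eye pulls that image in to a finite distance, and a
# matched positive lens on the world side cancels the effect for real-world
# light, so the scene beyond stays in focus.

def image_distance_m(eye_side_power_diopters):
    """Where collimated display light appears to come from after passing
    through an eye-side lens of the given (negative) power."""
    p = eye_side_power_diopters
    return float("inf") if p == 0 else -1.0 / p

for eye_side in (0.0, -0.5, -1.0, -4.0):   # diopters (illustrative values)
    world_side = -eye_side                  # compensator: equal and opposite
    print(f"eye-side {eye_side:+.1f} D + world-side {world_side:+.1f} D "
          f"-> virtual image at {image_distance_m(eye_side):.2f} m, "
          f"real world unchanged")
```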
That’s awesome!
How long until we see sensors that can figure out how you’re focusing your eyes, and adjust accordingly?
It’s already relatively easy to do with eye-tracking cameras using vergence. Figure out the vector of where each eye is pointing, and that’ll give you a fairly accurate focal plane, especially sub-2 meters.
I haven’t seen anything that can measure what the eye is doing with regards to accommodation – it’s way harder to figure that out, since there’s a lot of individual calibration necessary, and all of the moving bits are internal to the eye so it’s much harder to see what’s going on.
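For the vergence part, the geometry is simple enough to sketch. The symmetric-gaze formula and the 63 mm IPD below are illustrative assumptions; real trackers intersect per-eye gaze rays in 3D.

```python
# Vergence-based depth estimate (simple symmetric-gaze geometry). With
# interpupillary distance IPD and vergence angle theta between the two
# gaze vectors, the fixation distance is roughly d = (IPD/2) / tan(theta/2).
import math

def fixation_distance_m(ipd_m, vergence_deg):
    theta = math.radians(vergence_deg)
    return (ipd_m / 2.0) / math.tan(theta / 2.0)

IPD = 0.063  # ~63 mm, a typical adult IPD (assumed)
for angle in (0.5, 1.0, 2.0, 4.0, 7.0):
    print(f"vergence {angle:4.1f} deg -> ~{fixation_distance_m(IPD, angle):5.2f} m")
# Small angular errors matter little up close but blow up fast beyond a few
# metres, which is why the estimate is most accurate sub-2 m.
```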
> inversely proportional
Can you elaborate on that? I don’t think I have seen any design involving a concave lens as a significant element
Of course the solution is to make yourself extremely near-sighted using a contact lens in one eye.
Could you use a contact lens to make you extremely nearsighted, then have a lens to correct for it, with the screen between the two?
If not, it might be more effective to just use an eyepatch on that eye, since with such severe nearsightedness it’ll be quite hard to see anyway, and use a higher resolution (and color!) opaque screen, possibly with a camera outside the eyepatch to have an image to put underneath the overlay. If that part is done carefully enough you might even be able to retain depth perception.
It /is/ an augmented reality display. Just the other way around: /I/ can see this guy’s eye augmented by the info on the OLED :-)
need a one way mirror on this
I can build whatever I wanna build.
Technically, if you had a high enough resolution display, you could combine it with a microlens array to produce a near-eye light field display.
For added points, a high resolution spatial light modulator (e.g. an LCD) can be used directly to shape transmitted light into a viewable image by taking advantage of Fourier optics. Basically, the LCD displays the Fourier transform of an image at infinity and the eye does the rest. Unfortunately this only works with monochromatic light, so a filter might be necessary.
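A rough numerical sketch of that idea, using a scalar Fourier-optics model. The test pattern and sizes are arbitrary, and a real phase-only modulator would need something like the Gerchberg-Saxton algorithm rather than a raw FFT:

```python
# Simplified, monochromatic, scalar model: a lens (including the eye's)
# optically performs a Fourier transform between its front focal plane and
# its back focal plane, so if the modulator displays the Fourier transform
# of the target image, the eye "computes" the inverse transform and sees
# the image at infinity.
import numpy as np

target = np.zeros((64, 64))
target[20:44, 30:34] = 1.0                            # a toy test pattern

slm_pattern = np.fft.fftshift(np.fft.fft2(target))    # what the modulator shows
# (a phase-only device would need Gerchberg-Saxton or similar to
# approximate this complex field with phase alone)

seen = np.fft.ifft2(np.fft.ifftshift(slm_pattern))    # what the eye reconstructs
print("reconstruction error:", np.max(np.abs(np.abs(seen) - target)))
```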
Pretty long article for “because you can’t focus on it”
You think that’s long, you should watch the whole video!
But it’s also worth noting that there are a few multi-billion-dollar firms working on the “you need to focus on it” part. Doing the optics right, with the right field of view and resolution, while making it wearable, comfortable, and/or transparent, is non-trivial.
Don’t hate the player, hate the game.
Like, share, and subscribe. Don’t forget to ring that bell.
I don’t see not being able to see through it as a problem, so the solution of a lens between the eye and the screen looks fine to me.
Chopper pilots, drivers, tank gunners, and maybe someday, pitchers & catchers.
https://hackaday.io/project/179843-low-cost-augmented-reality-vr-for-microcontroller
You can get around that with some caveats (it’s still not easy, just possible). From a mathematical standpoint the blurring can be represented as a convolution, so you can apply a deconvolution filter (ideally calculated per individual for optimal results), displaying an image that looks blurred to an outside observer but appears sharp to the individual wearing the screen. It gets washed out because the deconvolution filter produces negative values that can’t be displayed, so you have to normalize it to the capability of the display (unless you have a holographic/3D display). The final result is a washed-out image that is nonetheless sharp. It’s been done; I think the hardest part of making it useful is a cheap and easy method or device to get the right deconvolution.
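A 2-D image-plane sketch of that idea, using a Wiener-style regularized inverse filter; the Gaussian kernel stands in for an individual’s measured point spread function, and all numbers are illustrative:

```python
# Pre-compensation by regularized frequency-domain division (a Wiener-style
# filter). The Gaussian blur is a stand-in for a person's measured PSF.
import numpy as np

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def precompensate(image, psf, eps=1e-2):
    """Divide by the blur's transfer function (regularized), so that
    blurring the result reproduces the original as closely as possible."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    comp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(H) /
                                (np.abs(H)**2 + eps)))
    # Displays can't emit negative light: clipping/renormalizing here is
    # exactly the washed-out-contrast penalty described above.
    comp -= comp.min()
    return comp / comp.max()

img = np.zeros((128, 128))
img[48:80, 48:80] = 1.0                        # toy target image
psf = gaussian_psf(128, sigma=2.0)             # stand-in for the eye's blur
display_frame = precompensate(img, psf)        # what the screen would show
```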
This is a 2D view, but the light rays propagate in 3D, so a simple convolution isn’t enough; you need to capture the direction of each light ray, or equivalently, the phase information.
Maybe just use lasers for unidirectional light. Shining lasers directly into the eye is always the best idea.
Creating an “unblur” filter is harder than it sounds.
Because information is lost when something is blurred, it is really hard to set things up so that you gain clarity (information) through the process of blurring.
Additionally, changes to one source pixel have effects on many output pixels, so you have to somehow avoid conflicting pixels.
You could if you had robot eyes…
I always wondered why robot eyes, like the T1000 Terminator’s, had overlays on their vision systems to collect and process data. Why didn’t the T1 designers have all that data processed directly in the main chip? What was Miles Dyson thinking?
Just plug directly into the eye nerves, problem solved.
or use TMS with sufficient resolution to stimulate the visual cortex.
Agreed with the very nearsighted. This would work great for me mounted to a helmet for 10cm or less distance, support and stability. Market it first to those for whom screens are the biggest pain, the myopic!
I’m very nearsighted, and can read this article on my phone at about an inch from my nose (probably less than 5cm from my eye, or about double the distance my glasses sit at).
Though, it would be rather hard to see anything beyond the screen given how nearsighted I am, so it’s not really a win there if you can’t put some corrective lenses on the far side of the screen.
Speaking as someone who knows very little about optics, presumably putting a normal glasses-style lens on the outside of the screen would work? (Stronger than your normal prescription: since you’ll be focusing on something at the near limit of your eye, with your normal prescription things would have to be at the near-limit-of-focus-with-glasses-on to be in focus, but going more negative on the diopter count should bring more distant things into focus even when you’re focussing near.)
(although I’ve just googled the focussing range of a normal eye, it’s about 10 diopters, so that’s gonna be quite a chunky lens)
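For what it’s worth, the thin-lens arithmetic for that extra minus lens is short (ignoring the lens-to-eye distance and assuming your prescription already corrects you to normal):

```python
# If your (corrected) eye is focused on a screen d metres away, a world-side
# lens of power -1/d makes objects at infinity appear at that same distance.
for screen_dist_m in (0.25, 0.10, 0.05):
    extra_power = -1.0 / screen_dist_m
    print(f"screen at {screen_dist_m*100:.0f} cm -> extra lens of about {extra_power:.0f} D")
# A screen at 10 cm needs roughly -10 D on top of your prescription, in line
# with the ~10 diopter estimate above: quite a chunky lens indeed.
```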
I remember skimming through a paper about tracking the eye’s accommodation in order to present the virtual image at the same distance. I think it worked like an optician’s autorefractor but with infrared light.
You don’t even need a transparent display. Process the real world through a tiny camera, project it in front of the eye, in focus, and do the data overlay on the camera image in software. Boom!
Apple says ello in 2023
I wonder if they could work with a 10-12cm wide transparent OLED, mounted far enough away from the eyes that they can be focused on, and a fresnel lens in between the OLED and your eye, similar to the fresnel lenses used in VR headsets. Oh wait… then the rest of the world would be out of focus. Darn.
Just put another lens at the same distance to put the world back in focus. (Then a prism to flip the image back)