Smart Contact Lenses Put You Up Close To The Screen

Google Glass didn’t take off as expected, but — be honest — do you really want to walk around with that hardware on your head? The BBC recently covered Mojo, a company developing smart contact lenses that not only correct vision but can also show a display. You can see a video from CNET on the technology below.

The lenses have microLED displays, smart sensors, and solid-state batteries similar to those found in pacemakers. The company claims to have a “feature-complete prototype” and is going to start testing, according to the BBC article. We imagine you can’t cram much of a battery into a contact lens, but presumably that’s one of the things that makes this sort of tech so difficult to develop.

The article mentions other smart contacts under development, too, including a University of Surrey lens that can monitor eye health using various sensors integrated into the lens. You have to wonder what this would be like in real life. Presumably, the display turns off and you see nothing, but it is annoying enough having your phone beep constantly without also getting messages across your field of vision all the time.

It seems like this is a technology that will come eventually, of course. If not this time, then sometime in the future. While we usually think the hacker community should lead the way, we aren’t sure we want to hack on something that touches people’s eyeballs. Not everyone shares that reluctance, though. As for us, we’ll stick with headsets.

45 thoughts on “Smart Contact Lenses Put You Up Close To The Screen”

    1. Probably just put a coil in some glasses frames and use that for beamed power and near-field high-speed data. I don’t think I’d want a battery in the thing, especially a lithium-ion cell; I think supercaps might be a better option as a power buffer.
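
      For a rough sense of what a supercap buffer buys you, here’s a back-of-the-envelope sketch. Every number in it is an assumption chosen for scale, not anything Mojo has published:

      ```python
      # How long could a tiny supercap run a lens display between top-ups?
      # Every value here is a guess for scale, not a published spec.
      cap_farads = 0.010           # 10 mF chip supercap (assumed)
      v_full, v_cutoff = 2.7, 1.8  # usable voltage window (assumed)
      load_watts = 0.5e-3          # 0.5 mW display-plus-radio budget (assumed)

      energy_j = 0.5 * cap_farads * (v_full**2 - v_cutoff**2)
      print(f"usable energy: {energy_j * 1000:.1f} mJ")           # ~20.3 mJ
      print(f"runtime at 0.5 mW: {energy_j / load_watts:.0f} s")  # ~41 s
      ```

      Under those assumptions, a supercap alone only rides through seconds of dropout; the beamed power link would have to do the heavy lifting.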

        1. A glasses-mounted display would require significantly higher resolution to accomplish what a contact-lens-mounted display could, because the contact lens pixels are ALWAYS DIRECTLY in the wearer’s field of view. With the glasses-mounted display you either have high resolution in a small eye box, or a large eye box with reduced resolution. Simulated foveation (tracking the eyes and rendering the portion of the FOV you are looking at in higher detail than the periphery) makes for a reasonable accommodation in a large-FOV, reduced-resolution, stretched-screen arrangement, but it cannot hold a candle to what could be accomplished with eye tracking and a contact-lens-mounted display. That combination leads not only to a persistent display but to one which can be virtually extended to encompass the entire potential FOV of the wearer. Of course, this all assumes that contact lens display resolution can be brought as high as traditional display technologies… which may be a bit off still… but the future’s so bright, even if you have to wear contacts and shades!
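
          To put numbers on that resolution gap, here’s a quick sketch. The 60 pixels/degree figure is the usual rule of thumb for 20/20 acuity; the rest are assumptions for illustration:

          ```python
          # Pixel budgets: a foveal patch on the eye vs. a glasses panel
          # covering its whole FOV at full detail. All numbers assumed.
          px_per_deg = 60          # rule-of-thumb match for 20/20 acuity

          fovea_deg = 2            # a lens display only has to cover the fovea
          fovea_px = (fovea_deg * px_per_deg) ** 2

          glasses_fov_deg = 40     # typical waveguide diagonal today (treated as square)
          glasses_px = (glasses_fov_deg * px_per_deg) ** 2

          print(f"foveal patch:   {fovea_px:,} px")    # 14,400
          print(f"40-deg glasses: {glasses_px:,} px")  # 5,760,000
          ```

          That’s roughly a 400× difference in pixels that have to physically exist, if those assumptions hold.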

          1. Whatever the technology, the limiting factor is going to be the durability and general hygiene issues of the lens. Deposits build up and visual acuity drops. It would be a long time before these barriers were removed. I get visual auras due to a vitreous detachment, a natural process of ageing. Those auras are annoying enough without adding other ‘interference’ into the FOV.

      1. I read about this in the ’90s: some company made a contact lens with an AR screen for divers. The control panel was fitted on the lower arm. They went silent for decades, and now it’s as if this is a new invention. When companies like this go silent, it usually means the DoD has snapped them up.

  1. Forget putting this in contacts.
    Put this tech in glasses!

    Forget the camera; don’t need it. But link this tech to my phone and just gimme a heads-up display for things like directions, the above boarding info, etc.

    And just keep the display simple, so it’s not blocking eyesight… and yup, I suppose you’d need a driving mode where it’s either off or really out of the way, so as not to block your view of the road.

    1. Different tech needed for glasses.

      For a contact-lens-embedded display, you only need to cover the foveal region (a circle of roughly 2°), but the display is locked to that region wherever you look. The visual system will ‘fill in’ peripheral imagery based on the areas you looked at before (but ONLY if the eye is accurately tracked!), giving full field-of-view image coverage. The hard parts are getting the display in focus (image at infinity, display panel on the surface of the eye), accurate and rapid eye tracking, and not obscuring normal vision.

      For glasses, your display needs to cover the entire desired field of view, and that’s a BIG challenge for current optics. Holographic waveguides are stretching the limits of materials at barely 40° diagonal coverage. Birdbath optics (collimated displays like those used on flight simulators, but strapped to your head) are enormous in comparison, but at least they work. You also need to render the entire scene rather than just a tiny portion of it, increasing computational load (see the sketch below).

      Neither solution is close to ready for prime time at the moment. AR today is in the same position as VR was in the 90s boom: we know what we need to achieve, and we know what the solutions should look like, but we don’t have the capability to actually do it yet.
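
      A crude sketch of that rendering-load difference follows. The peripheral-detail figure and the square-FOV approximation are assumptions for illustration, not measurements:

      ```python
      # Render load: uniform full-detail FOV vs. eye-tracked foveation.
      # All numbers are assumed for scale.
      px_per_deg_hi = 60    # foveal detail (20/20 rule of thumb)
      px_per_deg_lo = 10    # acceptable peripheral detail (assumed)
      fov_deg = 100         # near-full human FOV, treated as square

      uniform  = (fov_deg * px_per_deg_hi) ** 2
      foveated = (2 * px_per_deg_hi) ** 2 + (fov_deg * px_per_deg_lo) ** 2

      print(f"uniform:  {uniform:,} px/frame")       # 36,000,000
      print(f"foveated: {foveated:,} px/frame")      # 1,014,400
      print(f"ratio:    {uniform / foveated:.0f}x")  # ~35x
      ```

      Under those assumptions, reliable eye tracking is worth roughly a 35× cut in pixels rendered per frame.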

      1. A version without a cam might be more acceptable. I already dislike that every phone now has a cam; we’re already surrounded by phoneholes.

        At least they aren’t permanently recording like body-cams, but I think we should disarm a bit when it comes to camera presence in everyday life.

        1. Yes, we are. However, I don’t think we should stop adding more cameras, not only because we can, but also because that’s the next step in the information era. Pedestrian dashcams are useful, and why shouldn’t we be able to record what we already see? It will be abused, sure, but we don’t lack ways to abuse cameras, even without glassholes around.

          1. @Ostracus, I see how we can make a parallel, but I was thinking more along these lines: if smartwatches are phones on an arm, smart glasses are phones on the head. It’s less about recording crimes than about easily recording something you would otherwise have taken out your phone to record.

        2. I like cameras in public places. People tend to be on their better behavior if they know there is a good chance that what they get caught doing could be shared. Not many want to be national news, or the next viral video. There will be some wanting that sort of attention, though. But for crime, it’s a good bet that they’ll get caught, prosecuted, and convicted based on video. It’s the potential for getting away with something illegal that encourages many. Cameras are a great deterrent. There are places cameras don’t belong, though, and how pictures and video are used should be regulated a little. Public implies no privacy, but people shouldn’t be allowed to profit without consent.

    1. They are using solid-state batteries, as used in pacemakers as well. Besides, solid-state batteries won’t leak or explode. You can expect a battery used inside someone’s body to be under very strict safety regulations already.

  2. It’s not at all clear from the video or their website how the optics can work. The eyeball focuses on things far away. To have an image source at the eyeball surface requires optics at that location to do two things:
    1. It must have its own optics to produce a virtual image at a distance the eyeball can focus on. This requires lenses and non-trivial distance between the light (image) source and the lens element. It’s tough to see how they can do this in the sub-millimeter that’s available in the thickness of a contact lens (see the sanity check below).
    2. The optical element that produces the image, virtual or not, must subtend the visual field: a physically large element must intercept the visual field, even if that element is semitransparent. How do they do that here?

    Do the light-emitting elements actually transmit sideways, and reflect off the curved front surface of the device?

    More info needed. It looks a lot like smoke and mirrors so far.
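
    A quick thin-lens sanity check on that first point; the thickness and pixel-pitch numbers here are assumptions, not anything from Mojo:

    ```python
    import math

    # To collimate light (virtual image at infinity), the emitter must sit
    # at the lens's focal plane, so f can't exceed the available thickness.
    thickness_mm = 0.5  # generous guess at usable lens thickness (assumed)
    f_m = thickness_mm / 1000
    print(f"required lens power: {1 / f_m:.0f} D")  # 2000 diopters

    # Angular size of one pixel through that lens, assuming a 2 um pitch.
    pixel_m = 2e-6
    pixel_deg = math.degrees(math.atan(pixel_m / f_m))
    print(f"one pixel subtends ~{pixel_deg:.2f} deg")  # ~0.23 deg

    # Pitch needed to hit ~60 px/deg of normal acuity at that focal length.
    needed_pitch_m = f_m * math.tan(math.radians(1 / 60))
    print(f"pitch for 60 px/deg: {needed_pitch_m * 1e9:.0f} nm")  # ~145 nm
    ```

    Under those assumptions you’d need a 2000-diopter lens and a sub-150 nm pixel pitch to match normal acuity, which is why skepticism here seems warranted.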

      1. You still need some kind of optics to do the beamforming. And the scanning, if that’s how you do it (though they almost certainly don’t scan anything physical).

        1. As far as I know, it’s more like an array of lasers than an array of LED screens. The image is in the form of already-collimated light, projected onto the retina more or less directly.

          1. You still need optics: lasers or not, you first need to make the light collimated, and second, to point it in the right direction: each light source point must map to a different point on the retina. This *requires* some kind of optics. Tiny, holographic, whatever, it needs *something*.

            The question is what kind of magic are they doing to make it so thin. If the undisclosed magic isn’t real, then it’s fraud.

            (And, no, a laser isn’t naturally collimated, especially tiny chip-scale ones. Take the lens off a laser pointer to see how broad the beam normally is.)
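
            For scale, here is the far-field divergence of a Gaussian beam from a tiny emitter; the wavelength and aperture size are assumed values:

            ```python
            import math

            # Far-field half-angle of a Gaussian beam: theta = lambda / (pi * w0)
            wavelength_m = 650e-9  # red laser diode (assumed)
            w0_m = 1e-6            # ~1 um emitting spot, chip-scale laser (assumed)

            theta_rad = wavelength_m / (math.pi * w0_m)
            print(f"half-angle divergence: {math.degrees(theta_rad):.1f} deg")  # ~11.9 deg
            ```

            A beam spreading roughly 12° to each side is anything but collimated, so some optical element has to do that work.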

  3. “Google Glass didn’t take off as expected, but — be honest — do you really want to walk around with that hardware on your head?”
    Being honest, Google Glass looks a lot less goofy than a baseball cap worn backwards.

      1. Neuralink is too scary for me.
        I think I will only need an RS-232-bandwidth link. My drop-down eye options would be in a heads-up display, with the right eye showing “TEA” or “BISCUITS”. The left-eye drop-down menu would be “Take over the world”.

  4. Surely the best power source for this would be solar or thermoelectric, coupled with some sort of capacitor. Hopefully this won’t just turn out to be another platform for advertisers and Big Brother to take advantage of people.
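
    Solar harvesting on a lens is at least easy to bound. All of these figures are assumptions for illustration:

    ```python
    # Rough solar harvest from ~1 cm^2 of exposed lens area. Numbers assumed.
    area_m2 = 1e-4        # ~1 cm^2 (optimistic for a contact lens)
    efficiency = 0.20     # decent photovoltaic cell (assumed)

    sun_w_per_m2 = 1000   # direct sunlight irradiance
    indoor_w_per_m2 = 2   # ~500 lux of indoor lighting, crudely converted

    print(f"direct sun: {sun_w_per_m2 * area_m2 * efficiency * 1e3:.0f} mW")     # ~20 mW
    print(f"indoors:    {indoor_w_per_m2 * area_m2 * efficiency * 1e6:.0f} uW")  # ~40 uW
    ```

    Tens of milliwatts in direct sun looks workable, but tens of microwatts indoors would leave very little for a display and radio.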

    1. “Hopefully this won’t just turn out to be another platform for advertisers and big brother to take advantage of people.”

      But, we know it will be (sigh!)

  5. This reeks of one of those fraudulent Kickstarter projects. It just requires too many technologies that don’t exist yet: batteries the size of sesame seeds, displays that can somehow be in focus despite sitting on the lens of your eye. Not to mention it requires ultra-cutting-edge electronics.

      1. No. They have not shown a product to anyone.
        They have shown a convincing simulation of a display. A rigged demo, if you want to be blunt.
        It’s not even an indication of a good proof of concept, because of what they don’t show — they don’t show how the optics can work, and never show the device actually in very close proximity to the eyeball surface.
        Perhaps it’s a subtle distinction to the “take my money” folks, but an important one — it’s what makes the difference between an idea and a product. Or a wishful demo and a fraud.
