It’s become something of a running joke that any device or object will be made ‘smart’ these days, whether it’s a phone, TV, refrigerator, home thermostat, headphones or glasses. This generally means somehow cramming a computer, display, camera and other components into the unsuspecting device, with the overarching goal of making it more useful to the user without impacting its basic functionality.
Although smartphones and smart TVs have been readily embraced, smart glasses have always been a bit of a tough sell. Part of the problem here is of course that most people do not wear glasses at all, whether because their vision does not require correction or because they wear e.g. contact lenses. This means that the market for smart glasses isn’t immediately obvious. Does it target people who wear glasses anyway, people who wear sunglasses a lot, or will this basically move a smartphone’s functionality to your face?
Smart glasses also raise many privacy concerns, as their cameras and microphones may be recording at any given time, which can be unnerving to people. When Google launched its Google Glass smart glasses, this led to the coining of the term ‘glasshole’ for people who refuse to follow perceived proper smart glasses etiquette.
Defining Smart Glasses

Most smart glasses are shaped like rather chubby, often thick-rimmed glasses. This is to accommodate the miniaturized computer, battery and generally a bunch of cameras and microphones. Usually some kind of projection system is used to either project a translucent display onto one of the lenses, or, in more extreme cases, to project the image with a laser directly onto your retina. The control interface can range from a smartphone app to touch controls, to the new ‘Neural Band’ wristband that’s part of Meta’s collaboration with Ray-Ban, in a package that some might call rather dorky.
This particular device crams a 600 x 600 pixel color display into the right lens, along with six microphones and a 12 MP camera, in addition to stereo speakers. Rather than an all-encompassing display or an augmented-reality experience, this is more of a display that reportedly appears to float when you glance somewhat to your right, taking up 20 degrees of said right eyepiece.
Perhaps most interesting is the Neural Band here, which uses electromyography (EMG) to detect the motion of muscles in your wrist via their electrical signals, determining the motion that you made with your arm and hand. Purportedly you’ll be able to type this way too, but this feature is currently ‘in beta’.
Slow March Of Progress

When we compare these Ray-Ban Display smart glasses to 2013’s Google Glass, when the Explorer Edition was made available in limited quantities to the public, it is undeniable that the processor in the Ray-Bans is more powerful and that it has double the Flash storage, but the RAM is the same 2 GB, albeit faster LPDDR4X. The display is slightly higher in resolution and probably slightly better in fidelity, but this still has to be tested.
Both have similar touch controls on the right side for basic control, with the new wristband apparently being the major innovation here. This just comes with the minor issue of now having to wear another wrist-mounted gadget that requires regular charging. If you are already someone who wears a smartwatch or similar, you had better have some space on your other wrist to wear it.
One of the things that Google Glass and similar solutions – including Apple’s Vision AR gadget – have really struggled with is practical use cases. As cool as it can be to have a little head-mounted display that you can glance at surreptitiously, with nobody else around you able to glance at the naughty cat pictures or personal emails currently being displayed, this never was a use case that convinced people to buy their own Google Glass device.
In the case of Meta’s smart glasses, they seem to be banking on Meta AI integration, along with real-time captions for conversations in foreign languages. The awkward point here is of course that none of these features is beyond a run-of-the-mill smartphone, which can do even more, with a much larger display.
Ditto with the on-screen map navigation, which overlays a Meta Maps view akin to Google’s and Apple’s solutions to help you find your way. Although this might seem cool, you will still want to whip out your phone when you have to ask a friendly local for directions after said route navigation feature inevitably goes sideways.
Amidst the scrambling for a raison d’être for smart glasses, it seems unlikely that society’s attitude towards ‘glassholes’ has changed either.
Welcome To The Panopticon

The idea behind the panopticon design, as created by Jeremy Bentham in the 18th century, is that a single person can keep an eye on a large number of individuals, none of whom can be certain whether they are being observed at any given moment. Although Bentham did not intend for it to be used solely in prisons and similar buildings, this is where it found the most uptake. Inspired by this design, we got more modern takes, such as the telescreens in Orwell’s novel Nineteen Eighty-Four, whose cameras are always on, but you cannot be sure that someone is watching that particular screen.
In today’s era, where cameras are basically everywhere, from CCTV cameras on and inside buildings to doorbells and the personal surveillance devices we call ‘smartphones’, there are also areas where people are less appreciative of having cameras aimed at them. Unlike a smartphone, where it’s rather obvious when someone is recording or taking photos, smart glasses aren’t necessarily that obvious. Although some do light up an LED or such, it’s easy to miss this sign.
One article describes a TikTok video by a woman who was distraught to see that the person at the wax salon where she had an appointment was wearing smart glasses. Unless you’re actively looking and listening for the cues emitted by that particular brand of smart glasses, you may not know whether your waxing session is being recorded in glorious full-HD or better for later sharing.
This is a concern that blew up during the years that Google Glass was being pushed by Google, and so far it doesn’t appear that people’s opinions on this have changed at all. This makes it even more awkward when those smart glasses are the only prescription glasses that you have on you at the time. Do you still take them off when you enter a place where photography and filming are forbidden?
Dumber Smart Glasses
Although most of the focus in the media and elsewhere is on smart glasses like Google Glass and now Meta/Ray-Ban’s offerings, there are others too that fall under this umbrella term. Certain auto-darkening sunglasses are called ‘smart glasses’, while others are designed to act more like portable screens that are used with a laptop or other computer system. Then there are the augmented- and mixed-reality glasses, which come in a wide variety of forms and shapes. None of these are the camera-equipped types that we discussed here, of course, and thus do not carry the same stigma.
Whether Meta’s attempt will succeed where Google Glass failed remains to be seen. If the criterion is that a ‘smart’ version of a device enhances it, then it’s hard to argue that a smartphone isn’t much more than just a cellular phone. At the same time, the ‘why’ of cramming a screen and computer into a set of dorky glasses remains much harder to answer.
Feel free to sound off in the comments if you have a good use case for smart glasses. Ditto if you would totally purchase or have already purchased a version of the Ray-Ban Display smart glasses. Inquisitive minds would like to know whether this might be Google Glass’ redemption arc.
Considering how many countries in Europe have laws regarding dashcams and similar, with certain countries forbidding their use, or even having one stored in the car, when just passing through, I expect harsher laws against these kinds of glasses once they become more common. The rest of the world will likely see many glassholes.
Not that many at this moment. Unless you mean Portugal and Luxembourg.
Some countries even relaxed the restrictions after court trials…
Oh well the EU making everyone’s life more annoying with poorly thought-out and mostly ineffective regulations is a given for any product rollout. I’m convinced their bureaucrats just get off on it at this point.
I’m not an expert on EU regulations, but my impression has been that at least the GDPR has mostly been a net (pun intended) positive?
Has it? I’d like to hear the arguments in favor, but the major effect people seem to notice is that now they have to hit “accept cookies” on every single webpage they ever visit every time instead of simply having that happen anyway (but invisibly.)
Or you can go into an intentionally time-wasting and labyrinthine menu to “select the cookies you want” which approximately zero percent of people try more than once.
Except that labyrinthine is not obeying the GDPR, that’s US companies making shit harder. Denying the cookies is supposed to be as easy as saying yes.
And if I have to click no to every cookie every time, so be it. Or then again, I just go somewhere else.
I guess I’m part of your zero percent? I do it every time. Your concept of a labyrinth must be different from mine, as I find it only takes a few seconds to do.
The cookie law was part of the DPD from 1998, and the GDPR has superseded it since 2018.
So the cookie thing was active a decade before the GDPR, although the required simplification and required options are new/updated.
They are also largely ignored or incorrectly implemented by most sites, including EU sites.
Talking of which, they now require all phones to have a label on the packaging with a number of required information items, like how long the battery lasts and a repairability score. I recently watched a video of someone in Germany unboxing the iPhone 17 Pro, and it had that label, except it was inside the box and only accessible after breaking the seal.
It seems Apple WANTS to finance the EU but doesn’t want Tr*mp to find out I guess, so they do it through fines. Subtle! :)
Cookies are a mess, yes. They should have introduced a mandatory mechanism to automatically say NO, like what the DNT header tried to achieve, but mandatory: you have that set? No banner, just treat it as NO. But they didn’t. Too bad.
Besides cookies, there are multiple parts of the GDPR that are in effect and are good. For example, you must be able to download all your data using a SAR (subject access request). Also, if you want to delete your account and all your data, they can’t say no. I’ve used that multiple times.
Also, at least in my country, it forced many e-commerce sites to take security seriously (no e-shop here processes payment cards, so no PCI-DSS needed). The mere threat of a large fine improved the dire situation from before significantly.
Pretty hyperbolic claim: “Zero percent try more than once.” (I always choose no.)
However, there are extensions like consent-o-matic for doing this automatically and seamlessly without problems.
I just hit reject all. Fast and easy.
If sites did not use cookies and did not log connections, then you would not have those popups. I think you’re annoyed at the wrong thing (or maybe not).
Also, in the EU these popups look even more crazy: you get to drill down into which companies your visit gets shared with. It was around 300+ for some crappy third-rate news site. The effort and bandwidth needed to trade these bits of information is stupid. And like everything else, we pay for these “free” systems, just in a convoluted way.
How?
Because now, even if I am in the US, I have to click through an “accept all stupid cookies” box every time I visit every website. And it keeps doing it on many sites every time! Sometimes more than once in a session. I was OK with only EU citizens being annoyed, but no, it spread to everywhere!
As for dashcams, how stupid is that? What stops someone in the car from filming shit on their phone?
Ah, lemme guess: still using Chrome instead of Firefox (or a derivative) with a few handy extensions installed, like “Self-Destructing Cookies” and “I still don’t care about cookies”, besides “uBlock Origin” of course. Or do you walk in the rain without an umbrella?
If I wanted to rejoice in the EU nanny state I would focus on USB chargers… isn’t some EU regulation the reason my kid’s school-issued iPad has a USB-C charger now? Big win.
We have plenty of laws to keep us safe from the greedy corporations. I’d rather live with a few annoying laws that keep me and my fellas safe than live in an unregulated country and pay hundreds of dollars for insulin, or get absolutely smashed by the hindu scammers.
And your idea of what the GDPR is, and of giving your freedoms away just because you don’t want to spend half a second to accept the cookies you want and deny the ones you don’t, is laughable.
Do you enjoy being recorded? I don’t.
Your data is a bit outdated, I think. Yes, Germany had a law against dashcams and Google Street View and such, but that has long since changed, and they now have full Street View too.
If you zoom out on Google’s Street View and grab the little guy, you’ll see that all of Europe is selectable, although Serbia seems a bit sparse.
The streets are also full of Teslas with all their cameras, where the caveat is that the privacy authority in the Netherlands, for instance, got Tesla to stop the ‘filming while parked’ guard feature since it was an intrusion of privacy, and Tesla rolled out a local update for that, I read. (Although I wonder if Tesla just said they did…)
Not that there aren’t all kinds of stipulations, and more complex ones than in the US I think, but the old complete ban is not present anymore, AFAIK.
Nobody want’s to be recorded by a glASSHOLE.
It’s going to remain a tough sell – these smart-glasses are basically smart-watches with worse controls. It’s a tiny screen with a terrible interface. They need a big shift – maybe much better controls, maybe ditch the screen and only do music + cameras, maybe swap the tiny screen for a monochrome holographic waveguide with a huge FOV.
Did you not read up on how Meta does the controls? They have a wristband that detects your muscle impulses, and everybody says it works perfectly; it reacts to specific movements of the hand. And you can even draw text with your finger, and that also works, it seems.
I figure it will even work with heavy tattoos, where smart watches fail to get the data through them.
You do need an arm though. There’s always something huh.
Oh, it’s actually usable now! It can handle d-pad (4-direction) input, click 1, and click 2. That’s basically all they need. The writing looks acceptable but slower than a touchscreen or voice – they’re writing one char at a time and waiting for it to be detected, which reminds me of old OCR input.
This is decent. They got an interface that’s on-par with a watch and could go beyond. I was hoping they’d have a usable pointer like Apple VR. THAT is when this device could supplant smart phones.
You’re literally describing the Meta glasses.
Meta Ray-Bans have already sold over 2 million units. On-the-fly language translation may be the killer app for the lower-priced non-visual AR glasses.
The US military is testing them too and is considering using them now, I read.
This would be a great application, depending on how “on the fly” on the fly actually is. If it’s “wait 20 sec while the thing sends the data off, gets it processed, and gets it back to read it slowly back to you”, then it’s kinda lame, but still better than nothing.
Real simultaneous translators (like, the people) are still pretty amazing. They manage to buffer enough that they can listen and talk at the same time, in two languages.
Well, compare that to the use case for smart watches. Their target audience seems to be fitness freaks, judging by the top features they brag about. As jewelry they are rubbish with their bland black faces, their battery life sucks, and the screens are small and barely usable for anything. So not much worse than smart sunglasses. The sunglasses need to find equivalent killer features, however stupid – maybe they could count your breaths or control your blinking pattern, warn you if you don’t sit straight or if you had too many beers, that sort of stuff…
The key feature of a smart watch is that it vibrates when you get a message or a call, and you can read the text or the number right off the screen, then dismiss the call if you don’t want to answer it or make a quick “yes/no/maybe” reply to a text with pre-defined answers. It also features a clock. That means you’re not fishing your phone out of your pocket 200 times a day for one reason or the other, or annoying other people with ring tones and message beeps.
The rest of the functions are cheap rubbish and unreliable or difficult at best. With that in mind, you can find smart watches that aren’t bland pebbles with 10 hours of battery life – you just got to look down-market. There’s cheap and cheerful “smart” watches that have a battery life of 1-2 weeks and do everything you actually need for about 50 bucks.
“The key feature of a smart watch is that it vibrates when you get a message or a call, and you can read the text or the number right off the screen, then dismiss the call if you don’t want to answer it or make a quick “yes/no/maybe” reply to a text with pre-defined answers.”
There is also NFC for payments.
“There’s cheap and cheerful “smart” watches that have a battery life of 1-2 weeks”
Can they serve as 2FA key?
Hard to say. Depends on what?
Yes, but with a choice between Google pay and Apple pay, I choose no pay.
I’ve had other wallet systems available via payment QR on watch. Examples like Line Wallet, WeChat Wallet, AliPay, TrueMoney, etc. Many countries outside of Europe and NA don’t do tap to pay very commonly. These types of QR-linked payments often do not link to Google Wallet nor Apple Wallet systems.
My watch has an endless variety of faces and currently sports a customized LCARS design. The battery life is 5-7 days depending on how often the vibrator has to go off. As for being for fitness freaks, I don’t really see that. Aside from notifications, maps, and media controls I use mine to monitor my sleep hygiene and heart rate as well as my stress levels to help keep myself from getting burned out.
My glasses primarily replace my headphones and phone camera. It is faster and more convenient to take pictures for quick notes using my glasses, and having the open ear speakers I can still hear and be aware of my surroundings while listening to my music or on a call. Also the glasses don’t fall out of my ears like earbuds do.
And most importantly, health insurance pays for watches and glasses.
Orwell got a lot of things the wrong way around.
For instance, the cheap security bought by the HOAs around here will install cameras, but no recorders, so they don’t have to deal with the equipment and the legal hurdles. The CCTV feed is simply routed to some central office somewhere and there’s “someone” watching it, sometimes, maybe. Which of course means that you can be almost sure that nobody is actually watching and the cameras might as well be dummies.
Didn’t take long for the car thieves and burglars to figure it out either, especially when they mailed the tenants flyers saying that these are non-recording security cameras.
Security cameras have the additional swiss cheese hole of relying on police to actually investigate crimes.
I don’t live in a high crime area, but over the years I’ve gotten footage of a few people stealing packages off doorsteps, checking vehicles for locked doors and stealing the contents of unlocked vehicles, and I’ve dutifully filed reports and supplied links to the recorded footage. So far, they have never actually taken the step of accessing the footage supplied.
So given that:
1. There’s a good chance the feed is unrecorded/unmonitored
2. If it is monitored, for petty crimes people may not bother reporting it
3. If it does get reported, it will likely go uninvestigated
4. If it is investigated, it’s unlikely a positive identification will be made
5. If identification is made, even if a warrant is issued it’s unlikely an arrest will be made
6. If they are arrested, charged, found guilty, they’ll likely serve little to no time
It all seems a bit pointless. A better option might be going after the roots of societal problems. But that also seems unlikely.
It’s placebo security.
They might as well stop pretending and stop paying, but then people would become scared by the fact that there’s no police, no neighborhood watch, and not even cameras.
The fundamental problem is that there’s 1-2 people out of a 100 who are just anti-social and dumb, and will fall to drugs, alcohol, unemployment and petty crime regardless of your systemic attempts to help them. It’s these people who end up terrorizing neighborhoods for decades until they finally drop dead from substance abuse or fighting with other criminals.
Panopticon flew the coop ages ago. We have two entire generations growing up right now whose social strategies are largely molded by the knowledge that all of their actions are permanently recorded and could be virally distributed at a moment’s notice. So they just do nothing. Pretty sad to watch.
At this point, a giant camera lens blatantly shoved in your face would actually represent an improvement, because at least it isn’t covert.
What would you do instead? Make a clown out of yourself and change nothing?
Did I say I’d do anything different in their situation? All I said is I think it’s sad that a couple generations are cooped up and don’t experience any of the childhood memories or adolescent relationships I enjoyed.
I agree. And today recordings are taken out of context too easily.
Even at work.
As they should. If enough of them do this, maybe the systems supporting the cameras will fall apart! Nature is healing.
I had to stop reading after laughing at “Part of the problem here is of course that most people do not generally wear glasses”
How about darkening glasses if you look at someone ugly!
You had better not look in a mirror then.
Ah, yes! The Peril Sensitive Sunglasses, then. Good choice. :D
I think the use case is as a prosthetic. Nowadays many people walk around and even have conversations while wearing AirPods. Are they translating? Recording? Using AI to tell them how to have a convo? The glasses will help people with memory issues, ADHD, various social cues, etc. and will become accepted that way.
Presumably they’ll do other things as well, but those “health” functions will be the Trojan horse.
They also sound handy for following geographical directions and service instructions.
Thank you! As someone with severe ADHD and short-term memory problems, I am thrilled to see smart glasses getting more common.
I finally have a pair of my own and it has been incredible to take pictures and have the AI set reminders for me on the fly while I’m doing other things. I go back over those pictures later and capture information in my notes that otherwise would have been missed.
Smart Cities, Smart Phones, Smart Glasses, Dumb Citizens.
Yeah the people who designed all those things also share a barely-concealed contempt for humanity.
“From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine.”
If I wanted Luddite op-eds/rage bait, I’d be on Reddit. This is not the content I come to hackaday for.
Also, the glasses are not ever covertly recording. They have a flashing LED that, when obstructed, stops and prevents recording. Meanwhile, cheaper Chinese “spy glasses” exist that do secretly record. Spying is not a valid argument against this product and if I end up getting assaulted for wearing them, I’m blaming publications like yours that intentionally mislead people for a few extra clicks.
The LED on Ray-Bans is easily disabled/bypassed; flex your Google-fu for YouTube videos explaining how. Spying is a valid argument against these products. Living under the constant watch of every douchebag hoping to catch some shamebait for clicks is dystopian. If you get assaulted for wearing a camera on your face where it is unwanted, then you’ve only yourself and your assailant to blame. Fortunately, video voyeurism laws are slowly creeping into the law books already. These sorts of gadgets will only drive them forward faster as people grow increasingly tired of the erosion of privacy.
Not what I read.
Read more.
Disabling the LED is pretty darn hard because there’s a sensor that verifies the LED is not obscured and is lit before the camera engages.
As for worrying about getting filmed by douches, that’s an argument against smartphones and civilians owning video cameras, not an argument against glasses.
Disabling the LED isn’t as hard as you think. It’s been done through several different methods already.
And at least with video cameras and cell phones there is a degree of obviousness when they are filming. The argument against glasses is that they are NOT obvious. They are discreet and easily go unnoticed.
Glasshole detected
“Also, the glasses are not ever covertly recording. They have a flashing LED that, when obstructed, stops and prevents recording”
We have heard that about cameras on laptops, and yet that was not a hardware-implemented feature. And both Alexa and Siri have no hardware “mic off” switch. You focus on creeps that could secretly spy on a few other people, but forget that (not only) Mark’s company creeps on all of us already. Even if you leave home without a single electronic device, there will be billions of cameras and mics everywhere to track you anyway. They are even advertised as a source of feed for their AI – all you see and hear will be recorded and processed by AI. Of course, only if you agree – they are known for respecting your decisions about privacy, right?
Except the glasses don’t use a network connection during normal operation and can’t transmit images or video without manually connecting a phone.
Before you fearmonger, actually find out how the technology works.
Meanwhile your PHONE is always connected and listening.
Huh. Well, too bad!
Please stop making silly statements like this. All displays project an image on your retina. For head-mounted displays, all displays project an image toward your pupil, and, if your pupil is in the “eye box”, that image will project on to your retina.
I suppose you are trying to point out the fact that while most displays also project images everywhere one can see them from, a laser could theoretically project an image only to one specific point? I don’t know of any display that has done that. Even the original Microvision “retinal display” required a beam spreader to increase the size of the eye box, because, lacking pupil tracking, it required your eyes to be in a very specific place in order to see the output image.
This is just like using binoculars or microscopes, which also project images toward your eyes. If your pupils aren’t in the right place, you can’t see the projected image. If your pupils are in the right place, then the device is projecting an image onto your retina. Does that make them remarkable?
In this case, they are talking about Retinal Projection technology (or, a Virtual Retinal Display). Most displays, including the ones in many smart glasses, use a screen of some sort. The image is projected onto the screen, which bounces the light to your eyes. Virtual Retinal Displays use your retina as the screen, projecting photons DIRECTLY into your eyes instead of using a physical external screen.
It is rarely used, as far as I know, but has appeared in a few consumer devices, and is also used to help with certain types of visual impairments.
I hope that helps clarify a bit!
And the obvious problem with the technology is that since the projection is not coming from a screen through a lens, but essentially from a laser pointer directed into your eye, if the system should fail in certain ways it could blast a single laser dot at your retina and burn a hole in it.
“if the system should fail in certain ways it could blast a single laser dot at your retina and burn a hole in it.”
Yes, the ever-so-common laser failure mode of increasing in power hundreds of times over standard operating parameters.
The laser in the Retissa was measured at 0.316 μW. Class 1 lasers, which are considered “eye safe”, allow up to 390 μW of red/green light. So their laser would have to BOOST by over 1200 times its ordinary power to hit Class 2, which wouldn’t “burn a hole” in your eye.
Direct viewing of Class 2 lasers can cause temporary or permanent eye damage, such as:
Flash blindness: Temporary loss of vision due to overwhelming light
Retinal burns: Small, localized burns on the retina
Macular degeneration: Damage to the central part of the retina responsible for sharp vision
But go ahead and bang your drum, Chicken Little.
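For what it’s worth, the power-ratio arithmetic above checks out; here is a quick sketch in Python, taking the figures as cited in the comment (the 0.316 μW measurement and the ~390 μW Class 1 limit are that commenter’s numbers, not independently verified):

```python
# Sanity check of the ratio quoted above. Both figures are taken from the
# comment as-is: measured Retissa beam power vs. the cited Class 1 limit
# for visible (red/green) light.
measured_uW = 0.316          # reported beam power
class1_limit_uW = 390.0      # cited Class 1 exposure limit

ratio = class1_limit_uW / measured_uW
print(f"headroom: about {ratio:.0f}x")  # about 1234x, i.e. "over 1200 times"
```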
I agree that seems like an unlikely (but not impossible) failure mode. But a more likely failure mode is that the laser stops moving, so it’s blasting its 0.316 μW directly at one point for potentially hours if you don’t notice. Of course, not exactly one point, because your eye’s always moving.
Shrug. It makes me nervous. If there were millions of people using it every day without incident, I’d trust it. Today, it seems like you’re in the experimental group.
The Microvision scanned laser display had multiple fail-safes built in that essentially cut power to the lasers if the mirror stopped scanning. Also, as I mentioned, it was not shining a single laser dot, but an expanded beam the size of the scanning mirror.
“it was not shining a single laser dot, but an expanded beam the size of the scanning mirror.”
An expanded beam the size of the scanning mirror is still a single laser dot. Their systems used a 1.5mm and later a 2mm diameter mirror. While the beam was expanded to near that diameter for beam steering, any implementation that involved direct retinal projection would have required focusing that beam down to a much smaller diameter (concentrating its power) or the perceived resolution would have been pathetic.
“any implementation that involved direct retinal projection would have required focusing that beam down to a much smaller diameter (concentrating its power) or the perceived resolution would have been pathetic.”
That’s not the case. The perception of a pixel comes from seeing (nearly) parallel rays of light coming from the same direction. You really want as wide a distribution of those rays as you can get in order to be sure that some of them enter the pupil.
What distinguishes one pixel from another is the angle that its rays enter your eye from. In the Microvision display, this was determined by the deflection of the mirror along with the lenses that made the diverging beam converge again towards the eye.
If that isn’t clear, then consider a dot in the upper left of your monitor. Light from that dot spreads over your entire face and more. The lens of your eye is converging all the rays from that dot (that enter your pupil) down to a single spot on your retina. The dot on the screen that’s next to the first is doing the same thing, but the only difference is that its rays are entering your eye from a different angle, and thus your lens is converging them down to a different spot.
Of course, if the monitor is not infinitely far away, the rays from any given dot are spreading out rather than all parallel, so it’s not totally accurate to say that only the angle differs. It’s a simplification.
@cityzen
“That’s not the case. The perception of a pixel comes from seeing (nearly) parallel rays of light coming from the same direction.”
The average retina is roughly 1,094 square mm. If you are sweeping a 2 mm diameter beam across that, you have a very low number of positions that the beam can hit. If you focus the beam down to 0.2 mm, you increase the potential perceived resolution significantly. You want as NARROW a point of focus on the retina as possible to maximize the perceived resolution.
The light coming from a screen’s pixel is not the same as a focused laser beam. The beam isn’t spreading across your whole face with your eye just catching its fair share. With direct retinal projection, the laser is raster-scanning across your retina directly, turning each color beam on and off at the appropriate time to “draw” an image ON YOUR RETINA.
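The spot-count arithmetic in the comment above can be made explicit with a back-of-the-envelope sketch. The retina-area figure is the one quoted in the comment; treating spots as tightly packed non-overlapping circles is my simplification and ignores packing losses.

```python
import math

# How many non-overlapping circular spots of a given diameter fit on
# the retina? Assumption: retina area ~1094 mm^2 (figure quoted in the
# comment); spots modeled as tightly packed circles.
RETINA_MM2 = 1094.0

def max_spots(beam_diameter_mm: float) -> int:
    spot_area = math.pi * (beam_diameter_mm / 2.0) ** 2
    return int(RETINA_MM2 / spot_area)

print(max_spots(2.0))   # ~348 spots: far below any usable resolution
print(max_spots(0.2))   # ~34800 spots: 100x more, since count scales as 1/d^2
```

Since spot count scales as the inverse square of the beam diameter, focusing from 2 mm down to 0.2 mm buys a factor of 100, which is the crux of the argument.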
I’m very familiar with many varieties of display optical systems. In most cases, a “screen” is just a handy device to disperse light rays in many directions such that viewers in many directions can see the image. With head-mounted displays, screens are not required, since you know approximately where the viewer’s eye will be. You still need a way to generate an image that can be seen from anywhere the user’s pupils will be, however, and screens are one way to achieve that.
In no case, however, is there just a point-like laser beam representing a single pixel being shone into anyone’s eyes. The Microvision display with the scanned laser required a beam spreader to enlarge the beam to pupil size, which was still much too small for practical use (without accurate pupil tracking and targeting). The beam still has to pass through an optical system such that each pixel appears to the eye from the appropriate direction and distance. That optical system still has a virtual screen that represents the apparent origin surface of all the pixels.
Also, what if you consider a display system that uses a transparent LCD, where the light is shone through the rear of the LCD and the resulting photons go DIRECTLY into your eyes? (Of course it’s not direct; lenses are still required to make each pixel appear to come from the appropriate direction and distance.)
The analogy of binoculars and microscopes also still holds: are they devices that project photons directly into your retinas?
“In no case, however, is there just a point-like laser beam representing a single pixel being shown into anyone’s eyes.”
Confidently Incorrect
Google Retissa. No screen. No simulated screen. DIRECT LASER RETINAL PROJECTION.
This is exactly the same base tech as the original Microvision display. Their diagram shows a point laser beam scanning into the pupil. They don’t say how they solve the pupil tracking problem. So if you turn your eyes a few millimeters in any direction, or if the display shifts on your head a few millimeters, then the image is lost.
It is true that if you shine a point beam instead of a spread beam into the eye, you can avoid having to deal with vision problems such as nearsightedness or farsightedness, since the lens doesn’t have to focus any wavefronts.
There is still a virtual screen in the optical system. A virtual screen is not a simulated screen. It is a representation of the apparent position in space where the rays appear to emanate from.
I’ll say it again: all displays project an image directly into your retinas. These scanned laser displays just don’t send the image anywhere else at the same time.
@cityzen
They don’t solve the pupil tracking problem. The screen is ONLY clearly visible when looking straight ahead. Direct retinal projection would require an external device that moves in sync with the eye to increase the FOV beyond that.
“I’ll say it again: all displays project an image directly into your retinas.”
You’re a little slow on the uptake.
If I use a laser projector to spray an image on my wall, and I look at it, then the image is NOT being projected DIRECTLY into my retina. It is being DIRECTLY projected on the wall, reflected, and then a portion of that energy is being received by my retina.
If I use a birdbath optic system to reflect an OLED screen into my FOV, I am NOT directly projecting onto my retina. I am indirectly projecting onto my retina as the image is reflected into my eyes.
If I focus a laser projector on my retina DIRECTLY, then ALL of the laser’s output is being projected DIRECTLY onto my retina.
See the difference now?
This forum is not conducive to this type of conversation.
Oddly cryptic comment. Care to elaborate?
it’s the same thing as when everyone started wearing Bluetooth headsets, and if you didn’t see it you would think they were schizo. this just makes the situation worse. this is why i leave my computers at home.
Please consider the life changing impacts this tech could have for those less able.
While I appreciate that we need articles like this to question tech developments and debate their use on society, I think balance and nuance is required. It’s not all bad.
Watching the cringey launch event and the features, I couldn’t help but think of people I know with disabilities of one kind or another finding those features incredibly life-changing. Having someone’s words transcribed in real time – game-changing for some deaf people – hell, just my hard-of-hearing dad.
Navigating while controlling a wheelchair with both hands? Easy.
Navigating while biking! That applies to everyone who can cycle!
But also my specific techie use case is a job I have to do that involves maintaining a large inventory of devices (600+) which require two hands to access/unplug/carry/dismantle. A pair of these glasses removes the hassle of taking out my phone, scanning a barcode, tapping a few times to get the information, then putting it away, carrying the item, etc., and having to do this multiple times during a maintenance session – this would be a game changer.
I think the hype is overblown, but some of these functions are going to be just too useful to not stick around – it’s just going to be “niche” for a long time – not instantly mainstream.
Tech like this HAS uses, such as in inventory, tech support, etc. However, releasing it to the general public is an entirely different animal.
That’s basically why Google tried to relaunch Google Glass for corporations, to focus on tasks like warehouse/asset management. Apparently it was not a viable enough market, however.
Much like Microsoft’s HoloLens, it seems that these technologies will remain very niche and, as a result, prohibitively expensive. Which is a shame for the few valid use cases where they can improve lives. It basically explains why assistive technology usually comes with high price tags…
i agree with your thrust. and especially for like certified technical tasks (like airplane assembly or repair), it could be game-changing to even slightly reduce the overhead of logging and checklisting. but one specific example upsets me
you’re right that there are obvious benefits to biking with a heads up display — navigation, etc. a lot of people definitely do want this feature. but to me, that seems like a nightmare. being hands-free doesn’t eliminate the distraction. when i’m biking, i’m focused on the road and the scenery and nearby traffic. that’s a very specific mental exercise. and reading a map on a screen is a very different mental exercise. the meditation is (often) the point of human activity.
for me, the answer is easy, just don’t use it. i mean, airpods don’t hurt me because i had already developed a mild (and effective) revulsion to headphones when i’m out. but i am a little concerned about the context we’re creating for future generations. at best, we’re giving them the low-hanging fruit of “if you want to try something really trippy, just take off your glasses for the first time since you were 3”. at worst, we’re creating the built environment that they will impotently chafe against.
and yeah i’m assuming an always-on personal network interface will become the norm as it becomes ever more technologically plausible
Obvious use case: cyclists / E-Scooters / runners etc.
Look at the smart sport glasses they launched alongside this, and then imagine incorporating a transparent display and smart watch / wristband.
Now you can leave your phone at home so nothing’s dangling in your pocket, you can get heart rate etc data from the wristband, you’ve got a display that can display turn by turn navigation, currently playing song, trip progress, rearview camera, etc. The physical glasses themselves block wind etc from your eyes, and they double as open ear speakers.
At the same time, you don’t really want to present a lot of visual info that will make the cyclist want to take their eyes off the road ahead. Most of that info can be presented in an auditory fashion.
Ah yes, I love having maps described to me by a narrator. Much less distracting than a translucent HUD.
They could have helmet cam footage from the Tour de France peloton (with all the cheering bystanders) projected whilst they ride along.
I realized something.
Smart glasses could be useful if they could do all the AR magic they try to tease and give you private audio too but the tech isn’t near there yet.
So the only reason they could be pushing this unfinished tech is just to siphon up your info and use their cameras to siphon up others’ too.
I mean, I thought they’d do that and it would be a problem, but I suddenly realized how blatant it is: that is the only real reason they are doing it at all, since it doesn’t really do anything for you.
yes, but maybe not as directly. Getting you hooked on social media and strapping a communication device to your face will inherently produce more data than a passive user.
you’re surely right that they are spying on their users but i reject “only reason”. they’re also trying to prop up their stock price by looking like they’re an innovative company capable of boundless future growth and thus worthy of a speculative high stock valuation today
I could see these replacing the gopro for sports. I would buy them just for that. Gopro can be a hassle, but if you’re already wearing sports eyewear… Also, smart goggles for alpine sports. The resorts will be killing grounds then (more than now) with everyone only looking at their eye displays.
Why must they still be so chunky?
Because they contain a smart phone’s worth of electronics in them?
Joe 90’s glasses just had a capacitor and zigzag wire in each arm, and look at all the things he managed to accomplish!
So now I can walk around with my EE Google GLASS safely?
I think a neat use case for these would be giving talks – a fancy teleprompter.
I imagine a PowerPoint integration. As you tick off points in your talk, items are checked off. If you’re stuck/freeze, it could provide a suggestion.
And it would be an interesting aid for a Q&A session.
You see someone wearing these? Explain that they have to give you their details for your Subject Access Request.
Applications where these types of glasses would be useful include warehouses, search and rescue, surgery suites, field repair technicians, television-style interviews (for the host), things like that. But like any technology, down to pencils, they can be abused or weaponized. Ask John Wick about pencils :)
So meta-glassholes.
Aha, I imagine drive-by (or walk-by) lawyers wearing these at all times, finding all kinds of legal crevices they can sink their lawyering tentacles into, prying them wide enough to extract a million greenbucks for breaking some kind of unenforceable law nobody knew existed.
Though, I imagine there may be something good to come out of these when worn by the average Sam, for example, replays of “Baywatch” while listening to the mom-in-law planning the rest of my life for me.