There are a few very different pathways to building a product, and we have to applaud the developers who choose the open-source path. Today’s highlight is [Mentra], who are releasing an open-source smart glasses OS for their own and others’ devices, letting you develop your smart glasses ideas just once, with a single codebase that works across multiple models.
Currently, the compatibility list covers four models: two of them Mentra’s (the Live and the Mach 1), one from Vuzix (the Z100), and one from Even Realities (the G1), with some display-only and some recording-only. The app store already has a few apps covering the basics, the repository looks lively, and if the openness is anything to go by, we’re sure to see more.
Connected devices are ubiquitous in this era of wireless chips, and most of them rely heavily on streaming data to someone else’s servers. That sentence might already sound dodgy, and it doesn’t get better when you think about today’s smart glasses, like the ones built by Meta (aka Facebook).
[sh4d0wm45k] doesn’t shy away from fighting fire with fire, and shows you how to build a wireless device that detects Meta’s smart glasses – or any other company’s Bluetooth devices, really, as long as you can match them by the beginning of the Bluetooth MAC address.
[sh4d0wm45k]’s device is a mini light-up sign reading “GLASSHOLE” that turns bright white as soon as a pair of Meta glasses is detected in the vicinity. Under the hood, a commonly found ESP32 devboard suffices for the task, driving two rows of white LEDs on a custom PCB. The code is super simple: it sifts through the advertisement packets flying through the air, and it lets you easily contribute your own OUIs (Organizationally Unique Identifiers, the first three bytes of a MAC address). It wouldn’t be hard to add such a feature to any device of your own with Arduino code under its hood, or to rewrite it to fit a platform of your choice.
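We haven’t picked through the original firmware line by line, but the core trick is simple enough to sketch. Here’s a minimal Arduino-style example of what OUI matching against BLE advertisements could look like on an ESP32 using the stock BLE scanning library; the OUI list, LED pin, and timings below are placeholders of ours, not values from [sh4d0wm45k]’s code.

```cpp
// Minimal sketch: scan BLE advertisements on an ESP32 and light an LED
// when a device's MAC address starts with a known OUI. The OUI list and
// LED pin are illustrative placeholders, not the original project's values.
#include <BLEDevice.h>
#include <BLEScan.h>
#include <BLEAdvertisedDevice.h>

const int LED_PIN = 2;                 // many devboards have an LED on GPIO2
const unsigned long HOLD_MS = 10000;   // keep the sign lit for 10 s after a hit

// Example OUIs (first three bytes of the MAC), lowercase hex with colons.
const char *TARGET_OUIS[] = {
  "aa:bb:cc",  // placeholder -- substitute the OUIs you want to flag
};

unsigned long lastHit = 0;

class OuiMatcher : public BLEAdvertisedDeviceCallbacks {
  void onResult(BLEAdvertisedDevice dev) override {
    std::string mac = dev.getAddress().toString();  // "aa:bb:cc:dd:ee:ff"
    for (auto oui : TARGET_OUIS) {
      if (mac.rfind(oui, 0) == 0) {    // does the MAC start with this OUI?
        lastHit = millis();
      }
    }
  }
};

void setup() {
  pinMode(LED_PIN, OUTPUT);
  BLEDevice::init("");
  BLEScan *scan = BLEDevice::getScan();
  scan->setAdvertisedDeviceCallbacks(new OuiMatcher());
  scan->setActiveScan(false);          // passive listening is enough to see MACs
}

void loop() {
  BLEDevice::getScan()->start(5, false);   // scan for 5 seconds (blocking)
  BLEDevice::getScan()->clearResults();
  digitalWrite(LED_PIN, millis() - lastHit < HOLD_MS ? HIGH : LOW);
}
```

Note that devices which randomize their MAC addresses will slip past a simple prefix match like this, which is exactly why the approach works best against hardware that broadcasts a fixed, vendor-assigned OUI.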
We’ve been talking about smart glasses ever since Google Glass, but recently, with Meta’s offerings, the smart glasses debate has reignited. Given the inherently anti-social aspects of the technology, we can see what would motivate someone to build such a hack. Perhaps the next thing we’ll see is some sort of spoofed packet that shuts the glasses off, making them temporarily inoperable in your presence, much like the proximity pairing packet spam we’ve seen aimed at iPhones.
The geometric waveguide glass of the Meta Ray-Ban Display glasses. (Credit iFixit)
Recently the avid teardown folk over at iFixit got their paws on Meta’s Ray-Ban Display glasses, for a literal in-depth look at these smart glasses. Along the way they came across the fascinating geometric waveguide technology that makes the floating display feature work so well. There’s also an accompanying video of the entire teardown, for those who enjoy watching a metal box cutter get jammed into plastic.
Overall, these smart glasses can be considered somewhat repairable, as you can pry the arms open with a bit of heat. Inside you’ll find the 960 mWh battery and a handful of PCBs, but finding spare parts for anything beyond perhaps the battery will be a challenge. The front part of the glasses contains the antennae and the special lens on the right side that works with the liquid crystal on silicon (LCoS) projector to reflect the image back to your eye.
While LCoS has been used for many years already, including in Google Glass, it’s the glass itself that provides the biggest technological advancement. Instead of the typical diffractive waveguide, these glasses use a geometric reflective waveguide made by Schott, with the underlying technology developed by Lumus for augmented reality (AR) applications. This is supposed to offer better optical efficiency, as well as less light leakage into or out of the waveguide.
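For a sense of why light stays inside the lens at all: any waveguide of this kind relies on total internal reflection, which only kicks in past the critical angle set by the refractive indices of the glass and the surrounding air. As a rough illustration (assuming a high-index glass around $n \approx 1.8$, not a figure from the teardown):

$$\theta_c = \arcsin\!\left(\frac{n_\text{air}}{n_\text{glass}}\right) \approx \arcsin\!\left(\frac{1.0}{1.8}\right) \approx 34^\circ$$

Light hitting the glass surfaces beyond that angle from the normal stays trapped and bounces along until the embedded partially reflective facets redirect a portion of it toward the eye, whereas the gratings in a diffractive waveguide tend to spill some light out of the front of the lens as well, which is where the leakage comparison comes from.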
Although it is definitely impressive technology, the overall repairability score of these smart glasses is pretty low, and you have to contend with both looking incredibly dorky and some people considering you a bit of a glasshole.
It’s becoming somewhat of a running gag that any device or object will be made ‘smart’ these days, whether it’s a phone, TV, refrigerator, home thermostat, headphones, or glasses. This generally means cramming a computer, display, camera, and other components into the unsuspecting device, with the overarching goal of somehow making it more useful to the user without impacting its basic functionality.
Although smart phones and smart TVs have been readily embraced, smart glasses have always been a bit of a tough sell. Part of the problem is of course that most people do not wear glasses in the first place, whether because their vision needs no correction or because they opt for contact lenses instead. This means the market for smart glasses isn’t immediately obvious: does it target people who wear glasses anyway, people who wear sunglasses a lot, or is the goal to basically move a smart phone’s functionality onto your face?
Smart glasses also raise many privacy concerns, as their cameras and microphones may be recording at any given time, which can be unnerving to people. When Google launched their Google Glass smart glasses, this led to the coining of the term ‘glasshole’ for people who refuse to follow perceived proper smart glasses etiquette.
Brilliant Labs have been making near-eye display platforms for some time now, and they’re one of the few manufacturers who make a point of taking an open, hacker-friendly approach to their devices. Halo is their newest smart glasses platform, currently available for pre-order (299 USD) and boasting some nifty features, including a completely new approach to the display.
Development hardware for the Halo display. The actual production display is color, and integrated into the eyeglasses frame.
Halo is an evolution of the concept of a developer-friendly smart glasses platform intended to make experimentation (or modification) as accessible as possible. Compared to previous hardware, it has some additional sensors and an entirely new approach to the display element.
Whereas previous devices used a microdisplay and beam splitter embedded in a thick lens, Halo has a tiny display module set into the eyeglasses frame that the wearer glances up and into. The idea appears to be to provide the user with audio (via bone-conduction speakers in the arms of the glasses) as well as a color “glanceable” display for visual data.
Some of you may remember Brilliant Labs’ Monocle, a transparent, self-contained, and wireless clip-on display designed with experimentation in mind. The next device was Frame, which put things into a “smart glasses” form factor, with added features and abilities.
Halo is still in pre-release, so full SDK and hardware details haven’t been shared yet. But given Brilliant Labs’ history of fantastic documentation for their hardware and software, we’re pretty confident Halo will get the same treatment. Want to know more but don’t wish to wait? Checking out the tutorials and documentation for the earlier devices should give you a pretty good idea of what to expect.
Smart glasses are a complicated technology to work with. The smart part is usually straightforward enough—microprocessors and software are perfectly well understood and easy to integrate into even very compact packages. It’s the glasses part that often proves challenging—figuring out the right optics to create a workable visual interface that sits mere millimeters from the eye.
You normally think of smart glasses as something you wear either as an accessory or, if you need a little assistance, with corrective lenses. But [akhilnagori] has a different kind of smart eyewear: these glasses scan text and read it aloud into the user’s ear.
This project was inspired by a blind child who enjoyed listening to stories but could not read beyond a few braille books. The glasses perform the reading using a Raspberry Pi Zero 2 W and a machine learning algorithm.
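The write-up doesn’t spell out the exact software stack, so here is one way such a capture-recognize-speak loop could be put together on a Pi, using Tesseract for the OCR and espeak for the voice output. Both of those choices, along with the camera command and file paths, are assumptions of ours rather than details from [akhilnagori]’s build.

```cpp
// One possible capture -> OCR -> speech loop for a Pi-based text reader.
// Assumes the Tesseract and Leptonica development libraries plus the
// libcamera-apps and espeak packages are installed; none of this is taken
// from the original project's code.
#include <cstdlib>
#include <string>
#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>

int main() {
  tesseract::TessBaseAPI ocr;
  if (ocr.Init(nullptr, "eng")) {   // load the English recognition model
    return 1;
  }

  while (true) {
    // Grab a still frame from the Pi camera (blocking for about a second).
    std::system("libcamera-still -n --width 1280 --height 720 -o /tmp/frame.jpg");

    Pix *img = pixRead("/tmp/frame.jpg");
    if (!img) continue;

    ocr.SetImage(img);
    char *text = ocr.GetUTF8Text();   // run recognition on the captured frame

    if (text && std::string(text).find_first_not_of(" \n") != std::string::npos) {
      // Hand the recognized text to espeak so it is read into the earpiece.
      // (Shell quoting is glossed over here; a real build would sanitize it.)
      std::string cmd = "espeak \"" + std::string(text) + "\"";
      std::system(cmd.c_str());
    }

    delete[] text;
    pixDestroy(&img);
  }
}
```

On a Pi Zero 2 W, the OCR step is the slow part of a loop like this, so trimming the capture resolution or cropping to the center of the frame is the sort of tweak that keeps the spoken output feeling responsive.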