Pokemon Go Had Players Capturing More Than They Realized

Released in 2016, Pokemon Go quickly became a worldwide phenomenon. Even folks who weren’t traditionally interested in the monster-taming franchise were wandering around with their smartphones out, on the hunt for virtual creatures that would appear via augmented reality. Although the number of active users has dropped over the years, it’s estimated that more than 50 million users currently log in and play every month.

From a gameplay standpoint, Go is brilliant. Although the Pokemon that players seek out obviously aren’t real, searching for them closely approximates the in-game experience that the franchise has been known for since its introduction on the Game Boy back in 1996.

But now, instead of moving a character through a virtual landscape in search of the elusive “pocket monsters”, players find them dotted throughout the real world. To be successful, players need to leave their homes and travel to where the Pokemon are physically located — which often happens to be a high-traffic area or other point of interest.

As a game, it’s hard to imagine Pokemon Go being a bigger success. At the peak of its popularity, throngs of players were literally causing traffic jams as they roamed the streets in search of invisible creatures. But what players may not have realized as they scanned the world around them through the game was that they were helping developer Niantic build something even more valuable.

Continue reading “Pokemon Go Had Players Capturing More Than They Realized”

Augmented Reality Project Utilizes The Nintendo DSi

[Bhaskar Das] has been tinkering with one of Nintendo’s more obscure handhelds, the DSi. The old-school machine has been given a new job as part of an augmented reality app called AetherShell.

The concept is straightforward enough. The Nintendo DSi runs a small homebrew app which lets you use the stylus to make simple line drawings on the lower touchscreen. These drawings are then trucked out wirelessly as raw touch data via UDP packets, and fed into a geometric reconstruction script written in Python which transforms them into animation frames. A Gemini tool is used to classify what the drawings are, too, in anticipation of a future sound effects upgrade. The frames are then sent to an iPhone app, which uses ARKit APIs and the phone’s camera to display the animations embedded into the surrounding environment via augmented reality.
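
If you’re curious what the receiving end of a pipeline like that might look like, here’s a minimal Python sketch. The wire format and port are assumptions on our part (AetherShell’s actual packet layout isn’t spelled out), but it shows the general idea of collecting a stream of raw UDP touch samples into strokes ready for reconstruction:

```python
import socket
import struct

# Assumed wire format, not AetherShell's actual packet layout:
# one stylus sample per datagram, packed as two little-endian
# uint16s for the (x, y) position on the DSi's 256x192 lower screen.
POINT_FMT = "<HH"
PORT = 9000  # hypothetical port

def strokes(port=PORT):
    """Yield completed strokes as lists of (x, y) points.

    An empty datagram is treated as a pen-up marker that
    closes the stroke currently being drawn.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    current = []
    while True:
        data, _addr = sock.recvfrom(64)
        if not data:  # pen lifted
            if current:
                yield current
                current = []
        else:
            x, y = struct.unpack(POINT_FMT, data[:struct.calcsize(POINT_FMT)])
            current.append((x, y))

# Each finished stroke could then be resampled and rendered
# into animation frames for the ARKit side to pick up.
for stroke in strokes():
    print(f"stroke with {len(stroke)} points")
```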

One might question the utility of this project, given that the iPhone itself has a touch screen you can draw on, too. It’s a fair question, and one without a real answer, beyond the fact that sometimes it’s really fun to play with an old console and do weird things with it. Plus, there just isn’t enough DSi homebrew out in the world. We love to see more.

Continue reading “Augmented Reality Project Utilizes The Nintendo DSi”

Keebin’ With Kristina: The One With The C64 Keyboard

[Jean] wrote into the tips line (the system works!) to let all of us know about his hacked and hand-wired C64 keyboard, a thing of beauty in its chocolate-brown and 9u space bar-havin’ glory.

A C64 keyboard without the surrounding C64.
Image by [Jean] via GitHub

This Arduino Pro Micro-based brain transplant began as a sketch, and [Jean] reports it now has proper code in QMK. But how is a person supposed to use it in 2025, almost 2026, especially as a programmer or just plain serious computer user?

The big news here is that [Jean] added support for missing characters using the left and right Shift keys, and even added mouse controls and Function keys that are accessed on a layer via the Shift Lock key. You can see the key maps over on GitHub.
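
For a flavor of what that boils down to, here’s a toy Python sketch of a Shift Lock-toggled layer. All of the scan codes and key names are made up for illustration, and [Jean]’s real implementation is of course a proper QMK keymap in C, but the lookup logic is the same idea:

```python
# Toy illustration of layer switching; scan codes and key names
# are hypothetical. The real keymap is QMK C code on the Pro Micro.
BASE_LAYER = {0x1C: "A", 0x32: "B", 0x29: "SPACE"}
LOCK_LAYER = {0x1C: "MOUSE_UP", 0x32: "F1", 0x29: "MOUSE_BTN1"}
SHIFT_LOCK = 0x7F  # made-up scan code for the C64 Shift Lock key

shift_locked = False

def translate(scan_code):
    """Map a raw scan code to a keycode, honoring the active layer."""
    global shift_locked
    if scan_code == SHIFT_LOCK:
        shift_locked = not shift_locked  # toggle, like QMK's TG()
        return None
    layer = LOCK_LAYER if shift_locked else BASE_LAYER
    return layer.get(scan_code)

print(translate(0x1C))        # 'A' on the base layer
print(translate(SHIFT_LOCK))  # None, layer toggled
print(translate(0x1C))        # 'MOUSE_UP' on the lock layer
```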

I’ll admit, [Jean]’s project has got me eyeing the C64 I picked up for $12 at a thrift store, though I doubt it still works as intended. But don’t worry, I will test it first.

Fortunately, it looks like [Jean] has thought of everything when it comes to reproducing this hack, including the requisite C64-to-Arduino pinout. So, what are you waiting for?

Continue reading “Keebin’ With Kristina: The One With The C64 Keyboard”

Brilliant Labs Has New Smart Glasses, With A New Display

Brilliant Labs have been making near-eye display platforms for some time now, and they are one of the few manufacturers making a point of focusing on an open and hacker-friendly approach to their devices. Halo is their newest smart glasses platform, currently in pre-order (299 USD) and boasting some nifty features, including a completely new approach to the display.

Development hardware for the Halo display. The actual production display is color, and integrated into the eyeglasses frame.

Halo is an evolution of the concept of a developer-friendly smart glasses platform intended to make experimentation (or modification) as accessible as possible. Compared to previous hardware, it has some additional sensors and an entirely new approach to the display element.

Whereas previous devices used a microdisplay and beam splitter embedded into a thick lens, Halo has a tiny display module set into the eyeglasses frame that the wearer looks up and into. The idea appears to be to provide the user with audio (bone-conduction speakers in the arms of the glasses) as well as a color “glanceable” display for visual data.

Some of you may remember Brilliant Labs’ Monocle, a transparent, self-contained, and wireless clip-on display designed with experimentation in mind. The next device was Frame, which put things into a “smart glasses” form factor, with added features and abilities.

Since Halo is still in pre-release, full SDK and hardware details haven’t been shared yet. But given Brilliant Labs’ history of fantastic documentation for their hardware and software, we’re pretty confident Halo will get the same treatment. Want to know more but don’t wish to wait? Checking out the tutorials and documentation for the earlier devices should give you a pretty good idea of what to expect.

Supercon 2024: Photonics/Optical Stack For Smart-Glasses

Smart glasses are a complicated technology to work with. The smart part is usually straightforward enough—microprocessors and software are perfectly well understood and easy to integrate into even very compact packages. It’s the glasses part that often proves challenging—figuring out the right optics to create a workable visual interface that sits mere millimeters from the eye.

Dev Kennedy is no stranger to this world. He came to the 2024 Hackaday Supercon to give a talk and educate us all on photonics, optical stacks, and the technology at play in the world of smart glasses.

Continue reading “Supercon 2024: Photonics/Optical Stack For Smart-Glasses”

Octet Of ESP32s Lets You See WiFi Like Never Before

Most of us see the world in a very narrow band of the EM spectrum. Sure, there are people with a genetic quirk that extends the range a bit into the UV, but it’s a ROYGBIV world for most of us. Unless, of course, you have something like this ESP32 antenna array, which gives you an augmented reality view of the WiFi world.

According to [Jeija], “ESPARGOS” consists of an antenna array board and a controller board. The antenna array has eight ESP32-S2FH4 microcontrollers and eight 2.4 GHz WiFi patch antennas spaced a half-wavelength apart in two dimensions. The ESP32s extract channel state information (CSI) from each packet they receive and send it on to the controller board, where another ESP32 streams the data over Ethernet while providing the clock and phase reference signals needed to make the phased array work. This gives you all the information you need to calculate where a signal is coming from and how strong it is, which is used to plot a sort of heat map to overlay on a webcam image of the same scene.
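
If you’re wondering how phase turns into direction, here’s a back-of-the-napkin NumPy sketch for a single row of half-wavelength-spaced antennas. It’s a big simplification of what ESPARGOS actually does (real CSI needs calibration, and the full array works in two dimensions), but the core trick of reading the angle of arrival out of the phase step between neighboring antennas is the same:

```python
import numpy as np

C = 3e8
FREQ = 2.4e9                # 2.4 GHz WiFi
WAVELENGTH = C / FREQ       # ~12.5 cm
SPACING = WAVELENGTH / 2    # half-wavelength element spacing

def angle_of_arrival(csi_row):
    """Estimate the azimuth (degrees) of a source from one row of
    complex CSI samples, one per antenna, assumed already
    phase-calibrated against the controller's shared reference."""
    # Average the phase step between neighbors via summed conjugate
    # products, which tolerates noise better than raw differences.
    delta_phi = np.angle(np.sum(csi_row[1:] * np.conj(csi_row[:-1])))
    # Far-field geometry: delta_phi = 2*pi*SPACING*sin(theta)/WAVELENGTH
    sin_theta = delta_phi * WAVELENGTH / (2 * np.pi * SPACING)
    return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))

# Sanity check: synthesize CSI for a source 20 degrees off boresight.
theta = np.radians(20)
phase_step = 2 * np.pi * SPACING * np.sin(theta) / WAVELENGTH
csi = np.exp(1j * phase_step * np.arange(4))  # a four-antenna row
print(angle_of_arrival(csi))                  # ~20.0
```

Sweep a steering vector across a grid of candidate directions instead of averaging, and you get the kind of per-pixel power estimate that can be overlaid on a webcam image as a heat map.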

The results are pretty cool. As [Jeija] walks through the field of view of the array, his smartphone shines like a lantern, with very little perceptible lag between the WiFi and the visible light images. He’s also able to demonstrate reflection off metallic surfaces, penetration through the wall from the next room, and even outdoor scenes where the array shows how different surfaces reflect the signal. There’s also a demonstration of using multiple arrays to determine the angle and time delay of arrival of a signal to precisely locate a moving WiFi source. It’s a little like a reverse LORAN system, albeit indoors and at a much shorter wavelength.

There’s a lot in this video and the accompanying documentation to unpack. We haven’t even gotten to the really cool stuff like using machine learning to see around corners by measuring reflected WiFi signals. ESPARGOS looks like it could be a really valuable tool across a lot of domains, and a heck of a lot of fun to play with too.

Continue reading “Octet Of ESP32s Lets You See WiFi Like Never Before”

FPV Flying In Mixed Reality Is Easier Than You’d Think

Flying a first-person view (FPV) remote controlled aircraft with goggles is an immersive experience that makes you feel as if you’re really sitting in the cockpit of the plane or quadcopter. Unfortunately, while you’re wearing the goggles, you’re also completely blind to the world around you. That’s why you’re supposed to have a spotter nearby to keep watch on the local meatspace while you’re looping through the air.

But what if you could have the best of both worlds? What if your goggles let you see not only the video stream from your craft’s FPV camera, but also the world around you? That’s precisely the idea behind mixed reality goggles such as the Apple Vision Pro and Meta’s Quest; you just need to put all the pieces together. In a recent video [Hoarder Sam] shows you exactly how to pull it off, and we have to say, the results look quite compelling.

Continue reading “FPV Flying In Mixed Reality Is Easier Than You’d Think”