Home Brew Augmented Reality

In July of 2016 a game was released that quickly spread to every corner of the planet. Pokemon Go was an Augmented Reality game that used a smartphone’s GPS location and camera to place virtual creatures into the player’s real location. The game was praised for its creativity and was one of the most popular and profitable apps of 2016. It’s been downloaded over 500 million times since.

Most of its users were probably unaware that they were flirting with an up-and-coming technology called Augmented Reality. A few days ago, [floz] submitted to us a blog post from a student who is clearly very aware of what this technology is and what it can do. So aware, in fact, that they made their own Augmented Reality system with Python and OpenCV.

In the first part of a multi-part series, the student (we don’t know their name) walks you through the basic structure of making a virtual object appear on a real-world object through a camera. They get into some fairly dense math, so you might want to wait until you have a spare hour or two before digging into this one.
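For a flavour of what’s involved, here is a minimal and entirely hypothetical OpenCV sketch of the first step: finding a known reference image in the camera feed with ORB features and a homography, then drawing something on top of it. The file name and thresholds are placeholders, and the student’s tutorial goes much further than this.

```python
# Minimal marker-less AR sketch with OpenCV (hypothetical file name and thresholds)
import cv2
import numpy as np

MIN_MATCHES = 15

# Reference image of the real-world object we want to track (assumed file name)
reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
kp_ref, des_ref = orb.detectAndCompute(reference, None)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_frame, des_frame = orb.detectAndCompute(gray, None)
    if des_frame is None:
        continue
    matches = sorted(bf.match(des_ref, des_frame), key=lambda m: m.distance)

    if len(matches) > MIN_MATCHES:
        src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # The homography maps points on the flat reference onto the camera frame
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is not None:
            h, w = reference.shape
            corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
            projected = cv2.perspectiveTransform(corners, H)
            # Stand-in for the rendered virtual object: outline the tracked surface
            frame = cv2.polylines(frame, [np.int32(projected)], True, (0, 255, 0), 3)

    cv2.imshow("AR", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The real work in the tutorial comes after this point: estimating the camera pose from that homography so a 3D model, rather than a flat outline, can be rendered in place.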

Thanks to [floz] for the tip!

Hackaday Prize Entry: Telepresence With The Black Mirror Project

The future is VR, or at least that’s what it was two years ago. Until then, there’s still plenty of time to experiment with virtual worlds, the Metaverse, and other high-concept sci-fi tropes from the 80s and 90s. Interactive telepresence is what the Black Mirror Project is all about. Their plan is to build interactive software on the JanusVR platform for creating immersive VR experiences.

The Black Mirror project makes use of glTF for runtime 3D asset delivery to create environments ranging from simple telepresence to the mind-bending realities the team unabashedly compares to [Neal Stephenson]’s Metaverse.

For their hardware implementation, the team is looking at UDOO X86 single-board computers, with SSDs for data storage as well as a bevy of sensors — gesture, light, accelerometer, magnetometer — supplying the computer with data. There’s an Intel RealSense camera in the build, and the display is unlike any other VR setup we’ve seen before. It’s a tensor display with multiple projection planes and variable backlighting that has a greater depth of field and wider field of view than almost any other display.

Hackaday Prize Entry: SNAP Is Almost Geordi La Forge’s Visor

Echolocation projects typically rely on inexpensive distance sensors and the human brain to do most of the processing. The team creating SNAP: Augmented Echolocation are using much stronger computational power to translate robotic vision into a 3D soundscape.

The SNAP team starts with an Intel RealSense R200. The first part of the processing happens here because it outputs a depth map which takes the heavy lifting out of robotic vision. From here, an AAEON Up board, packaged with the RealSense, takes the depth map and associates sound with the objects in the field of view.

Binaural sound generation is a feat in itself and works on the principle that our brains process incoming sound from both ears to understand where a sound originates. Our eyes do the same thing. We are bilateral creatures so using two ears or two eyes to understand our environment is already part of the human operating system.
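As a toy illustration of that idea (not the SNAP team’s actual code), the sketch below takes a depth map, finds the nearest obstacle, and pans a tone left or right depending on where the obstacle sits in the field of view, with loudness falling off with distance.

```python
# Toy depth-map-to-stereo-cue sketch; the real system builds a full 3D soundscape
import numpy as np

SAMPLE_RATE = 44100
TONE_HZ = 440.0

def depth_to_stereo(depth_map, duration=0.2, max_range=4.0):
    """depth_map: 2D array of distances in metres (e.g. from a RealSense)."""
    # Find the closest valid point and where it sits left-to-right in the view
    valid = np.where(depth_map > 0, depth_map, np.inf)
    y, x = np.unravel_index(np.argmin(valid), valid.shape)
    distance = valid[y, x]
    pan = x / (depth_map.shape[1] - 1)          # 0.0 = far left, 1.0 = far right

    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    tone = np.sin(2 * np.pi * TONE_HZ * t)
    loudness = max(0.0, 1.0 - min(distance, max_range) / max_range)

    # Interaural level difference: louder in the ear the obstacle is closer to
    left = tone * loudness * (1.0 - pan)
    right = tone * loudness * pan
    return np.stack([left, right], axis=1)

# Example: an obstacle slightly right of centre, about a metre away
fake_depth = np.full((180, 320), 3.5)
fake_depth[90, 200] = 1.0
stereo = depth_to_stereo(fake_depth)
print(stereo.shape)  # (8820, 2) samples, ready to hand to an audio library
```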

In the video after the break, we see a demonstration where the wearer doesn’t need to move his head to realize what is happening in front of him. Instead of a single distance reading, where the wearer must systematically scan the area, the wearer simply has to be pointed the right way.

Another Assistive Technology entry used a traditional ultrasonic distance sensor instead of robotic vision. There is even a version out there for augmented humans with magnet implants, called Bottlenose, which we covered in Cyberpunk Yourself.

Continue reading “Hackaday Prize Entry: SNAP Is Almost Geordi La Forge’s Visor”

CastAR Shuts Doors

Polygon reports CastAR is no more.

CastAR is the brainchild of renaissance woman [Jeri Ellsworth], who was hired by Valve to work on what would eventually become SteamVR. Valve let [Jeri] go, but allowed her to take her invention with her. [Jeri] founded a new company, Technical Illusions, with [Rick Johnson], and over the past few years CastAR has appeared everywhere from Maker Faires to venues more focused on innovative technologies.

In 2013, Technical Illusions got its start with a hugely successful Kickstarter, netting just north of one million dollars. This success drew the attention of investors and eventually led to a funding round of $15 million. With this success, Technical Illusions decided to refund the backers of its Kickstarter.

We’ve taken a look at CastAR in the past, and it’s something you can only experience first-hand. Unlike the Oculus, Google Cardboard, or any of the other VR plays companies are coming out with, CastAR is an augmented reality system that puts computer-generated objects in a real, physical setting. Any comparison between CastAR and a VR system is incomplete; these are entirely different systems with entirely different use cases. Think of it as the ultimate tabletop game, or the coolest D&D game you could possibly imagine.

Sharing Virtual And Holographic Realities Via Vive And Hololens

An experimental project to mix reality and virtual reality by [Drew Gottlieb] uses the Microsoft Hololens and the HTC Vive to show two users successfully sharing a single workspace as well as controllers. While the VR user draws cubes in midair with a simple app, the Hololens user can see the same cubes being created and mapped to a real-world location, and the two headsets can even interact in the same shared space. You really need to check out the video, below, to fully grasp how crazy-cool this is.

Two or more VR or AR users sharing the same virtual environment isn’t new, but anchoring that virtual environment to the real world in a way that two very different headsets can share is interesting to see. [Drew] says the real challenge wasn’t just getting the different hardware to talk to each other, it was giving both devices a shared understanding of a common space; you can see the results in the video embedded below.
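One way to get that shared understanding, sketched below with made-up numbers rather than [Drew]’s code, is to agree on a physical anchor point and express every object relative to it; each headset then only needs to know where that anchor sits in its own tracking space.

```python
# Simplified "common space" sketch: re-express a point from one headset's
# world frame in the other's, via a shared physical anchor (assumed poses)
import numpy as np

def make_pose(rotation_3x3, translation_xyz):
    """Build a 4x4 homogeneous transform from rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = translation_xyz
    return T

# Hypothetical calibration: where the shared anchor sits in each device's world
anchor_in_vive = make_pose(np.eye(3), [1.0, 0.0, 2.0])
anchor_in_hololens = make_pose(np.eye(3), [-0.5, 0.0, 1.0])

def vive_to_hololens(point_vive):
    """Map a point from Vive world coordinates into HoloLens world coordinates."""
    p = np.append(point_vive, 1.0)
    # Vive world -> anchor frame -> HoloLens world
    p_anchor = np.linalg.inv(anchor_in_vive) @ p
    return (anchor_in_hololens @ p_anchor)[:3]

cube_position_vive = np.array([1.2, 0.8, 2.5])
print(vive_to_hololens(cube_position_vive))  # the same cube, in HoloLens coordinates
```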

Continue reading “Sharing Virtual And Holographic Realities Via Vive And Hololens”

Projection Mapping In Motion Amazes

Projection mapping is pretty magical; done well, it’s absolutely miraculous when the facade of a building starts popping out abstract geometric objects, or crumbles in front of our very eyes. “Dynamic projection mapping onto deforming non-rigid surface” takes it to the next level. (Watch the video below.)

A group in the Ishikawa Watanabe lab at the University of Tokyo has a technique where they cover the target with a number of dots in an ink that is only visible in the infra-red. A high-speed (1000 FPS!) camera and some very fast image processing then work out not only how the surface is deforming, but which surface it is. This enables them to swap out pieces of paper and get the projections onto them in real time.
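The broad idea, very loosely sketched below (this is not the lab’s pipeline, and the dot layout and content file are invented), is to detect the IR-visible dots each frame, match them against their known positions on the flat sheet, and use the resulting homography to warp the projected content so it stays glued to the paper.

```python
# Rough dot-tracking sketch; the real system tracks hundreds of dots at 1000 fps
# and handles non-rigid deformation, not just a single planar warp
import cv2
import numpy as np

# Known dot positions on the undeformed sheet, listed top-left, top-right,
# bottom-left, bottom-right to match the naive sort below (hypothetical layout)
sheet_dots = np.float32([[50, 50], [450, 50], [50, 650], [450, 650]])
content = cv2.imread("content.png")  # the image we want projected onto the sheet

# Default parameters look for dark blobs; tune for the actual dot appearance
detector = cv2.SimpleBlobDetector_create()

def project_onto_sheet(ir_frame):
    """ir_frame: grayscale frame from the IR camera."""
    keypoints = detector.detect(ir_frame)
    if len(keypoints) < 4:
        return None
    # Naive correspondence: sort detected dots top-to-bottom, left-to-right.
    # The real system solves this matching far more robustly.
    seen = np.float32(sorted((kp.pt for kp in keypoints),
                             key=lambda p: (p[1], p[0])))[:4]
    H, _ = cv2.findHomography(sheet_dots, seen)
    if H is None:
        return None
    h, w = ir_frame.shape[:2]
    # Warp the content so it lands on the (possibly moving) sheet, camera-space view
    return cv2.warpPerspective(content, H, (w, h))
```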

Continue reading “Projection Mapping In Motion Amazes”

Hackaday Prize Entry: Raspberry Pi Zero Smart Glass

Some of the more interesting consumer hardware devices of recent years have been smart glasses: devices like Google Glass or Snapchat Spectacles, eyewear incorporating a display and computing power to deliver information or provide augmented reality on an unobtrusive wearable platform.

Raspberry Pi Zero Smart Glass aims to provide an entry into this world, with image recognition and OCR text recognition in a pair of glasses courtesy of a Raspberry Pi Zero. Unusually, though, it doesn’t take the display approach of other devices, with a mirror or prism in the user’s field of view; instead it replaces the user’s entire field of view with a display and reconnects them to the world through the Raspberry Pi camera.

The display in question is an inexpensive set of “3D Virtual Stereo Digital Video glasses”, of the type that can be found fairly easily on your favourite auction site. They aren’t particularly high-resolution, but the Pi can easily drive them with its composite video output. The electronics and camera are mounted on a headband, in a custom 3D-printed enclosure. All files can be downloaded from the project page.

There is some Python software, but it’s fair to say that there is not a clear demo on the project page showing it working. However this is no reason to disregard this project, because even if its software has yet to achieve its full potential there is value elsewhere. The 3D-printed Raspberry Pi enclosure should be of use to many other similar wearable projects, and we’d almost say it’s worthy of a project all of its own.
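Even without a polished demo, the basic loop such software would need is easy to picture. Here is a rough, hypothetical sketch (not the project’s actual code) that grabs frames from the Pi camera and runs them through Tesseract OCR, assuming picamera and pytesseract are installed on the Pi Zero.

```python
# Hypothetical OCR loop for camera-based smart glasses on a Raspberry Pi
import time

import pytesseract
from picamera import PiCamera
from picamera.array import PiRGBArray
from PIL import Image

camera = PiCamera(resolution=(640, 480))
raw = PiRGBArray(camera, size=(640, 480))
time.sleep(2)  # let the sensor settle before capturing

for frame in camera.capture_continuous(raw, format="rgb", use_video_port=True):
    image = Image.fromarray(frame.array)
    text = pytesseract.image_to_string(image)
    if text.strip():
        # In the real glasses this would be overlaid on the composite-video display
        print(text.strip())
    raw.truncate(0)  # reuse the buffer for the next frame
```

A Pi Zero won’t run Tesseract quickly, so in practice any such loop would process frames at well under one per second; the point is only to show the shape of the pipeline.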