Echolocation projects typically rely on inexpensive distance sensors and the human brain to do most of the processing. The team behind SNAP: Augmented Echolocation is using considerably more computational power to translate robotic vision into a 3D soundscape.
The SNAP team starts with an Intel RealSense R200, which handles the first stage of processing by producing a depth map, taking much of the heavy lifting out of robotic vision. From there, an AAEON Up board packaged with the RealSense takes the depth map and associates sounds with the objects in the field of view.
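The project's own software isn't shown here, but the core idea of turning a depth map into spatial sound cues can be sketched in a few lines. The snippet below is a hypothetical illustration, assuming the depth map arrives as a NumPy array in meters; the function name and the coarse pan/loudness mapping are our own simplifications, not the SNAP team's code.

```python
import numpy as np

def depth_to_sound_cues(depth_m, cols=8, max_range_m=4.0):
    """Reduce a depth map (in meters) to one sound cue per vertical slice.

    Each slice yields a stereo pan (-1 = far left, +1 = far right) and a
    loudness that grows as the nearest obstacle in that slice gets closer.
    """
    _, w = depth_m.shape
    cues = []
    for i in range(cols):
        slice_ = depth_m[:, i * w // cols:(i + 1) * w // cols]
        valid = slice_[slice_ > 0]                    # zero usually means "no reading"
        nearest = valid.min() if valid.size else max_range_m
        loudness = max(0.0, 1.0 - nearest / max_range_m)
        pan = 2.0 * (i + 0.5) / cols - 1.0            # center of the slice in stereo space
        cues.append((pan, loudness))
    return cues

# Synthetic 480x640 depth map: an obstacle 1 m away fills the right quarter of the view.
depth = np.full((480, 640), 3.5)
depth[:, 480:] = 1.0
for pan, loudness in depth_to_sound_cues(depth):
    print(f"pan {pan:+.2f}  loudness {loudness:.2f}")
```

Cues like these could then drive the binaural rendering described next.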
Binaural sound generation is a feat in itself. It works on the principle that our brains compare the sound arriving at each ear, using small differences in timing and loudness to work out where a sound originates. Our eyes do the same thing. We are bilateral creatures, so using two ears or two eyes to understand our environment is already part of the human operating system.
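As a rough illustration of those two cues, here is a toy renderer that positions a tone in stereo using an interaural time difference and an interaural level difference. This is only a sketch of the principle; the SNAP soundscape comes from a full 3D audio pipeline, and every constant and function name below is our assumption rather than anything from the project.

```python
import numpy as np

SAMPLE_RATE = 44_100
SPEED_OF_SOUND = 343.0     # m/s
HEAD_RADIUS = 0.09         # roughly 9 cm

def binaural_tone(azimuth_deg, freq=440.0, duration=0.5):
    """Place a tone at `azimuth_deg` (positive = to the right) using two cues:
    an interaural time difference (the far ear hears it later) and an
    interaural level difference (the far ear hears it quieter)."""
    az = np.radians(azimuth_deg)
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * freq * t)

    # Woodworth-style approximation of the time difference, plus a crude level roll-off.
    itd_seconds = HEAD_RADIUS / SPEED_OF_SOUND * (abs(az) + abs(np.sin(az)))
    delay = int(itd_seconds * SAMPLE_RATE)
    near = tone
    far = np.concatenate([np.zeros(delay), tone])[:len(tone)] * (1.0 - 0.6 * abs(np.sin(az)))

    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)            # (samples, 2) stereo buffer

stereo = binaural_tone(60)     # a tone placed 60 degrees to the listener's right
print(stereo.shape)            # write it out with scipy.io.wavfile.write(...) to listen
```

Real binaural systems go further, filtering sounds with head-related transfer functions so the brain can also pick up elevation and front/back cues.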
In the video after the break, we see a demonstration in which the wearer doesn't need to move his head to perceive what is happening in front of him. Unlike a single distance reading, which forces the wearer to systematically scan the area, this system only requires the wearer to be pointed the right way.
Another Assistive Technology entry used the traditional ultrasonic distance sensor instead of robotic vision. There is even a version out there for augmented humans with magnet implants, called Bottlenose, which we covered in Cyberpunk Yourself.
Wow, really neat project. It also looks like the documentation for the Intel RealSense and the AAEON Up board was not bad.
So cool! Gotta try! Because I’m not blind, I’ll just put it on backwards!
Geordi La Forge’s glasses are a Fram air filter from a ’70s vintage car. If you want his headgear, hit AutoZone.
Thanks! I’ll remember that for Proto II.
…I always thought they were a hair clip and some paint and greeblies…?
Which one? I don’t recall those back in the day.
“Almost” means sans the direct brain interface. Details, details…