We are accustomed to medical devices being expensive, but sometimes the costs seem to far exceed reasonable expectations. At its most simplistic, a hearing aid should just be a battery, microphone, amplifier, and speaker, all wrapped in an enclosure, right? These kinds of parts can be had for a few dimes, so why do modern hearing aids cost thousands of dollars, and why can’t they seem to go down in price?
If you’ve ever seen an experienced radio operator pull a signal out of the noise, or talked to someone in a crowded, noisy restaurant, you know the human brain is excellent at focusing on a particular sound. This is sometimes called the cocktail party effect, and if you wear a hearing aid it doesn’t work as well, because the device amplifies everything equally. The German research organization Fraunhofer aims to change that. They’ve demonstrated a hearing aid that uses EEG sensors to determine what you are trying to hear, then uses that information to configure a beamforming microphone array to focus on the sound you want.
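Fraunhofer hasn’t published its implementation, but the core idea behind a beamforming microphone array is simple enough to sketch. Below is a minimal delay-and-sum beamformer in Python, assuming a linear far-field array; the spacing, steering angle, and signals are placeholders, not anything from the actual device.

```python
# Minimal delay-and-sum beamformer: delay each microphone channel so that
# sound arriving from the chosen direction lines up in phase, then sum.
import numpy as np

def delay_and_sum(signals, fs, spacing, angle_deg, c=343.0):
    """signals: (n_mics, n_samples) array, one row per mic in a linear array.
    Returns a mono signal steered toward angle_deg (0 = broadside)."""
    n_mics, n_samples = signals.shape
    angle = np.deg2rad(angle_deg)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # Far-field arrival delay of mic m relative to mic 0, in samples
        shift = int(round(m * spacing * np.sin(angle) / c * fs))
        # Advance the channel to undo that delay (np.roll wraps at the
        # edges, which is acceptable for a short demonstration)
        out += np.roll(signals[m], -shift)
    return out / n_mics

# Toy usage: a 4-mic array with 2 cm spacing, steered 30 degrees off-axis
fs = 16000
mics = np.random.randn(4, fs)  # stand-in for real recordings
focused = delay_and_sum(mics, fs, 0.02, 30)
```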
In addition to electronically focusing sound, the device stimulates your brain using transcranial electrostimulation. A low-level electrical signal tied to the audio input directly stimulates the auditory cortex of your brain and reportedly improves intelligibility.
Assistive technology is extremely fertile ground for hackers to make a difference, because of the unique requirements of each user and the high costs of commercial solutions. [Nick] has been working on Earswitch, an innovative assistive tech switch that can be actuated using voluntary movement of the middle ear muscle.
Most people don’t know they can contract their middle ear muscle, technically called the tensor tympani, but will recognise it as the rumbling sound or muffling of hearing that occurs when yawning or tightly closing your eyes. Its actual function is to protect your hearing from loud sounds such as your own screaming or chewing. [Nick] ran a survey and found that 75% of respondents could consciously contract the tensor tympani, and 17% could do it in isolation from other movements. Using a cheap USB auroscope (an ear camera like the one [Jenny] reviewed in November), he was able to detect the movement using iSpy, an open-source software package meant for video surveillance. The output from iSpy is used to control Grid3, a commercial assistive technology software package. [Nick] also envisions the technology being used as a control interface for consumer electronics via earphones.
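iSpy does the camera-watching for [Nick], but the underlying trick is ordinary frame differencing. Here is a rough stand-in using OpenCV; the camera index and thresholds are guesses that would need tuning against a real auroscope feed, and the print statement stands in for whatever actually notifies Grid3.

```python
# Hypothetical stand-in for iSpy's motion detection: watch the eardrum
# through a USB ear camera and fire a switch event when the image changes
# enough between consecutive frames.
import cv2

cap = cv2.VideoCapture(0)          # assumed: the auroscope enumerates as camera 0
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Count pixels that changed noticeably since the last frame
    diff = cv2.absdiff(gray, prev)
    changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
    if changed > 500:               # assumed threshold for an "ear rumble"
        print("switch activated")   # here you would signal Grid3 or similar
    prev = gray
```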
With the proof of concept done, [Nick] is looking at ways to make the tech more practical to actually use, possibly with a CMOS camera module inside standard noise-canceling headphones. Simpler optical sensors, like reflectance or time-of-flight, are also being investigated. If you have suggestions or a possible use case, drop by the project page.
Assistive tech always makes for interesting hacks. We recently saw a robotic arm that helps people feed themselves, and the 2017 Hackaday Prize had an entire stage focused on assistive technology.
We are swimming in radio transmissions from all around, and if you live above the ground floor, they are coming at you from below as well. Humans have no sensory organ for radio signals, but we have lots of hardware that can make sense of them; the chances are good that you are looking at one such device right now. [Frank Swain] has leaped from merely accepting the omnipresent signals from WiFi routers and portable devices to listening in on them. The audio is only a representation of the data’s presence, so he is not listening in on every tweet and email password. There is a sample below the break, and it sounds like a Geiger counter playing PIN•BOT.
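[Frank]’s exact signal chain isn’t spelled out here, but a Geiger-counter-style sonification is easy to improvise: count packets and click for each one. The sketch below assumes a Linux machine with root privileges, scapy and PyAudio installed, and a wireless interface named wlan0; none of that comes from [Frank]’s setup.

```python
# Rough packet sonification sketch (not [Frank]'s actual rig): sniff
# traffic with scapy and play a short click for every packet seen, so
# busier airwaves sound like a faster Geiger counter.
import numpy as np
import pyaudio
from scapy.all import sniff

RATE = 44100
pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE, output=True)

# A 5 ms decaying noise burst makes a convincing Geiger-style tick
t = np.arange(int(0.005 * RATE))
click = np.clip(np.random.randn(t.size) * np.exp(-t / 40.0) * 8000,
                -32767, 32767).astype(np.int16)

def tick(pkt):
    # One click per captured packet, regardless of its contents
    stream.write(click.tobytes())

sniff(iface="wlan0", prn=tick, store=False)
```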
We experience only the most minuscule sliver of the information coming at us at any given moment, and machines to hack that gap are not hard to find on these pages, so [Frank] is in good company. Magnetosensory is a popular choice for people with a poor sense of direction. Echolocation is perfect for fans of Daredevil. Delivering new sensations could be easier than ever with high-resolution tactile displays. Detect some rather intimate data with ‘SHE BON.’
When auditory cells are modified to receive light, do you see sound, or hear light? To some trained gerbils at University Medical Center Göttingen, Germany, under the care of [Tobias Moser], the question is moot. The gerbils were conditioned to move to a different part of their cage when a sound was played, and when lights in the cochlea stimulated their modified cells instead, the gerbils obeyed their conditioning and went where they were supposed to go.
In the linked article, there is software that allows you to simulate what it is like to hear through a cochlear implant, or you can check out the video below the break, which is not related to the article. Either way, improvements to the technology are welcome, and according to [Tobias]: “Optical stimulation may be the breakthrough to increase frequency resolution, and continue improving the cochlear implant”. The first cochlear implant was installed in 1964, so the technology has a long history and a solid future.
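If you can’t run the linked simulator, the classic way to approximate cochlear-implant hearing in software is a noise vocoder: split speech into a handful of bands, throw away everything but each band’s envelope, and use the envelopes to modulate band-limited noise. This is a generic sketch of that technique, not the article’s software, and it assumes audio sampled at 16 kHz or better.

```python
# Noise vocoder: a common classroom approximation of what a cochlear
# implant user hears, with only n_bands channels of envelope information.
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, x)

def envelope(x, fs, cutoff=160.0):
    # Rectify, then low-pass to keep only the slow amplitude envelope
    b, a = butter(2, cutoff / (fs / 2))
    return lfilter(b, a, np.abs(x))

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced, cochlea-like
    noise = np.random.randn(speech.size)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass(speech, lo, hi, fs), fs)
        out += env * bandpass(noise, lo, hi, fs)
    return out / np.max(np.abs(out))   # normalize to avoid clipping
```

Dropping n_bands to four or two gives a visceral sense of why [Tobias] cares so much about frequency resolution.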
This is not the only method for improving cochlear implants, and some don’t require any modified cells, but [Tobias] explained his reasoning. “I essentially took the harder route with optogenetics because it has a mechanism I understand,” and if that does not sound like so many hackers who reach for the tools they are familiar with, we don’t know what does. Revel in your Arduinos, 555 timers, transistors, or optogenetically modified cells, and know that your choice of tool is as powerful as the wielder.
The human auditory system is a complex and wonderful thing. One of its most useful features is the ability to estimate the range and direction of sound sources – think of the way people instinctively turn when hearing a sudden loud noise. A team of students have leveraged this innate ability to produce a game of tag based around nothing but sound.
The game runs on two FPGAs, which handle the processing and communication required. The chaser is given a screen upon which they can see their own location and that of their prey. The target has no vision at all, and must rely on the sounds in their stereo headphones to detect the location of the chaser and evade them as long as possible.
The project documentation goes into great detail about the specifics of the implementation. The game relies on the Head-Related Transfer Function (HRTF), which describes how a sound is filtered by a listener’s head and ears depending on the direction it arrives from. This allows the FPGA to simulate the chaser’s footsteps and feed the audio to the target, who perceives the chaser’s position purely by sound.
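A real HRTF is a set of measured per-direction filters, which is what makes the students’ FPGA implementation interesting. To get a feel for the idea, though, the two strongest directional cues, interaural time difference (ITD) and interaural level difference (ILD), can be faked in a few lines. This sketch only illustrates those cues and is not the team’s code; the head radius and ILD scaling are rough assumptions.

```python
# Crude binaural panner: place a mono sound at an azimuth angle using
# only ITD (the far ear hears it later) and ILD (the far ear hears it
# quieter). A real HRTF also captures spectral cues this ignores.
import numpy as np

def spatialize(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Return an (n, 2) stereo array with the source panned to
    azimuth_deg (negative = left, positive = right)."""
    az = np.deg2rad(azimuth_deg)
    # Woodworth-style ITD approximation for a spherical head
    itd = head_radius / c * (abs(az) + np.sin(abs(az)))
    shift = int(round(itd * fs))
    near = mono
    far = np.concatenate([np.zeros(shift), mono])[: mono.size]
    # Simple ILD: attenuate the far ear more as the source moves off-center
    far = far * (1.0 - 0.6 * abs(np.sin(az)))
    left, right = (far, near) if az > 0 else (near, far)
    return np.stack([left, right], axis=1)
```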
It’s a great example of a gameplay mechanic that we’d love to see developed further. The concept of trying to find one’s way around by hearing alone is one which we think holds a lot of promise.
With plenty of processing power under the hood, FPGAs are a great choice for complex audio projects. A great project to try might be decoding MP3s.