Shining A Light On Hearing Loss

When auditory cells are modified to receive light, do you see sound, or hear light? To some trained gerbils at University Medical Center Göttingen, Germany, under the care of [Tobias Moser], the question is moot. The gerbils were conditioned to move to a different part of their cage when researchers played a sound, and when cochlear lights were activated on their modified cells, the gerbils obeyed their conditioning and went where they were supposed to go.

The linked article includes software that lets you simulate what it is like to hear through a cochlear implant, or you can check out the video below the break, which is not related to the article. Either way, improvements to the technology are welcome, and according to [Tobias]: “Optical stimulation may be the breakthrough to increase frequency resolution, and continue improving the cochlear implant.” The first cochlear implant was installed in 1964, so the technology has a long history and a solid future.

This is not the only method for improving cochlear implants, and some don’t require any modified cells, but [Tobias] explained his reasoning: “I essentially took the harder route with optogenetics because it has a mechanism I understand.” If that does not sound like so many hackers who reach for the tools they are familiar with, we don’t know what does. Revel in your Arduinos, 555 timers, transistors, or optogenetically modified cells, and know that your choice of tool is only as powerful as its wielder.

Optogenetics could become a hot ticket at bio maker spaces. We have talked about optogenetics in lab rodents before, but it also finds purchase in zebrafish and roundworms.

Continue reading “Shining A Light On Hearing Loss”

Run From The Sound Of Footsteps In Blind Game Of Tag

The human auditory system is a complex and wonderful thing. One of its most useful features is the ability to estimate the range and direction of sound sources – think of the way people instinctively turn when hearing a sudden loud noise. A team of students has leveraged this innate ability to produce a game of tag based around nothing but sound.

The game runs on two FPGAs, which handle the processing and communication required. The chaser is given a screen upon which they can see their own location and that of their prey. The target has no vision at all, and must rely on the sounds in their stereo headphones to detect the location of the chaser and evade them as long as possible.

The project documentation goes into great detail about the specifics of the implementation. The game relies on the Head-Related Transfer Function (HRTF) – a filter that describes how a sound arriving from a given direction is shaped by the listener’s head and ears before it reaches each eardrum. This allows the FPGA to synthesize the chaser’s footsteps and feed the audio to the target, who perceives the chaser’s position purely by sound.
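A full HRTF is a set of measured filters, one per direction, but the core idea can be approximated with just the two strongest localization cues: interaural time difference (the far ear hears a sound slightly later) and interaural level difference (the near ear hears it slightly louder). Here is a rough Python sketch of that approximation – the head radius, panning law, and all function names are our own assumptions for illustration, not details from the students’ project:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.09      # m, rough human head radius (assumption)
SAMPLE_RATE = 44100

def itd_seconds(azimuth_rad):
    # Woodworth's spherical-head formula for interaural time difference
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def ild_gains(azimuth_rad):
    # Constant-power panning: louder in the ear facing the source
    pan = math.sin(azimuth_rad)  # -1 (hard left) .. +1 (hard right)
    left = math.cos((pan + 1) * math.pi / 4)
    right = math.sin((pan + 1) * math.pi / 4)
    return left, right

def render_footstep(mono, azimuth_deg):
    """Return (left, right) sample lists with ITD delay and ILD gains applied."""
    az = math.radians(azimuth_deg)
    delay = round(abs(itd_seconds(az)) * SAMPLE_RATE)  # delay in samples
    gl, gr = ild_gains(az)
    left = [gl * s for s in mono]
    right = [gr * s for s in mono]
    if az > 0:    # source to the right: left ear hears it later
        left = [0.0] * delay + left
        right = right + [0.0] * delay
    elif az < 0:  # source to the left: right ear hears it later
        right = [0.0] * delay + right
        left = left + [0.0] * delay
    return left, right
```

Feeding a short footstep click through `render_footstep` at, say, 45° to the right produces a stereo pair where the right channel is louder and the left channel lags by a handful of samples – exactly the cues the target player’s brain uses to place the chaser. A real HRTF adds the spectral coloring from the outer ear, which is what lets listeners also judge elevation and front/back.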

It’s a great example of a gameplay mechanic that we’d love to see developed further. The concept of trying to find one’s way around by hearing alone is one which we think holds a lot of promise.

With plenty of processing power under the hood, FPGAs are a great choice for complex audio projects. A great project to try might be decoding MP3s.