Wednesday, September 14, 2011
Why Current In-the-Ear Hearing Aids Fail
Today’s hearing aids are tiny, nearly invisible in fact. They amplify sound and can present a wider range of frequencies, but they have not yet solved the problem of amplifying the peripheral sounds we just don’t want, or don’t need, to hear.
For new wearers the crumpling of a paper bag on the other side of a room can sound like a jackhammer.
This is a huge challenge for technology because it depends on how the brain perceives sound and how, with normal hearing, we have learned to filter peripheral sound out. Andrew J. Oxenham is a psychologist at the University of Minnesota and an expert in psychoacoustics.
Oxenham explains: The ear works by analysing sound and breaking it into different frequencies, and with many forms of hearing impairment it is this frequency selectivity that suffers.
What that means is that the ear doesn’t filter as well as it did before. Instead of very sharp tuning that separates different frequencies, the filtering becomes much broader, and there is no real way of compensating for that.
You can’t sharpen the filters, nor can you pre-process the sound so that it arrives sharp. It’s like a broken TV set: you can process the signal going into the TV as much as you like, but you still won’t get a clear picture out of it.
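To put some rough numbers on the broadening Oxenham describes, the sketch below uses the Glasberg and Moore equivalent rectangular bandwidth (ERB) formula for a normal auditory filter; the threefold broadening factor for the impaired ear is purely an illustrative assumption, not a clinical value.

```python
# Sketch of auditory filter bandwidth broadening with hearing impairment.
# The normal-hearing formula is the Glasberg & Moore (1990) ERB equation;
# the 3x broadening factor is an illustrative assumption only.

def erb_normal(f_hz: float) -> float:
    """ERB of the normal auditory filter centred at f_hz, in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_impaired(f_hz: float, broadening: float = 3.0) -> float:
    """Hypothetical impaired filter: the normal ERB scaled by a broadening factor."""
    return broadening * erb_normal(f_hz)

if __name__ == "__main__":
    for f in (500, 1000, 4000):
        print(f"{f} Hz: normal ERB ~{erb_normal(f):.0f} Hz, "
              f"impaired ERB ~{erb_impaired(f):.0f} Hz")
```

The broader the filter, the more two nearby frequencies blur together, which is why no amount of pre-processing can restore the lost selectivity.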
A big leap forward has been made with directional hearing. Modern aids can focus their microphones toward the front and filter out much of the sound coming from the sides and back. Although that is a fairly simple technique, it involves signal processing that wasn’t possible with earlier hearing aids.
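The article doesn’t say which directional scheme is used, but one classic two-microphone technique is the first-order differential (cardioid) array: delay the rear microphone’s signal by the inter-microphone travel time and subtract. A minimal sketch, with an assumed 1 cm microphone spacing:

```python
import cmath
import math

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.01       # 1 cm between front and rear mics (assumed)

def cardioid_gain(freq_hz: float, angle_deg: float) -> float:
    """Magnitude response of output = front(t) - rear(t - d/c) for a plane
    wave arriving at angle_deg (0 = straight ahead, 180 = directly behind)."""
    tau = MIC_SPACING / SPEED_OF_SOUND          # inter-mic travel time
    theta = math.radians(angle_deg)
    # The rear mic lags the front mic by tau*cos(theta); the processor adds
    # a further internal delay of tau before subtracting, so the total
    # effective delay between the two subtracted signals is:
    total_delay = tau * (1.0 + math.cos(theta))
    omega = 2.0 * math.pi * freq_hz
    return abs(1 - cmath.exp(-1j * omega * total_delay))
```

By construction the response is exactly zero for sound arriving from directly behind, which is one simple way an aid can suppress sound from the back while passing sound from the front.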
Ambient or peripheral sound is horribly distracting for hearing aid wearers, and this is a common complaint among people who have recently started wearing one. Their hearing has deteriorated over time, often without them being fully aware of it.
When they are suddenly fitted with a hearing aid, they hear sounds they had got used to not hearing. The sounds are suddenly annoying and distracting. It’s a contrast effect.
It’s more to do with perception, i.e. the brain’s ability to analyse and prioritise different sounds.
It’s a complex interaction between the ear and the brain. The ear sends signals up to the brain; the brain does an awful lot of processing on top of that; then sends signals back down to the ear. These signals change the way the ear accepts input.
This is partly why hearing aids are not perfect: the aid is not part of that natural feedback loop. There’s no way with current aids for the brain to interface with the device directly and change its characteristics.
To deal with background noise there are things called “hearing loops.”
These are systems set up in places like concert halls and churches that interface directly with the hearing aid. In practice the loop transmits by magnetic induction, picked up by a telecoil in the aid, rather like sending a radio signal to the device.
The idea is that this hearing loop picks up the sound directly from the microphone in front of a speaker.
Imagine you are in a conference and the speaker is talking into a microphone. Normally we hear the sound acoustically, through the air.
If you are wearing a regular hearing aid, its microphone will pick up that sound from the air, but together with all the background noise and reverberation in the room.
A hearing loop sends the signal directly from the microphone to the ear and bypasses the acoustics of the building itself. So the ear gets a much clearer and cleaner version of what’s coming into the microphone.
Two hearing aids better than one?
It’s only recently that people have routinely been fitted with two hearing aids; in the past, people often got only one.
Directional hearing and the way we localise sound: to know where a sound is coming from, the brain compares the signals arriving at the two ears. If it’s slightly louder on one side, the brain knows the sound is coming from that side.
More important is the time-of-arrival difference between the two ears. Think about a sound coming from the right: it will reach your right ear a little before it reaches your left ear.
Although we are talking about differences of mere millionths of a second, your brain needs two ears to detect them. With only one, you lose the ability to localise sound and tell which direction it is coming from.
It’s also an important part of filtering out noise. The brain can determine that there is speech right in front and background noise behind and to the side, and use those differences in localisation to make the speech more intelligible.
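The time-of-arrival cue above is easy to put in numbers. A minimal sketch, using the simple path-length model tau = d·sin(θ)/c and an illustrative 18 cm ear spacing (real head geometry varies, and more accurate models exist):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
EAR_SPACING = 0.18       # metres between the ears (illustrative value)

def itd_seconds(azimuth_deg: float) -> float:
    """Interaural time difference for a distant source at the given azimuth
    (0 = straight ahead, 90 = fully to one side), straight-path model."""
    return EAR_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

if __name__ == "__main__":
    for angle in (0, 30, 90):
        print(f"{angle} deg: ITD ~{itd_seconds(angle) * 1e6:.0f} microseconds")
```

Even for a source fully to one side, the difference is only around half a millisecond, which is why the text speaks of millionths of a second, and why both ears are needed to measure it at all.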
So the biggest technical challenge today is developing hearing aids that can focus on what we really need and want to listen to.
The hope is that even more sophisticated signal processing schemes will enable artificial source segregation: analysing the incoming signal, figuring out what is speech and what isn’t, and presenting only the wanted signal to the ear.
Distinguishing between speech and noise
The assumption is that what you really want to listen to is speech. There are certain acoustical aspects of speech that we can recognise, and certain acoustical aspects of noise that differ from speech.
So we need a suitable algorithm that can distinguish between speech and noise and filter out the unwanted signal.
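One long-standing heuristic for this kind of speech/noise decision uses exactly the sort of acoustical differences described above: voiced speech tends to have high short-time energy and a low zero-crossing rate, while broadband noise crosses zero far more often. A toy sketch, with thresholds that are purely illustrative and nothing like what a real hearing aid runs:

```python
import math

def frame_features(frame):
    """Short-time energy and zero-crossing rate of one frame of samples."""
    energy = sum(s * s for s in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)
    return energy, zcr

def is_speech(frame, energy_thresh=0.01, zcr_thresh=0.35):
    """Crude rule: call a frame speech if it is loud and crosses zero rarely.
    Both thresholds are illustrative, not tuned values."""
    energy, zcr = frame_features(frame)
    return energy > energy_thresh and zcr < zcr_thresh

# Toy demo: a 200 Hz "voiced" tone vs. low-level rapidly alternating "noise".
rate = 8000
voiced = [math.sin(2 * math.pi * 200 * n / rate) for n in range(400)]
noise = [0.05 * (-1) ** n for n in range(400)]
```

Here `is_speech(voiced)` is true and `is_speech(noise)` is false. Real systems use far richer features and statistical models, but the principle is the one in the text: find acoustical properties that separate speech from everything else, then pass only the speech.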
A more complete solution could mean that a brain-computer interface becomes part of future hearing aid systems, with the aid tapping into brain responses to pick out the specific signal the person wants to attend to.
This is an ongoing process of incremental steps, and we will continue to see improvements over the next 15 years.