A team of University of Oregon researchers has isolated an independent channel of synapses in the brain's auditory cortex that deals specifically with the disappearance of sounds, shutting off sound processing at appropriate times. Such regulation is vital for hearing and for understanding speech.
The discovery, detailed in the Feb. 11 issue of the journal Neuron, goes against the long-held assumption that a sound's appearance and its subsequent disappearance are signaled by the same pathway.
The new finding, which supports an emerging theory that a separate set of synapses handles sound offsets, could lead to new, precisely targeted therapies such as improved hearing devices, said Michael Wehr, a professor of psychology and member of the UO Institute of Neuroscience.
"It looks like there is a whole separate channel that goes all the way from the ear up to the brain that is specialized to process sound offsets," Wehr said. The two channels finally come together in a brain region called the auditory cortex, situated in the temporal lobe.
To do the research, Wehr and two UO undergraduate students - lead author Ben Scholl, now a graduate student at Oregon Health & Science University in Portland, and Xiang Gao - monitored the activity of neurons and their connecting synapses as rats were exposed to millisecond-long tone bursts, recording responses to both the start and the end of each sound. They tested varying sound durations and frequencies in a series of experiments.
One set of synapses, the researchers found, responded "very strongly at the onset of sounds," while a distinct set responded to their sudden disappearance. The two sets did not overlap, and the end of one sound did not affect the response to a new sound, reinforcing the idea of separate processing channels.
The UO team also noted that responses to the end of a sound differed from responses to its start in frequency tuning, duration and amplitude, findings that agree with a trend reported in at least three other studies over the past decade.
"Being able to perceive when sound stops is very important for speech processing," Wehr said. "One of the really hard problems in speech is finding the boundaries between the different parts of words. It is really not well understood how the brain does that."
As an example, he noted the difficulty some people have when they are at a noisy cocktail party and are trying to follow one conversation amid competing background noises. "We think that we've discovered brain mechanisms that are important in finding the necessary boundaries between words that help to allow for successful speech recognition and hearing," he said.
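The study itself reports electrophysiology, not an algorithm, but the role Wehr describes for offset responses, marking where one sound ends so the stream can be segmented, can be illustrated with a toy signal-processing sketch. Everything below (the short-time energy envelope, the 10-millisecond window, the 0.01 threshold, the function name find_onsets_offsets) is an illustrative assumption and not the researchers' method:

    import numpy as np

    def find_onsets_offsets(signal, rate, win_ms=10.0, threshold=0.01):
        """Mark where sounds start and stop via a short-time energy envelope.

        Toy analogy only: frames whose energy crosses `threshold` upward are
        onsets; downward crossings are offsets. Window size and threshold are
        arbitrary illustrative choices, not values from the study.
        """
        win = max(1, int(rate * win_ms / 1000.0))
        n_frames = len(signal) // win
        frames = signal[: n_frames * win].reshape(n_frames, win)
        energy = (frames ** 2).mean(axis=1)      # mean energy per frame
        active = energy > threshold              # is sound "on" in this frame?
        change = np.diff(active.astype(int))     # +1 = onset, -1 = offset
        onsets = (np.where(change == 1)[0] + 1) * win / rate
        offsets = (np.where(change == -1)[0] + 1) * win / rate
        return onsets, offsets

    # Two tone bursts separated by silence, loosely mimicking the stimuli:
    rate = 16000
    sig = np.zeros(rate // 2)                    # 0.5 s of silence
    tone = np.sin(2 * np.pi * 440 * np.arange(rate) / rate)
    sig[800:2400] = tone[:1600]                  # burst 1: 0.05 s to 0.15 s
    sig[4800:7200] = tone[:2400]                 # burst 2: 0.30 s to 0.45 s
    print(find_onsets_offsets(sig, rate))        # offsets at 0.15 s and 0.45 s

Run on two bursts separated by silence, the downward energy crossings at 0.15 and 0.45 seconds play the role of the boundaries that, in the study's account, a dedicated offset channel signals to the auditory cortex.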
"It looks like there is a whole separate channel that goes all the way from the ear up to the brain that is specialized to process sound offsets," Wehr said. The two channels finally come together in a brain region called the auditory cortex, situated in the temporal lobe.
To do the research, Wehr and two UO undergraduate students - lead author Ben Scholl, now a graduate student at the Oregon Health and Science University in Portland, and Xiang Gao - monitored the activity of neurons and their connecting synapses as rats were exposed to millisecond bursts of tones, looking at the responses to both the start and end of a sound. They tested varying lengths and frequencies of sounds in a series of experiments.
It became clear, the researchers found, that one set of synapses responded "very strongly at the onset of sounds," but a different set of synapses responded to the sudden disappearance of sounds. There was no overlap of the two responding sets, the researchers noted. The end of one sound did not affect the response to a new sound, thus reinforcing the idea of separate processing channels.
The UO team also noted that responses to the end of a sound involved different frequency tuning, duration and amplitude than those involved in processing the start of a sound, findings that agree with a trend cited in at least three other studies in the last decade.
"Being able to perceive when sound stops is very important for speech processing," Wehr said. "One of the really hard problems in speech is finding the boundaries between the different parts of words. It is really not well understood how the brain does that."
As an example, he noted the difficulty some people have when they are at a noisy cocktail party and are trying to follow one conversation amid competing background noises. "We think that we've discovered brain mechanisms that are important in finding the necessary boundaries between words that help to allow for successful speech recognition and hearing," he said.