How is it possible to hear words in a crowded bar? Or on a busy street, or a construction site? UCSF scientists have released new study results examining how the human brain interprets speech in the midst of noisy environments.
The study’s participants were an unusual group: they were all epilepsy patients awaiting brain surgery, who agreed to have flexible panels containing 256 recording electrodes placed on the surface of their brains.
Through this study, scientists discovered that the human brain has the capacity to “fill in” portions of speech that are inaudible. Until now, this ability was attributed simply to knowledge of the language (a real-life Wheel of Fortune scenario). But in an unusual twist, the UCSF scientists also found that a brain region separate from the speech-processing areas somehow “predicts” which word a listener will hear when that word is partially masked by noise, well before the noise has even been processed by auditory areas.
“One of the oldest debates in the field is whether there’s a ‘top-down’ signal that actually changes the listener’s perception ‘online,’ in real time, or whether this is achieved by some sort of decision-making process that rapidly arrives at an interpretation after the missing sound segment has been processed,” said Matthew Leonard, PhD, assistant professor of neurological surgery. “Our data seem to support the former idea.”
One way the scientists deduced this was by studying the words “factor” and “faster.” When such a word is obscured by surrounding noise, the assumption has been that listeners fill it in based on context, as in “the car sped by faster than the bicycle.” But in the UCSF study, a region toward the front of the brain activated about half a second before the areas that would draw on context. And the researchers found that patients were just as likely to report hearing “factor,” despite the sentence’s context.
In short, the new results show that “there are brain mechanisms that are constantly working behind the scenes to make sure we don’t get tripped up every time there’s a sound that could prevent us from understanding speech,” said Leonard. “We don’t have a definitive idea of what this frontal signal is yet, but we’ll be exploring that question in future research.”
So the next time you drastically misinterpret a word said to you in a crowded bar, you can blame that frontal region of your brain, still unnamed for now.