The Brain Has Its Own “Autofill” Function For Speech

The world is an unpredictable place. But the brain has evolved a way to cope with the everyday uncertainties it encounters: rather than presenting us with many of those uncertainties, it resolves them into a realistic model of the world. The body’s central controller predicts every contingency, drawing on its stored database of past experiences to minimize the element of surprise. Take vision, for example: We rarely see objects in their entirety, but our brains fill in the gaps to make a best guess at what we are seeing, and these predictions are usually an accurate reflection of reality.

The same is true of hearing, and neuroscientists have now identified a brain mechanism, akin to predictive text, that helps us to anticipate what is coming next when we hear someone speaking. The findings, published this week in PLOS Biology, advance our understanding of how the brain processes speech. They also provide clues about how language evolved, and could even lead to more accurate ways of diagnosing a variety of neurological conditions.

The new study builds on earlier findings that monkeys and human infants can implicitly learn to recognize artificial grammar, or the rules by which sounds in a made-up language are related to one another. Neuroscientist Yukiko Kikuchi of Newcastle University in England and her colleagues played sequences of nonsense speech sounds to macaques and humans. Consistent with the earlier findings, Kikuchi and her team found that both species quickly learned the rules of the artificial grammar. After this initial learning period the researchers played more sound sequences, some of which violated the fabricated grammatical rules. They used microelectrodes to record responses from hundreds of individual neurons, as well as from large populations of neurons that process sound information. In this way they were able to compare the responses to both types of sequences and determine how similar the two species’ reactions were.

The scientists found that the brain activity in this situation was remarkably similar in monkeys and humans, and that it varied according to the sequence of sounds in the fake sentences. In both species, sentences that obeyed the grammar rules altered the firing of individual cells in the auditory cortex, so that the low- and high-frequency rhythmic activity produced by different populations of neurons became synchronized. Sentences that violated the grammatical rules initially produced a different response, but this was followed by the same synchronous pattern about half a second later, showing that the “correct” sequences learned earlier modulated the brain’s response to the ones that violated the rules. “We have discovered how individual neurons coordinate with neural populations to predict upcoming events,” Kikuchi says. “This occurs shortly before the neurons notice when an error has occurred, and the brain has to modify its predictions.” This research “could ultimately help people with problems predicting what will happen next,” she adds. “For example, we can now ask how these predictive responses might be malfunctioning in people suffering from disorders such as dyslexia, schizophrenia and attention-deficit hyperactivity disorder.”

The study is the first to directly compare humans’ and monkeys’ neural responses to complex sounds. As such, its results suggest both species use the same brain mechanisms to process such sounds, mechanisms that appear to have been conserved during the course of evolution. According to Sophie Scott, a cognitive neuroscientist who studies brain mechanisms of speech production at University College London and was not involved in the study, the new research provides important insights into the workings of the brain’s primary auditory cortex. “It’s showing that the cells are sensitive to the sequence of the sounds, or what you’re expecting to hear next,” she says. “They found that the signal is modulated by whether or not you’re getting an expected sequence, and this demonstrates very nicely that the auditory cortex is either learning about the sequences—or is receiving inputs about them from elsewhere and using that information.”

Scott adds that it might prove difficult to apply the same methods of analysis to human speech, however, because the made-up language used in the study bears little resemblance to real speech. “The words *led* and *let* differ by the sounds at the end. But when I say one or the other, I produce the sound at the start differently, because I’m anticipating the end of the word,” Scott explains. “We use this information in speech but it’s missing from artificial grammar, which has long gaps between the words that are needed to examine how the neural oscillations change. That means [the researchers’ method of analysis] might not work if they look at real speech.”


*Originally published at Scientific American on April 28, 2017. © 2017 Mo Costandi. All rights reserved.*
