New Research Uncovers Neural Basis Behind Speech Production

News

A lot goes into the process of speaking. In addition to choosing our words, we must also coordinate movements of over 100 muscles in our lips, jaw, and tongue to turn our thoughts into the intricately sculpted sound waves of speech.

New research from NYSCF – Robertson Neuroscience Investigator Edward Chang, MD, neurosurgeon, Chief of Epilepsy and Pain Neurosurgery, and Associate Professor at the University of California, San Francisco, is uncovering how our brain activity allows us to organize our speech. This could help inform the creation of prosthetic devices that rapidly translate thought into synthetic speech to assist those who cannot speak.

The Study

Dr. Chang and his team assessed the brain regions involved in speech production using a method called electrocorticography (ECoG). In ECoG, an array of electrodes is positioned over the surface of the brain, allowing researchers to locate and track neuronal activity. For this study, five patients were fitted with ECoG electrodes over their ventral sensorimotor cortex (the brain’s speech production center) and were asked to read aloud a series of 460 sentences.

These sentences were constructed to provide a range of “coarticulation” possibilities. Coarticulation is the blending of phonemes (speech sounds) that leads to natural speech. Without it, our speech would sound choppy and incomprehensible.

While the team could not directly track the movements of the tongue, mouth, and larynx during speech, they fed audio recordings of the sentences into a deep learning algorithm that could approximate the distinct muscle movements made during articulation.

The Findings

The data showed that many different speech movements were encoded by neurons. In particular, four groups of neurons seemed to coordinate the movements of the lips, tongue, and throat to create the four main configurations of the vocal tract used in American English.

The researchers also found that as we speak, our neurons are very sensitive to the way we coarticulate words, positioning our lips and tongue to be ready for whatever syllable is up next. This suggests that our brains are attuned to produce fluid speech rather than a simple series of distinct phonemes.

The Possibility For A Prosthetic

The more we know about the neural activity taking place in the brain during speech, the better equipped we are to create prosthetic devices that can read that activity and create simulated speech. With insights into how our brains give rise to coarticulation, this simulated speech could sound more natural and fluid than ever.

For more information on this study, check out the paper in Neuron or this article from the University of California, San Francisco.
