Speak Your Mind: Edward Chang Pioneers Device That Translates Brain Activity Into Speech


The Context: Patients who have suffered a stroke or who live with paralysis, ALS, Parkinson’s disease, and many other conditions are often left with impaired speaking and gesturing abilities, and few available devices or methods allow these patients to communicate quickly and effectively.

The Study: A new brain implant translates neural activity into fluent synthetic speech. The device was developed by NYSCF – Robertson Neuroscience Investigator and University of California, San Francisco Assistant Professor of Neurosurgery Dr. Edward Chang and his colleagues. Their study appears in Nature.

The Importance: By producing speech without requiring the user to move any muscles, this device could help a variety of patients regain the ability to communicate efficiently.


For many patients suffering from a disease or injury that has left them with impaired speaking or gesturing abilities, communicating can be an arduous task.

Take Professor Stephen Hawking, for example. ALS left him unable to speak, so he would form his sentences letter by letter, choosing characters from a virtual keyboard he manipulated via a sensor that picked up movements of his cheek. Once a sentence was fully typed out, a synthetic voice would read it aloud. While this method was functional, it was also time-consuming and inefficient.

Edward Chang, MD, and his colleagues have developed a new device that overcomes such obstacles by directly translating brain activity into synthetic speech that is more fluent than ever before. Their study is published in Nature.

“We showed, by decoding the brain activity guiding articulation, we could simulate speech that is more accurate and natural sounding than synthesized speech based on extracting sound representations from the brain,” said Dr. Chang, an Assistant Professor of Neurosurgery at the University of California, San Francisco and a NYSCF – Robertson Neuroscience Investigator, in a New York Times article.

Dr. Chang and his colleagues fitted their device in five epilepsy patients who were already having electrodes placed in their brains to locate the source of their seizures (a preparation measure for surgery). These patients volunteered to have the device implanted in their brains to read signals from their speech centers.

The device recorded the patients’ brain activity and transmitted it to a computer, which decoded the signals in two stages: first into the movements of a simulated vocal tract, and then into sound.

“Our plan was to make essentially a vocal tract, a computer one, that users can animate using their brain to get speech out,” said Gopala Anumanchipalli, a speech scientist who led the research, in an article from The San Francisco Chronicle. “We can plug in this virtual vocal tract for them and have it speak for them.”
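The two-stage idea described above can be sketched in a few lines of code. To be clear, this is only a loose illustration with made-up dimensions and random linear maps standing in for the study’s learned decoders (which were recurrent neural networks trained on each patient’s data), not the actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only: neural features per frame,
# vocal-tract (articulatory) features, and acoustic features.
N_NEURAL, N_ARTIC, N_ACOUSTIC = 256, 33, 32

# Stand-ins for the two learned decoders. In the real study these were
# trained neural networks; here they are fixed random linear maps.
W_artic = rng.normal(size=(N_ARTIC, N_NEURAL)) / np.sqrt(N_NEURAL)
W_acoustic = rng.normal(size=(N_ACOUSTIC, N_ARTIC)) / np.sqrt(N_ARTIC)

def decode(neural_frames):
    """Two-stage decode: brain activity -> vocal-tract movements -> acoustics."""
    articulation = neural_frames @ W_artic.T       # stage 1: animate the vocal tract
    acoustics = articulation @ W_acoustic.T        # stage 2: vocal tract -> sound features
    return acoustics

# One second of hypothetical neural recordings at 200 frames per second.
neural = rng.normal(size=(200, N_NEURAL))
acoustic_frames = decode(neural)
print(acoustic_frames.shape)  # -> (200, 32): one acoustic frame per neural frame
```

The point of the intermediate articulatory stage, as Dr. Chang notes above, is that decoding the movements that produce speech yields more natural-sounding results than decoding sound representations from the brain directly.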

The patients each read hundreds of sentences out loud (or mimed the words with their mouths) while the device turned the firing of brain cells into spoken words. The resulting synthesized sentences sounded remarkably natural and intelligible, with listeners reporting that they could understand the speech about 70% of the time.

Before the device reaches patients, there is still much work to be done. However, researchers such as Josh Chartier (a co-first author on the study) are optimistic about the ability of this technology to improve many lives down the road.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said in an article from UCSF. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Read more from:

NBC News

Scientific American

Journal Citation:

Speech synthesis from neural decoding of spoken sentences.

Anumanchipalli GK, Chartier J, Chang EF. Nature. April 2019. doi: 10.1038/s41586-019-1119-1.
