Thoughts to Text: Edward Chang’s Implant Turns Brain Signals to Written Words With Up to 97% Accuracy

News

The Context: Patients who have lost the ability to speak or gesture due to stroke, paralysis, ALS, Parkinson's disease, and other conditions that affect the body's muscles urgently need new methods for effective communication.

The Study: NYSCF – Robertson Neuroscience Investigator Alumnus Edward Chang, MD, at the University of California, San Francisco, has developed an artificial intelligence-driven brain implant that can read signals from the brain's speech center and turn them into written text with up to 97% accuracy. The study appears in Nature Neuroscience.

The Importance: This new method could allow patients who have lost the ability to speak to communicate quickly and efficiently.


When a patient loses their ability to speak or gesture (which can result from stroke, paralysis, Parkinson’s, or other conditions affecting the muscles), they often turn to alternative means of communication. However, current options aren’t reliably accurate or fast, and patients are in dire need of methods that make communication as seamless as possible.

With the power of artificial intelligence, a new brain implant developed by NYSCF – Robertson Neuroscience Investigator Alumnus Edward Chang, MD, and colleagues at the University of California, San Francisco, could help patients seamlessly translate brain activity into written text. In initial testing of the technology in volunteer epilepsy patients (outlined in Nature Neuroscience), the device converted brain signals to text with up to 97% accuracy.

How it Works

The team worked with four epilepsy patients who already had electrodes placed in their brains to locate the source of their seizures (a preparatory step for surgery). These patients volunteered to have those electrodes also used to read signals from their speech centers.

Dr. Chang asked the patients to read sentences aloud. As they did so, the electrodes recorded activity in their speech centers and fed that information to neural networks (computer programs loosely modeled on the brain's networks of neurons), which learned the patterns of activity associated with the words being uttered.
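
For readers curious about what feeding recordings to a neural network looks like in practice, below is a minimal, illustrative sketch of an encoder-decoder ("machine translation"-style) model of the kind named in the paper's title, written in PyTorch. The channel count, layer sizes, vocabulary size, and random training data are assumptions for illustration only, not the study's actual architecture or parameters.

```python
# A minimal sketch of an encoder-decoder brain-to-text model.
# All sizes and the toy data below are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 64      # assumed number of recording electrodes
HIDDEN = 128         # assumed RNN hidden size
VOCAB = 250          # assumed size of the word vocabulary (plus special tokens)

class BrainToTextDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compresses a time series of multichannel cortical
        # activity into a single summary state.
        self.encoder = nn.GRU(N_CHANNELS, HIDDEN, batch_first=True)
        # Decoder: unrolls that state into a sequence of word predictions.
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.to_vocab = nn.Linear(HIDDEN, VOCAB)

    def forward(self, neural_signals, word_tokens):
        # neural_signals: (batch, time_steps, N_CHANNELS)
        # word_tokens:    (batch, sentence_length) integer word IDs
        _, state = self.encoder(neural_signals)
        # Teacher forcing, simplified: a full implementation would shift
        # the decoder inputs relative to the targets.
        decoded, _ = self.decoder(self.embed(word_tokens), state)
        return self.to_vocab(decoded)  # (batch, sentence_length, VOCAB) logits

# Toy training step on random data, just to show the data flow.
model = BrainToTextDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

signals = torch.randn(8, 200, N_CHANNELS)        # 8 trials, 200 time steps each
sentences = torch.randint(0, VOCAB, (8, 10))     # 8 sentences, 10 words each

optimizer.zero_grad()
logits = model(signals, sentences)
loss = loss_fn(logits.reshape(-1, VOCAB), sentences.reshape(-1))
loss.backward()
optimizer.step()
```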

Then, the participants were asked to think of 30-40 words that had previously appeared in the spoken sentences, 'saying' them in their heads rather than out loud.

The technology then analyzed their brain activity and predicted which of the previously recorded words the participants were thinking of. For one patient, the technology performed with 97% accuracy, showing that the method could translate brain activity into written words without the patient needing to speak aloud.
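
Continuing the illustrative sketch above, the snippet below shows how such a model would produce a word sequence from brain activity alone at prediction time, using simple greedy decoding. The start-of-sentence token and the decoding loop are assumptions for illustration, not the study's actual procedure.

```python
# Prediction time: the model sees only brain activity, and the decoder emits
# one word at a time, each prediction feeding the next step (greedy decoding).
@torch.no_grad()
def decode_to_words(model, neural_signals, max_words=10, sos_token=0):
    _, state = model.encoder(neural_signals)
    token = torch.full((neural_signals.size(0), 1), sos_token, dtype=torch.long)
    words = []
    for _ in range(max_words):
        out, state = model.decoder(model.embed(token), state)
        token = model.to_vocab(out).argmax(dim=-1)   # most likely next word
        words.append(token)
    return torch.cat(words, dim=1)                   # (batch, max_words) word IDs

predicted = decode_to_words(model, torch.randn(1, 200, N_CHANNELS))
```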

While the technology will still need to undergo further testing before it can be used by patients who have lost their speaking abilities, this study demonstrates the power of artificial intelligence for decoding signals from the brain and the potential of such methods for restoring efficient communication in patients.

Learn more about Dr. Chang’s work developing brain-controlled communication methods such as artificial vocal tracts.

Journal Article: 

Machine translation of cortical activity to text with an encoder–decoder framework
Joseph G. Makin, David A. Moses & Edward F. Chang. Nature Neuroscience. 2020. DOI: https://doi.org/10.1038/s41593-020-0608-8

Photo Credit: Noah Berger, University of California San Francisco
