Strokes, brain injuries, and neurological diseases rob thousands of people of their language skills. Scientists have now developed technology that can decipher words and phrases directly from brain activity in the cerebral cortex. This approach has the potential to advance existing methods of assisted communication and to improve the autonomy and quality of life of paralyzed patients who cannot speak.
Scientists at the University of California, San Francisco (UCSF) used this new technology in a man with severe paralysis to intercept the signals his brain sends toward his vocal tract and translate those signals directly into sentences on a screen.
The results of this novel neuroprosthetic technology, developed in work led by Edward Chang, MD, a UCSF neurosurgeon, appear in a New England Journal of Medicine article titled “Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria.” The clinical study (NCT03698149) was funded by the National Institutes of Health, philanthropy, and a sponsored research agreement with Facebook Reality Labs.
“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and unable to speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”
Work on communication neuroprosthetics has traditionally focused on restoring communication through spelling-based approaches that type out letters one at a time. Chang’s team takes a different approach, translating signals intended to control the muscles of the vocal system rather than signals intended to move an arm or hand. Decoding speech signals directly promises faster, more organic communication.
“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” said Chang. Spelling-based approaches using typing, writing, and cursor control are significantly slower and more laborious. “Going straight to words, as we do here, has great advantages because it is closer to how we normally speak.”
The technology and the current clinical study in patients with paralysis grew out of earlier work at the UCSF Epilepsy Center, where neurosurgical patients with normal speech who were being monitored for seizures agreed to have their brain recordings analyzed for speech-related activity.
Chang and his colleagues mapped the cortical activity patterns associated with the vocal-tract movements that produce each consonant and vowel. David Moses, PhD, a postdoctoral fellow in the Chang lab and lead author of the new study, developed methods for decoding those patterns in real time, incorporating statistical language models to improve the accuracy of translating cortical activity into complete words.
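The article does not include the decoder itself, but the general idea of combining a per-word classifier with a statistical language model can be sketched with a toy Viterbi decoder. Everything below — the three-word vocabulary, the probability tables, and the `viterbi_decode` helper — is hypothetical and purely illustrative:

```python
import numpy as np

# Hypothetical three-word vocabulary; the study used a 50-word set.
VOCAB = ["i", "am", "good"]

def viterbi_decode(emission_probs, bigram_probs):
    """Most likely word sequence, combining per-step classifier
    probabilities (rows of emission_probs) with bigram language-model
    priors (bigram_probs[prev, cur])."""
    T, V = emission_probs.shape
    log_em = np.log(emission_probs)
    log_lm = np.log(bigram_probs)
    score = log_em[0].copy()            # best log-prob of a path ending in each word
    back = np.zeros((T, V), dtype=int)  # backpointers for path recovery
    for t in range(1, T):
        # cand[i, j]: score of extending a path ending in word i with word j
        cand = score[:, None] + log_lm + log_em[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    seq = [int(score.argmax())]
    for t in range(T - 1, 0, -1):       # walk the backpointers
        seq.append(int(back[t, seq[-1]]))
    return [VOCAB[i] for i in reversed(seq)]

# The classifier output is ambiguous at the middle step ("am" vs. "good");
# the language-model prior (sequences like "i am ..." being likely) resolves it.
emissions = np.array([[0.8, 0.10, 0.10],
                      [0.1, 0.45, 0.45],
                      [0.1, 0.10, 0.80]])
bigrams = np.array([[0.1, 0.8, 0.1],   # transitions after "i"
                    [0.1, 0.1, 0.8],   # transitions after "am"
                    [0.4, 0.3, 0.3]])  # transitions after "good"
decoded = viterbi_decode(emissions, bigrams)  # -> ['i', 'am', 'good']
```

Note how the language model, not the classifier alone, determines the middle word: this is the sense in which a statistical language model can improve the accuracy of word-level decoding.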
However, applying the method to a participant whose vocal muscles are paralyzed posed a whole new set of challenges. “Our models had to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That is a great challenge when the participant cannot speak.”
It was also uncertain whether the brain still generated signals for the vocal tract after years of vocal-muscle paralysis. Chang teamed up with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study called BRAVO (Brain-Computer Interface Restoration of Arm and Voice) to investigate the technology’s potential in patients with paralysis.
The first participant in the study is a man in his late 30s who suffered a brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. With severely restricted head, neck, and limb movements, he communicates by using a pointer attached to a baseball cap to point at letters on a screen.
The volunteer, who asked to be referred to as “BRAVO1,” helped the researchers create a 50-word vocabulary that the team could identify in brain activity using advanced computer algorithms. Using words like “water,” “family,” and “good,” BRAVO1 could create hundreds of sentences expressing concepts relevant to his daily life. Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant recovered from surgery, Chang’s team recorded 22 hours of neural activity across 48 sessions spanning several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech motor cortex.
Co-lead authors Sean Metzger and Jessie Liu, bioengineering students in the Chang lab, used custom neural network models to translate the patterns of recorded neural activity into specific words. The networks learned to distinguish subtle changes in brain activity, both to detect attempts at speech and to identify which word BRAVO1 was trying to say.
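As a rough illustration of the word-classification stage — not the study's actual architecture — here is a minimal softmax classifier over synthetic "neural activity" feature vectors. The channel count, the Gaussian feature model, and the training setup are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for electrode-array features: each attempted word is
# modeled as activity clustered around a distinct mean pattern per channel.
# (Entirely illustrative; the real signals and models are far richer.)
N_CHANNELS = 16
WORDS = ["water", "family", "good"]  # illustrative subset of a 50-word set
means = rng.normal(0, 3, size=(len(WORDS), N_CHANNELS))

def sample_trials(n_per_word):
    """Draw noisy feature vectors for each word, with integer labels."""
    X, y = [], []
    for w, mu in enumerate(means):
        X.append(mu + rng.normal(0, 0.5, size=(n_per_word, N_CHANNELS)))
        y += [w] * n_per_word
    return np.vstack(X), np.array(y)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, epochs=200, lr=0.1):
    """Gradient descent on softmax cross-entropy for a linear classifier."""
    W = np.zeros((X.shape[1], len(WORDS)))
    onehot = np.eye(len(WORDS))[y]
    for _ in range(epochs):
        p = softmax(X @ W)
        W -= lr * X.T @ (p - onehot) / len(X)
    return W

X_train, y_train = sample_trials(30)
W = train(X_train, y_train)
X_test, y_test = sample_trials(10)
pred = (X_test @ W).argmax(axis=1)      # predicted word index per trial
accuracy = (pred == y_test).mean()
```

On this cleanly separated synthetic data a linear model suffices; the hard part in practice, as the quote above notes, is that real cortical patterns for attempted speech are far noisier and must be learned without audible ground truth.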
To test the approach, the researchers first asked BRAVO1 to attempt short sentences formed from the 50-word vocabulary. Then they asked him questions such as “How are you today?” and “Would you like some water?”, and BRAVO1’s attempted responses appeared on the screen: “I’m very good” and “No, I’m not thirsty.”
The team measured that the system could decode words from brain activity at rates of up to 18 words per minute with up to 93% accuracy. An “auto-correct” function, similar to those used in consumer speech recognition software, contributed to the system’s accuracy.
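The article doesn't describe how the auto-correct worked internally. One generic version of the idea — snapping a noisy decoded word to the nearest entry in a fixed vocabulary by edit distance — can be sketched as follows; the vocabulary and helper names are illustrative, not taken from the study:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via a rolling dynamic-programming row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, start=1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution / match
    return dp[-1]

def autocorrect(word: str, vocab: list) -> str:
    """Replace a noisy decoded word with its nearest vocabulary entry."""
    return min(vocab, key=lambda v: edit_distance(word, v))

vocab = ["water", "family", "good"]   # illustrative subset of a 50-word set
corrected = autocorrect("watre", vocab)  # -> 'water'
```

Because the decoder only ever has to choose among a small, fixed vocabulary, even a crude nearest-word rule like this can clean up many single-word errors.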
“We were thrilled to see the accurate decoding of a variety of meaningful sentences,” said Moses. “We have shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”
Chang and Moses will expand the study to include additional participants affected by severe paralysis and communication deficits. The team is currently working to increase the size of the vocabulary and the rate of speech.
“This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it shows the potential of this approach to give voice to people with severe paralysis and speech loss.”