A breakthrough medical technology has empowered a severely paralyzed woman to communicate through a digital avatar by converting her brain signals into speech and facial expressions. The system, developed by researchers at UCSF and Berkeley Engineering, reads the blueprint of instructions the brain uses to generate speech sounds, allowing the user to silently attempt words and phrases, which are then translated into speech and matched with a face that conveys emotions like surprise or sadness. This is the first time such an approach has been used in a person with paralysis, according to a new study published this week in Nature. Co-lead authors Gopala Anumanchipalli, assistant professor, and Kaylo Littlejohn, a Ph.D. student in the lab of UCSF's Dr. Edward Chang, both of UC Berkeley's Department of Electrical Engineering and Computer Sciences, spoke to Berkeley Engineering about this breakthrough.
In the study, Ann, a 47-year-old woman who suffered a brainstem stroke 18 years ago that left her unable to speak, agreed to have a paper-thin rectangle of 253 electrodes surgically implanted onto her cortex in areas critical for speech production. The electrodes intercepted the brain signals that, were it not for her stroke, would have controlled the muscles in her tongue, jaw, larynx, and face. A cable plugged into a port fixed to her head connected the electrodes to a bank of computers running an artificial intelligence-powered system. For weeks, she worked with her team to train the device's AI algorithms to recognize her unique brain signals for various speech sounds, repeating different phrases from a 1,024-word conversational vocabulary over and over as she silently attempted to say them.
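The study's actual model isn't reproduced here, but the training setup described above, multichannel electrode recordings paired with attempted phrases, maps naturally onto sequence learning. Below is a minimal, hypothetical sketch of that idea in PyTorch: a recurrent network turns 253-channel neural frames into per-frame phoneme probabilities and is trained with CTC loss. The architecture, phoneme inventory size, and choice of loss are illustrative assumptions, not the researchers' published model.

```python
# Illustrative sketch only: a generic neural "speech decoder" trained on
# multichannel electrode recordings. Sizes, architecture, and CTC loss are
# assumptions for demonstration, not the study's actual system.
import torch
import torch.nn as nn

N_CHANNELS = 253          # one feature stream per implanted electrode
N_PHONEMES = 40           # rough size of an English phoneme inventory
BLANK = N_PHONEMES        # CTC "blank" token index

class SpeechDecoder(nn.Module):
    """Maps a window of neural activity to per-frame phoneme probabilities."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, N_PHONEMES + 1)  # +1 for blank

    def forward(self, x):                     # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.head(h).log_softmax(-1)   # (batch, time, phonemes+1)

def train_step(model, optimizer, signals, targets, signal_lens, target_lens):
    """One CTC training step: align neural frames to attempted phonemes."""
    log_probs = model(signals).transpose(0, 1)  # CTC expects (time, batch, C)
    loss = nn.functional.ctc_loss(log_probs, targets,
                                  signal_lens, target_lens, blank=BLANK)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for real recordings:
model = SpeechDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
signals = torch.randn(8, 200, N_CHANNELS)        # 8 trials, 200 time frames
targets = torch.randint(0, N_PHONEMES, (8, 12))  # 12 phonemes per trial
print(train_step(model, opt, signals, targets,
                 torch.full((8,), 200), torch.full((8,), 12)))
```

Decoding phoneme-level units rather than whole words is one plausible reason a 1,024-word vocabulary is learnable from limited training sessions: the network only has to distinguish a few dozen sound classes, which recombine into any word.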
Once the algorithm learned to decode her signals, it could translate them into words, which software then converted into a synthesized voice and matched with animations of her mouth moving in various ways. The avatar, developed by a company called Speech Graphics, could even move its eyes and nose to show the emotion of her intended message.
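Downstream of the decoder, the article describes two further stages: turning decoded sounds into words and driving the avatar's face. A common way to sketch the first is a greedy CTC-style collapse of frame-by-frame predictions; the second can be caricatured as a lookup from phonemes to mouth shapes ("visemes"). Everything below, the viseme table and the function names, is a hypothetical stand-in; Speech Graphics' real animation interface is not shown here.

```python
# Hypothetical post-processing sketch: frame predictions -> phonemes -> mouth
# shapes. The viseme table and function names are illustrative, not the
# actual UCSF pipeline or the Speech Graphics API.

BLANK = "_"  # CTC-style "no sound" placeholder emitted between phonemes

def collapse(frames: list[str]) -> list[str]:
    """Greedy CTC collapse: merge consecutive repeats, then drop blanks."""
    phonemes, prev = [], None
    for label in frames:
        if label != prev and label != BLANK:
            phonemes.append(label)
        prev = label
    return phonemes

# Many phonemes share one mouth shape, so a small viseme table suffices.
VISEMES = {
    "B": "lips_closed", "P": "lips_closed", "M": "lips_closed",
    "F": "lip_bite", "V": "lip_bite",
    "AA": "jaw_open", "IY": "wide_spread", "UW": "lips_rounded",
}

def mouth_cues(phonemes: list[str]) -> list[str]:
    """One animation cue per phoneme for the avatar's mouth renderer."""
    return [VISEMES.get(p, "neutral") for p in phonemes]

# Toy run: the frame sequence for an attempted "BA" collapses to two cues.
frames = ["_", "B", "B", "_", "AA", "AA", "AA", "_"]
print(collapse(frames))              # ['B', 'AA']
print(mouth_cues(collapse(frames)))  # ['lips_closed', 'jaw_open']
```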
Unlike previous brain-computer interfaces, which have enabled people with conditions similar to Ann's, such as ALS, to communicate using eye tracking or subtle facial movements, the UCSF team's system decodes speech signals directly from the cortex. The team says this makes it more accurate and three times faster than previous methods.
The team notes that this achievement offers hope to individuals who have lost the ability to communicate due to conditions like stroke or ALS, also known as motor neuron disease. The researchers plan to launch a clinical trial next year in which patients with ALS, locked-in syndrome, and other conditions will participate. The BRAVO device was funded by the National Institutes of Health and private donors. UCSF is also collaborating with other groups on new technologies to help restore speech and movement in people with severe spinal cord injuries or neurological disorders.