
Source: UCSF
Researchers at the University of California, San Francisco have successfully developed a “speech neuroprosthesis” that directly converts brain signals intended for the vocal tract into text that appears on a screen, enabling severely paralyzed people to communicate in sentences.
The technology was developed in collaboration with the first participant in a clinical trial, and builds on years of effort by Edward Chang, MD, to enable paralyzed people who cannot speak on their own to communicate. The study appears in the July 15 issue of the New England Journal of Medicine.
“To our knowledge, this is the first successful demonstration of directly decoding full words from the brain activity of someone who is paralyzed and unable to speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author of the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”
Every year, thousands of people lose their ability to speak due to stroke, accident, or illness. With further development, the approach described in this study could one day enable these people to communicate fully.
Converting brain signals to speech
Previous work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches, typing out letters one at a time. Chang’s research differs fundamentally from these efforts: his team decodes signals intended to control the muscles of the vocal system for speaking words, rather than signals to move a hand or arm to type. Chang said this approach taps into the natural, fluid aspects of speech and promises faster, more organic communication.
“With speech, we normally communicate information at a very high rate, 150 to 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and cursor control are far slower and more laborious. “Going straight to words, as we are doing here, has great advantages because it is closer to how we normally speak.”
Chang’s progress over the past decade was made possible by patients at the UCSF Epilepsy Center who underwent neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom could speak normally, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these volunteers paved the way for the current study in people with paralysis.
Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with the vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in Chang’s laboratory and one of the lead authors of the new study, developed new methods for decoding those patterns in real time, along with statistical language models to improve accuracy.
But their success in decoding speech from participants who could talk did not guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex patterns of brain activity and intended speech,” Moses said. “That poses a major challenge when the participant can’t speak.”
In addition, the team didn’t know whether the brain signals controlling the vocal tract would remain intact in people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” Moses said.
First 50 Words
To investigate the technology’s potential in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.
The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from his brain activity using advanced computer algorithms, and that was sufficient to compose hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.
For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant had fully recovered, his team recorded 22 hours of neural activity in this brain region over 48 sessions spanning several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded signals from his speech cortex.
Converting Speech into Text
To translate the patterns of recorded neural activity into specific intended words, the other two lead authors of the study, Sean Metzger, MS, and Jesse Liu, BS, both doctoral students in Chang’s lab, used custom neural network models, a form of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
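The study’s actual decoders were custom neural networks trained on many repetitions of each word; the sketch below is a deliberately simpler stand-in, a nearest-centroid classifier over simulated activity windows, meant only to illustrate the core idea of matching a new trial’s activity against word-specific templates. The channel counts, vocabulary, and signal model are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each attempted word yields a window of neural
# activity (channels x time), here flattened into one feature vector.
N_CHANNELS, N_SAMPLES = 128, 50
VOCAB = ["water", "hello", "yes"]  # stand-in for the 50-word vocabulary

def simulate_trial(word_idx):
    """Simulate one trial: a word-specific 'signature' plus noise."""
    template = np.zeros(N_CHANNELS * N_SAMPLES)
    template[word_idx * 100:(word_idx + 1) * 100] = 1.0  # fake signature
    return template + rng.normal(0.0, 0.5, template.shape)

# Collect labeled training trials (many repetitions per word, as in the study).
X = np.array([simulate_trial(w) for w in range(len(VOCAB)) for _ in range(20)])
y = np.array([w for w in range(len(VOCAB)) for _ in range(20)])

# Average each word's trials into a template ("centroid").
centroids = np.array([X[y == w].mean(axis=0) for w in range(len(VOCAB))])

def decode(trial):
    """Return the vocabulary word whose template best matches the trial."""
    dists = np.linalg.norm(centroids - trial, axis=1)
    return VOCAB[int(np.argmin(dists))]

print(decode(simulate_trial(0)))
```

A real decoder must also detect when a speech attempt begins and cope with far noisier, less separable signals; here the simulated word signatures are well separated, so the template match is reliable.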
To test their method, the team first presented BRAVO1 with short sentences constructed from the 50-word vocabulary and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity on a screen, one by one.
Then, the team prompted him with questions such as “How are you doing today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I’m fine” and “No, I don’t want to drink.”
The team found that the system could decode words from brain activity at a rate of up to 18 words per minute, with up to 93 percent accuracy (a median of 75 percent). Contributing to its success was a language model Moses applied that implemented an “auto-correct” function, similar to those used in consumer texting and speech recognition software.
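The article does not specify how the language model was combined with the word classifier; one standard way to implement this kind of “auto-correct,” sketched below under that assumption, is to re-score the classifier’s per-word probabilities with word-transition (bigram) probabilities and pick the most likely sentence via Viterbi decoding. The vocabulary and every probability here are invented for illustration.

```python
import numpy as np

# Toy vocabulary and a hypothetical bigram language model: P(word_t | word_{t-1}).
VOCAB = ["i", "am", "not", "thirsty"]
bigram = np.array([
    [0.05, 0.80, 0.05, 0.10],  # after "i": "am" is likely
    [0.05, 0.05, 0.45, 0.45],  # after "am": "not" or "thirsty"
    [0.10, 0.10, 0.10, 0.70],  # after "not": "thirsty" is likely
    [0.25, 0.25, 0.25, 0.25],  # after "thirsty": uniform
])
start = np.array([0.7, 0.1, 0.1, 0.1])  # P(first word)

def viterbi(emissions):
    """Most likely word sequence given per-step classifier probabilities."""
    T = len(emissions)
    logp = np.log(start) + np.log(emissions[0])
    back = np.zeros((T, len(VOCAB)), dtype=int)
    for t in range(1, T):
        # Score every (previous word -> current word) transition.
        scores = logp[:, None] + np.log(bigram) + np.log(emissions[t])[None, :]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    seq = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):  # trace the best path backwards
        seq.append(int(back[t][seq[-1]]))
    return [VOCAB[w] for w in reversed(seq)]

# Simulated classifier output for a four-word attempt: the third word is
# ambiguous between "am" and "not"; the language model resolves it.
emissions = np.array([
    [0.90, 0.04, 0.03, 0.03],  # clearly "i"
    [0.05, 0.85, 0.05, 0.05],  # clearly "am"
    [0.05, 0.46, 0.44, 0.05],  # ambiguous: "am" vs "not"
    [0.05, 0.05, 0.05, 0.85],  # clearly "thirsty"
])
print(viterbi(emissions))  # -> ['i', 'am', 'not', 'thirsty']
```

At the ambiguous third step, the classifier slightly favors “am” (0.46 vs. 0.44), but the bigram model makes “am am” very unlikely, so the decoded sentence comes out as “i am not thirsty”: exactly the kind of correction a language model can contribute.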
Moses characterized the early trial results as a proof of principle. “We were thrilled to see accurate transcripts of many meaningful sentences,” he said. “We’ve shown that communication really can be facilitated this way, and that it has potential going forward.”
Chang and Moses said they will extend the trial to more participants with severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as to improve the rate of speech.
Both said that although the study involved a single participant and a limited vocabulary, those limitations do not diminish the accomplishment. “This is an important technological milestone for people who cannot communicate naturally,” Moses said. “It demonstrates the potential of this approach to give a voice to people with severe paralysis and speech loss.”